## Virality Pro: 95% reduced content production costs, 2.5x rate of going viral, 4 high-ticket clients

We're already helping companies go viral on Instagram and TikTok, slash the need for large ad spend, and propel unparalleled growth at a 20x lower price.

## The problem: growing a company is **HARD and EXPENSIVE**

Here are the current ways companies grow reliably:

1. **Facebook Ads / Google Ads**: expensive paid ads. Producing ads often costs $2K-$10K+, customer acquisition cost on Facebook can be $100+, and clicks can run as high as $10 on Google Ads. Simply untenable for lower-ticket products.
2. **Organic social media**: slow growth. It takes a long time and can be unreliable; some brands just cannot grow. Content production, posting, and effective social media management are expensive, engagement rates stay low even at 100K+ followers, and it is hard to stay consistent.

## Solution: going viral with Virality Pro, complete done-for-you viral marketing

Brands and startups need the potential for explosive growth without spending $5K+ on marketing agencies and $20K+ on ad spend, or getting a headache hiring and managing middle management. We take care of everything: you give us your company name and product, and we manage everything from there.

The solution: **viral social media content at scale**. Using our AI-assisted system, we produce content that follows the form of proven viral videos at scale, enabling brands to post **consistently** and grow **rapidly**.

## Other brands: spend $5K to produce an ad and $20K on ad spend

They have extremely thin margins and unprofitable growth.

## With Virality Pro: $30-50 per video, $0 ad spend, produced reliably for fast viral growth

Professional marketers and marketing agencies cost hundreds of thousands of dollars per year. With Virality Pro, we can churn out **400% more content for 5 times less.** This content can easily get 100,000+ views on TikTok and Instagram for under $1,000, while the same level of engagement would cost 20x more traditionally.

## Startups, profitable companies, and brands use Virality Pro to grow

Our viral videos drive growth for early- to medium-sized startups and companies, providing them a lifeline to expand rapidly.

## 4 clients use Virality Pro and are working with us for growth

1. **Minute Land** is looking to use Virality Pro to consistently produce ads, scaling to **$400K+** through viral videos with $0 in ad spend.
2. **Ivy Roots Consulting** is looking to use Virality Pro to scale their college consulting business profitably, **without the need for VC money**. Instead of a $100 CAC through paid ads, the cost with Virality Pro is close to zero at scale.
3. **Manifold** is looking to use Virality Pro to go viral on social media over and over again to promote their new products without needing to hire a marketing department.
4. **Yoodli** is looking to use Virality Pro to manage rapid social media growth on TikTok and Instagram without spending limited funding on middle managers and content producers for headache-inducing media projects.

## Our team: founders with multiple exits, Stanford CS + Math, University of Cambridge engineers

Our team consists of the best of the best, including Stanford CS/Math experts with Jane Street experience, founders with multiple large-scale exits, top Singaporean engineers who have earned hundreds of thousands of dollars through past ventures, and a Cambridge student selected as one of the top dozen computer scientists in the UK.
## Business Model

Our pricing system charges $1,900 per month for our base plan (5 videos per week), with our highest-value plan at $9,500 per month (8 videos per day). With our projected goal of 100 customers within the next 6 months, we can make $400K in MRR with the average client paying $4K per month.

## How our system works

Our technology is split into two sectors: semi-automated production and fully-automated production. Currently, our main offer is semi-automated production, with the fully-automated content creation sequence still in development.

## Semi-Automated AI-Powered Production Technology

We utilize a series of templates built around prompt engineering and fine-tuned models to create a large variety of content for companies around a single format. We then scale the number of templates available so that we can produce hundreds or thousands of videos for a single brand off of many dozens of formats, each with the potential to go viral (having gone viral in the past).

## Creating the scripts and audio

Our template system uses AI to produce the scripts and the on-screen text, which are then fed into a database system. There, a marketing expert verifies these scripts and adjusts them to improve their viral potential. For each template, a series of separate audio tracks are given as options, and scripts are built around them.

## Sourcing Footage

For each client, we source a large database of footage drawn from filmed clips, AI-generated video, motion-graphic images, and long YouTube videos that we break down with software into small clips, each representing a shot.

## Text to Speech

We use realistic-sounding AI voices and default AI voices to power the audio. This has proven to work in the past and can be produced consistently at scale.

## Stitching it all together

Using our system, we then compile the footage, text script, and audio into one streamlined sequence, after which it can be reviewed and posted onto social media.

## All done within 5 to 15 minutes per video

Instead of taking hours, we can get it done in **5 to 15 minutes**, and we are continuing to shave that down.

## Fully Automated System

Our fully automated system is a work in progress that removes the need for human interaction and fully automates the video production, text creation, and other components, stitched together without anyone needing to be involved in the process.

## Building the Fully Automated AI System

Our project was built using Reflex for web development, OpenAI for language model integration, and DALL-E for image generation. Utilizing prompt engineering alongside FFmpeg, we synthesized relevant images to enhance our business narrative.

## Challenges Faced

Challenges encountered included slow Wi-Fi, the steep learning curve of prompt engineering, and adapting to Reflex, diverging from conventional frameworks like React or Next.js for web application development.

## Future of Virality Pro

We are continuing to innovate on our fully-automated production system and to create further templates for our semi-automated systems. We hope to reduce production costs on our backend and increase growth.

## Projections

We project to scale to 100 clients in 6 months to produce $400K in Monthly Recurring Revenue, and within a year, to scale to 500 clients for $1.5M in MRR.
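The "Stitching it all together" step is described only in prose, so here is a minimal, hypothetical sketch of what that compile step could look like with FFmpeg driven from Python. The file names, caption, and drawtext styling are placeholders, and it assumes the sourced clips already share a codec and resolution.

```python
# Hypothetical sketch of the stitching step: concatenate sourced clips, lay the
# TTS voiceover on top, and burn in the on-screen text with FFmpeg's drawtext.
# All file names and the caption are placeholders, not Virality Pro's real assets.
import subprocess
import tempfile

def stitch_video(clips, voiceover, caption, output="final.mp4"):
    # Write a list file for FFmpeg's concat demuxer (assumes clips share codec/resolution).
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        for clip in clips:
            f.write(f"file '{clip}'\n")
        concat_list = f.name

    subprocess.run([
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", concat_list,   # stitched footage
        "-i", voiceover,                                    # AI voiceover track
        "-vf", f"drawtext=text='{caption}':x=(w-text_w)/2:y=80:fontsize=48:fontcolor=white",
        "-map", "0:v", "-map", "1:a",
        "-c:v", "libx264", "-c:a", "aac", "-shortest",
        output,
    ], check=True)

stitch_video(["hook.mp4", "broll_1.mp4", "broll_2.mp4"], "voiceover.mp3", "How we grew with zero ad spend")
```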
## Inspiration

Nowadays, paying for knowledge has become more widely accepted, and people are more willing to pay for truly insightful, cutting-edge, and well-structured knowledge and curricula. However, current centralized video content platforms (like YouTube, Udemy, etc.) take too much of the profit from content producers (research has shown that content creators usually receive only about 15% of the value their content creates), and the value generated by a video is not distributed in a timely manner. To tackle this unfair value distribution, we built the decentralized platform EDU.IO, where video content is backed by a digital asset as an NFT (copyright protection!) and fractionalized into tokens. It creates direct connections between content creators and viewers/fans (no middlemen anymore!), maximizing the value of the content creators make.

## What it does

EDU.IO is a decentralized educational video streaming platform & fractionalized NFT exchange that empowers the creator economy and redefines knowledge value distribution via smart contracts.

* As an educational hub, EDU.IO is a decentralized platform of high-quality educational videos on disruptive innovations and hot topics like the metaverse, 5G, IoT, etc.
* As a booster of the creator economy, once a creator uploads a video (or course series), it is minted as an NFT (with copyright protection) and fractionalized into multiple tokens. Our platform conducts a mini-IPO for each piece of content they produce: an auction for the fractionalized NFTs. The value of each video token is determined by the number of views over a certain time interval, and token owners (who can be creators as well as viewers/fans/investors) can promote the content they own to increase its value, and trade these tokens to earn money or make other investments (more liquidity!).
* At the end of each week, the value generated by each video NFT is distributed via smart contracts to that video's copyright / fractionalized NFT owners.

Overall, we're hoping to build an ecosystem with more engagement between viewers and content creators. Our three main target users are:

1. Instructors / content creators: their video content gets copyright protection via NFTs, and they receive fairer value distribution and more liquidity compared to large centralized platforms.
2. Fans / content viewers: they can directly interact with and support content creators, and the fee is sent directly to the copyright owners via smart contract.
3. Investors: a lower barrier to investment, since anyone can own just a fragment of a piece of content. People can also bid and trade on a secondary market.

## How we built it

* Frontend in HTML, CSS, SCSS, Less, React.JS
* Backend in Express.JS, Node.JS
* ELUV.IO for minting video NFTs (ETH-based) and for streaming videos quickly with high quality & low latency
* CockroachDB (a distributed SQL DB) for storing structured user information (name, email, account, password, transactions, balance, etc.)
* IPFS & Filecoin (distributed protocol & data storage) for storing video/course previews (decentralization & anti-censorship)

## Challenges we ran into

* Transition from design to code
* CockroachDB has an extensive & complicated setup, which requires other extensions and stacks (like Docker) during the setup phase; this caused a lot of problems locally on different computers.
* IPFS initially had setup errors because we had no access to the given ports, so we modified the original access files to use different ports.
* There was an error in Eluv.io's documentation, but the Eluv.io mentor was very supportive :)
* The merging process was difficult when we attempted to put all the features (frontend, IPFS + Filecoin, CockroachDB, Eluv.io) into one full-stack project, since we had worked separately and locally.
* Sometimes we found the documentation hard to read and understand: for many of the problems we encountered, the docs/forums say DO this rather than RUN this, so the guidance was not specific enough and we had to spend a lot of extra time researching & debugging. Also, since not many people are familiar with the API, it was hard to find the exact issues we faced. Of course, the staff were very helpful and solved a lot of problems for us :)

## Accomplishments that we're proud of

* Our idea! Creative, unique, revolutionary. DeFi + Education + Creator Economy
* Learned new technologies like IPFS, Filecoin, Eluv.io, and CockroachDB in one day
* Successful integration of each member's work into one big full-stack project

## What we learned

* More in-depth knowledge of cryptocurrency, IPFS, and NFTs
* Different APIs and their functionalities (strengths and weaknesses)
* How to combine different subparts with different functionalities into a single application
* How to communicate efficiently with team members whenever there is a misunderstanding or difference of opinion
* Making sure everyone knows what is going on within the project through active communication, so that when we detect a potential problem we solve it right away instead of waiting until it produces more problems
* Different hashing methods that are currently popular in the crypto world, such as multihash with CIDs and IPFS's own hashing system, all beyond our previous knowledge of SHA-256
* The power of NFT fragmentation; we believe it has great potential in the future
* The concept of a decentralized database, which is the direct opposite of the centralized data-bank structure most of the world is using

## What's next for EDU.IO

* Implement NFT fragmentation (fractionalized tokens)
* Improve the trading and secondary market by adding more features, like more graphs
* Smart contract development in Solidity for value distribution based on the fractionalized tokens people own
* Formulation of more complete rules and regulations: the current trading prices of fractionalized tokens are based on auction transactions, and eventually we hope it can become a free secondary market (just like the stock market)
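The Solidity contracts for value distribution are still listed under "What's next", so the sketch below only illustrates, in plain Python, the weekly pro-rata payout rule described in "What it does". The wallet addresses, token counts, and revenue figure are invented for the example.

```python
# Illustrative sketch (plain Python, not the planned Solidity contract) of EDU.IO's
# weekly payout rule: value attributed to a video is split pro rata across the
# holders of its fractionalized tokens.
def distribute_weekly_value(week_revenue: float, holdings: dict[str, int]) -> dict[str, float]:
    """holdings maps a wallet address to the number of fractional tokens it holds."""
    total_tokens = sum(holdings.values())
    return {
        wallet: week_revenue * tokens / total_tokens
        for wallet, tokens in holdings.items()
    }

# Example: a creator kept 600 of 1,000 tokens and two fans/investors bought the rest.
payouts = distribute_weekly_value(
    week_revenue=250.0,
    holdings={"0xCreator": 600, "0xFanA": 250, "0xInvestorB": 150},
)
print(payouts)  # {'0xCreator': 150.0, '0xFanA': 62.5, '0xInvestorB': 37.5}
```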
## Inspiration As students, we get into daily routines. Here are two of them: walking from our classes and waking up to a Cal WarnMe email from a tragedy occurring right outside our building. This is not the way we envisioned our new home, and we, therefore, sought a solution to mitigate this topical problem. We were inspired by the Google Maps interface and Waze crowdsourced traffic-volume detection to create an app that prioritizes both the time and safety of our peers. ## What it does SafePaths is a route-generation application that optimizes the time and safety of its users. Route Generation: No matter where you are, you can get anywhere through Berkeley with your close friend SafePaths by your side, covering you in many situations. Late night frat party? SafePaths will take you home, sending you through the most time-efficient paths with the fewest crimes during the night. Lost in downtown Berkeley? SafePaths knows where you are and also knows how you can get home without going through an unknown area that could put you at risk. Path Prioritization: With our app, we can not only tell you how to get home but also the safest way to do it. Every crime reported is ranked by severity to help assess the level of risk. High-Crime Area Reports: Accidentally entering a dangerous area, no problem! We will automatically alert you to the risk and promptly redirect you to home sweet home. ## How we built it Our Stack: We used Flutter and Dart for our front end, Node.js, Express.js, AWS Lambda, and CockroachDB for our back end, and Python, BeautifulSoup, Cohere.AI, and Flask for our AI/ML and data processing pipeline. Front End: We used Flutter for the front end. We incorporated Google Maps API to create the routes and the map needed for functionality. There is also a loading screen with our company logo. Back End: We first created a Node.js and Express.js application to test our CockroachDB locally. Once our read and write capabilities were finished, we deployed our local server on AWS Lambda with serverless and tested with Postman. To transfer our server to AWS, we used AWS S3 to store our files and created three lambda functions for each function (write, read, and remove). We then use our write API to transfer the data gathered from the AI/ML & data processing pipeline to CockroachDB. The front end gains access to the geolocation information in the database through the read API, and the users crowdsource crime data using the write API. AI/ML & Data Processing Pipeline: We first scrape Berkeley Police Department’s website using BeautifulSoup for a list of links that each correspond to a pdf with crime information each day. We go through each scraped link’s corresponding pdf and save tabulated tables with crime and location information. After preprocessing the crime and location information, we use Cohere.AI’s classification pipeline to train a supervised NLP model on hand-labeled data indicating the severity of the crime committed. We use Google Maps API to return longitude and latitude pairs for each crime location and push it via a POST request to CockroachDB using Flask. ## Challenges we ran into Transferring our local server to AWS took a considerable amount of time. Due to the tricky nature of async functions and unfamiliarity with Node.js syntax, deploying to AWS took many attempts. We eventually were able to solve our numerous bugs with help from mentors (shoutout to Krishna). We also faced challenges with scraping regularly-updated crime data. 
Although there was an online platform that had information in a tabular format, it was protected from web scraping by a backend layer. Thus, we were forced to execute a significantly more difficult scraping task than expected. We also faced initial challenges with using Cohere.AI, but solved them by using larger language models and hand-labeling more training data. ## Accomplishments that we're proud of We are proud of venturing out of our comfort zone and learning new technologies such as Node.js, CockroachDB, Flask, and Cohere.AI. We chose a problem that was a significant technical challenge and are proud to have individually executed our parts in the full-stack that has allowed us to put together a fully functional application. ## What we learned In this project, we learned how to properly make plans and split our workload based on skill set. Completing this app individually would have been a challenge for all of us. With two members familiar with AI, one member familiar with back-end and servers, and the other a front-end wizard, we became one cohesive unit, learning each portion of the full stack from each other. ## What's next for SafePaths Initial Deployment: Our next step for SafePaths is incorporating a crowdsourcing-based “report-crime” function and testing it on the Berkeley campus. After we field-test the product, we plan on launching it on the AppStore Google Play, expanding our scope to not just Berkeley but country-wide. Additional Features: We understand that the app is in its beta form, and here are some ideas we plan on implementing (we would love to hear feedback too!) An emergency call, so if in danger, a simple button could be clicked Location sharing with friends, so you know the paths they walk Specific icons per reported crime (fire for arson or money for robbery) Crime prediction in areas based on time of year and day
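To make the SafePaths pipeline above more concrete, here is a rough sketch of the scrape-then-classify step in Python. The BPD URL, the Cohere model ID, and the API endpoint are placeholders, the PDF parsing is omitted, and exact Cohere SDK attribute names can differ between versions.

```python
# Rough sketch of the SafePaths data pipeline: scrape the daily police-log PDFs,
# classify crime severity with a Cohere classify model, and POST the results to
# the backend's write API. URLs and the model ID are placeholders.
import cohere
import requests
from bs4 import BeautifulSoup

BPD_LOG_URL = "https://example.org/bpd-daily-logs"   # placeholder for the scraped page
co = cohere.Client("YOUR_API_KEY")

# 1. Collect links to the daily PDF bulletins.
soup = BeautifulSoup(requests.get(BPD_LOG_URL).text, "html.parser")
pdf_links = [a["href"] for a in soup.find_all("a", href=True) if a["href"].endswith(".pdf")]

# 2. After extracting crime descriptions from each PDF (omitted here), classify
#    severity with the hand-label-trained model; attribute names vary by SDK version.
descriptions = ["ROBBERY - STRONG ARM", "VANDALISM UNDER $400"]
response = co.classify(model="your-finetuned-model-id", inputs=descriptions)
severities = [c.prediction for c in response.classifications]

# 3. Push each (description, severity) record to the write API for CockroachDB.
for desc, severity in zip(descriptions, severities):
    requests.post("https://example.org/api/crimes", json={"description": desc, "severity": severity})
```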
winning
## Inspiration

As part of my school's creative writing club, we sometimes play writing games to help our creativity flow, and CollaboWrite lets us participate in one of my favourites. Starting with a prompt, each user gets to write a single sentence of a story. When they've finished a sentence, control gets passed on to the next user, who only gets to see the previous sentence written in the story. By the end, a story is created as a result of the fun and collaborative efforts of everybody in the room.

## How we built it

We used a Node.js backend to register users, track active rooms, and save stories. WebSockets allowed us to maintain active connections between each of the clients so that actions taken during the collaborative effort would feel more instantaneous. We both had experience with iOS development, so we decided to begin with iOS and leave Android until the end if we had time. Unfortunately, we didn't have time!

## Challenges we ran into

The workload ended up being too much for us. We finished a number of the features separately on the front end and back end, but didn't have time to connect the two.

## Accomplishments that we're proud of

Getting WebSockets working and learning how to effectively maintain a number of open connections.

## What we learned

Don't bite off more than you can chew, and try to pick an idea to work on EARLY.
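CollaboWrite's backend is Node.js, so the following is not the team's code; it is only a minimal Python sketch of the turn-passing rule the write-up describes, where each player adds one sentence and only ever sees the sentence written immediately before theirs.

```python
# Minimal sketch of CollaboWrite's core rule (illustrative Python, not the Node.js backend).
class StoryRoom:
    def __init__(self, prompt: str, players: list[str]):
        self.sentences = [prompt]
        self.players = players
        self.turn = 0

    def visible_context(self) -> str:
        # A player only sees the most recent sentence, never the whole story.
        return self.sentences[-1]

    def submit(self, player: str, sentence: str) -> None:
        assert player == self.players[self.turn % len(self.players)], "not your turn"
        self.sentences.append(sentence)
        self.turn += 1  # control passes to the next player

    def full_story(self) -> str:
        return " ".join(self.sentences)

room = StoryRoom("The lighthouse went dark at noon.", ["ava", "ben"])
print(room.visible_context())                      # only the latest sentence is shown
room.submit("ava", "Ava rowed toward it anyway.")
room.submit("ben", "Halfway there, her oar struck something metal.")
print(room.full_story())
```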
## Inspiration

The inspiration behind FightBite originated from the Brave Wilderness YouTube channel, particularly the [Bites and Stings series](https://www.youtube.com/watch?v=SMJHJ0i86ts&list=PLbfmhGxamZ83v9OKDa4eV_IlY2W-PLK6X). When watching the series, we were terrified by the amount of destruction that could be caused by such minuscule beings. We were also moved by the overwhelming 725,000 yearly deaths from mosquito-borne diseases. As a group, we decided to think of a solution, and this solution eventually became FightBite.

## What it does

FightBite is a modular and interactive phone application that allows users to quickly and effectively take a picture of a bug bite (or choose an existing picture) and get instant feedback on the type of bug bite, whether it be a mosquito, tick, or even bed-bug bite. In addition to detecting the bug type, FightBite also pulls up the relevant medical information for bite treatment. To use FightBite, simply tap the start button, then choose to either take a picture with the phone's camera or pick the bug bite image directly from the gallery. Once an image has been selected, the user has the option of saving the image for future reference, discarding it and selecting a new image, or, if they are satisfied, scanning the picture with our own AI for bite analysis.

## How we built it

As FightBite is a phone-based application, we decided to use React Native for our front-end design. Since we intend for FightBite to work on both iOS and Android, React Native allows us to write a single codebase that renders to native platforms, saving us the problem of creating two separate applications. Our neural network was created and trained with PyTorch and was built on top of the DenseNet121 model. We then used transfer learning to adapt this pretrained network to our own problem. Finally, we created an API endpoint with Flask and deployed it with Heroku.

## Challenges we ran into

Over the past 48 hours, we faced various issues, mainly relating to the overall setup of React Native and the many modules we implemented. As this was our first time creating a phone application using React Native, we first had to take time to learn the documentation from scratch. Furthermore, we ran into issues with react-native-camera being deprecated due to lack of maintenance, so we were forced to use expo-camera instead, causing many delays. In addition to front-end issues, we did not have access to a suitable pre-existing dataset, so the majority of our data was compiled manually. This limited the size of our dataset, which hurt the training of the model greatly.

## Accomplishments that we're proud of

After completing HackThe6ix, our team is extremely proud that we managed to create our first ever functioning full-stack mobile application in less than 48 hours. Although our team had some experience with web development technologies such as HTML, CSS, and JavaScript, we had never worked with React Native before, so implementing a new framework to create a fully functional phone application is a huge accomplishment for our learning. Furthermore, this is our first ever "real" machine learning project with PyTorch, and we are extremely proud that we were able to build and deploy a machine learning model within 36 hours.

## What we learned

We learned many new skills by participating in HackThe6ix.
Mainly, we learned more about phone app development through React Native and developed the ability to create an aesthetically pleasing application for both iPhone and Android devices. In addition, we learned a lot about preparing and collecting data, along with training and evaluating a machine learning model. We also further enhanced our capabilities with Flask, as our team had very little experience with the framework coming into Hack The 6ix.

## What's next for FightBite

Our first priority for FightBite is to expand our dataset, with more types of bug bites and more images per type, in order to quickly and accurately diagnose bites for faster treatment and recovery. In particular, we plan to add some of the more deadly variants of bug bites (like black widow and brown recluse spider bites) in order to save as many lives as possible before it becomes too late. We also hope to add more depth to our bite analysis, like detecting potential diseases (for example, detecting skin lesions in tick bites as in [this paper](https://arxiv.org/abs/2011.11459?context=cs.CV)).
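The write-up names the ingredients (PyTorch, DenseNet121, transfer learning) without showing them, so below is a condensed sketch of that setup. The class count, learning rate, and frozen-backbone choice are illustrative assumptions rather than the team's actual values, and newer torchvision versions prefer the `weights=` argument over `pretrained=True`.

```python
# Sketch of transfer learning on DenseNet121 for bite classification (assumed details).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g. mosquito, tick, bed bug (assumed class list)

model = models.densenet121(pretrained=True)
for param in model.parameters():            # freeze the pretrained feature extractor
    param.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)  # new head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```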
## Inspiration

Why are there so few women in STEM? How can we promote female empowerment within STEM? As two of us are women, we have personally faced misogyny within the tech community: from being told women aren't fit for certain responsibilities to having our accomplishments undermined. We wanted to diminish gender inequality in STEM, and we chose to focus on one of the roots of this problem: misogynistic or sexist comments in both professional and social settings.

## What it does

Thus, we built Miso, an AI that detects sexism and toxicity in text messages and warns the sender about their behaviour. It is then up to them whether they still want to send the message or not, but messages filtered as at least 60% likely to be sexist or 80% likely to be toxic are escalated to the admins of the communication channel. Miso can be integrated into Discord as a bot or used directly on our website. Discord is increasingly being used as a professional workspace, especially by new startups, and there the consequences of toxic behaviour are even more serious. Every employee in the company will have a profile which tracks every time one of their messages is flagged as sexist or toxic. This tool helps HR determine inappropriate behaviour, measure its severity, and keep records to help them mediate workplace conflicts. We hope to reduce misogyny within tech communities so that women feel more empowered and encouraged to join STEM!

## How we built it

We made our own custom machine learning model on Cohere: a classify model which categorises text inputs into the labels True (misogynistic) and False (non-misogynistic). To train it, we combined various datasets of comments from social media sites including Reddit and Twitter, and organised them into two columns: the message text and its associated label. In addition to our custom model, we also implemented Cohere's toxicity detection API into our program and checked messages against both models.

Next, we developed a Discord bot so that users can integrate this AI into their Discord servers. We built it using Python, the Discord API, and JSON. If someone sends a message which is determined to be 60% likely to be sexist or 80% likely to be toxic, the bot deletes the message and sends a warning into the channel.

To log the message history of the server, we used Estuary. Whenever a message is sent in the Discord server, a new text file is created and uploaded to Estuary to be backed up. This text file contains the message ID, content, and author. The file is uploaded by calling the Estuary API and making an upload request. Once the file uploads, the content ID is saved and the file on our end is deleted. Estuary allows us to save logs of problematic messages without the burden of storage on our end.

Our next problem was connecting our machine learning models to our front-end website & Discord. The Discord side was easy enough, as both the machine learning model and the Discord bot were written in Python: we imported the model into our Discord bot and ran every message through both machine learning models to determine whether it was problematic or not. The website was more difficult because it was made in React.JS, so we needed an external app to connect the front-end display and back-end models. We created an API endpoint for our model using FastAPI and served it with Uvicorn. This allowed us to use the Fetch API to fetch the model's prediction for an inputted sample message.
As for our frontend, we developed a website to display Miso using React.JS, JavaScript, HTML/CSS, and Figma.

## Challenges we ran into

Some challenges we ran into were figuring out how to connect our frontend to our backend, incorporating Estuary, training a custom machine learning model in Cohere, and handling certain libraries, such as JSON, for the first time.

## Accomplishments that we're proud of

We’re proud of creating our own custom machine learning model!

## What we learned

We learned a lot about machine learning, and about what Estuary is and how to use it!

## What's next for Miso

We hope to be able to incorporate it into even more communication platforms such as Slack or Microsoft Teams.
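As a rough illustration of the moderation rule described above, here is a hedged sketch of the Discord side using discord.py. The `sexism_score` and `toxicity_score` functions are stand-ins for calls to the custom Cohere classify model and the toxicity model; only the 60%/80% thresholds come from the write-up.

```python
# Sketch of Miso's Discord moderation rule; the scoring functions are placeholders.
import discord

SEXISM_THRESHOLD, TOXICITY_THRESHOLD = 0.60, 0.80
client = discord.Client(intents=discord.Intents.all())

def sexism_score(text: str) -> float:
    return 0.0  # placeholder for the custom Cohere classify model

def toxicity_score(text: str) -> float:
    return 0.0  # placeholder for the Cohere toxicity detection model

@client.event
async def on_message(message: discord.Message):
    if message.author == client.user:
        return
    if (sexism_score(message.content) >= SEXISM_THRESHOLD
            or toxicity_score(message.content) >= TOXICITY_THRESHOLD):
        await message.delete()  # remove the flagged message
        await message.channel.send(
            f"{message.author.mention}, that message was flagged as sexist or toxic."
        )
        # ...here the message would also be logged to Estuary and the author's profile

# client.run("YOUR_BOT_TOKEN")
```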
losing
## Inspiration Music is a universal language, and we recognized Spotify wrapped to be one of the most anticipated times of the year. Realizing that people have an interest in learning about their own music taste, we created ***verses*** to not only allow people to quiz themselves on their musical interests, but also quiz their friends to see who knows them best. ## What it does A quiz that challenges you to answer questions about your Spotify listening habits, allowing you to share with friends and have them guess your top songs/artists by answering questions. Creates a leaderboard of your friends who have taken the quiz, ranking them by the scores they obtained on your quiz. ## How we built it We built the project using react.js, HTML, and CSS. We used the Spotify API to get data on the user's listening history, top songs, and top artists as well as enable the user to log into ***verses*** with their Spotify. JSON was used for user data persistence and Figma was used as the primary UX/UI design tool. ## Challenges we ran into Implementing the Spotify API was a challenge as we had no previous experience with it. We had to seek out mentors for help in order to get it working. Designing user-friendly UI was also a challenge. ## Accomplishments that we're proud of We took a while to get the backend working so only had a limited amount of time to work on the frontend, but managed to get it very close to our original Figma prototype. ## What we learned We learned more about implementing APIs and making mobile-friendly applications. ## What's next for verses So far, we have implemented ***verses*** with Spotify API. In the future, we hope to link it to more musical platforms such as Apple Music. We also hope to create a leaderboard for players' friends to see which one of their friends can answer the most questions about their music taste correctly.
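This is not the team's React code, but it shows the kind of Spotify Web API call verses depends on: fetching the logged-in user's top tracks and turning one into a multiple-choice question. The OAuth access token is assumed to have been obtained already through the Spotify login flow.

```python
# Illustrative sketch: pull top tracks from the Spotify Web API and build one quiz question.
import random
import requests

def top_tracks(access_token: str, limit: int = 10) -> list[str]:
    resp = requests.get(
        "https://api.spotify.com/v1/me/top/tracks",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"limit": limit, "time_range": "short_term"},
    )
    resp.raise_for_status()
    return [item["name"] for item in resp.json()["items"]]

def make_question(tracks: list[str]) -> dict:
    answer = tracks[0]                      # the user's actual top track
    decoys = random.sample(tracks[1:], 3)   # plausible wrong options
    options = random.sample([answer] + decoys, 4)
    return {"prompt": "What was my most-played song this month?",
            "options": options, "answer": answer}
```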
## Inspiration

Music has become a crucial part of people's lives, and they want customized playlists to fit their mood and surroundings. This is especially true for drivers, who use music to entertain themselves on their journey and to stay alert. Based on personal experience and feedback from our peers, we realized that many drivers are dissatisfied with the repetitive selection of songs on the radio and on regular Spotify playlists. That's why we were inspired to create something that could tackle this problem in a creative manner.

## What It Does

Music Map curates customized playlists based on factors such as time of day, weather, driving speed, and locale, creating a set of songs that fit the drive perfectly. The songs are selected from a variety of pre-existing Spotify playlists that match the user's tastes and are weighted based on the driving conditions to create a unique experience each time. This allows Music Map to introduce new music to the user while staying true to their own tastes.

## How we built it

HTML/CSS, Node.js, Esri, Spotify, and Google Maps APIs.

## Challenges we ran into

The Spotify API was challenging to work with, especially authentication. Overlaying our own UI over the map was also a challenge.

## Accomplishments that we're proud of

Learning a lot and having something to show for it, and the clean and aesthetic UI.

## What we learned

For the majority of the team, this was our first hackathon, and we learned how to work together well and distribute the workload under time pressure, playing to each of our strengths. We also learned a lot about the various APIs and how to fit different pieces of code together.

## What's next for Music Map

We will be incorporating more factors into the curation of the playlists and gathering more data on the users' preferences.
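The write-up does not spell out how the condition-based weighting works, so the sketch below is purely hypothetical: candidate tracks from the user's existing playlists are scored against the current driving conditions, and the playlist is sampled in proportion to those scores. The feature names and weights are invented.

```python
# Hypothetical weighting sketch for Music Map; features, weights, and thresholds are invented.
import random

def score(track: dict, conditions: dict) -> float:
    s = 1.0
    if conditions["night"] and track["energy"] < 0.5:
        s += 0.5                  # favour calmer songs late at night
    if conditions["raining"] and track["mood"] == "mellow":
        s += 0.5
    if conditions["speed_kmh"] > 90 and track["tempo_bpm"] > 120:
        s += 1.0                  # favour upbeat songs at highway speed
    return s

def build_playlist(candidates: list[dict], conditions: dict, length: int = 20) -> list[dict]:
    weights = [score(t, conditions) for t in candidates]
    return random.choices(candidates, weights=weights, k=length)
```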
## Inspiration

We wanted to make a simple product that sharpens blurry images without a lot of code! This could be used as a preprocessing step for image recognition or a variety of other image processing tasks. It can also be used as a standalone product to enhance old images.

## What it does

Our product takes blurry images and makes them more readable. It also improves IBM Watson's visual recognition functionality. See our powerpoint for more information!

## How we built it

We used python3 and the IBM Watson library.

## Challenges we ran into

Processing images takes a lot of time!

## Accomplishments that we're proud of

Our algorithm improves Watson's capabilities by 10% or more!

## What we learned

Sometimes, simple is better :)

## What's next for Pixelator

We could incorporate our product into an optical character recognition system, or try to incorporate our system as a preprocessing step in a pipeline involving e.g. convolutional neural nets to get even greater accuracy with the cost of higher latency.
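The write-up doesn't show Pixelator's actual algorithm, so this is only a minimal example of one common way to sharpen a blurry image in Python: an unsharp mask via Pillow. The radius, percent, and threshold values are arbitrary starting points.

```python
# Minimal sharpening example (unsharp mask with Pillow); parameter values are arbitrary.
from PIL import Image, ImageFilter

def sharpen(path_in: str, path_out: str) -> None:
    img = Image.open(path_in)
    # Unsharp mask: boost edges by subtracting a blurred copy of the image.
    sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
    sharpened.save(path_out)

sharpen("blurry_scan.png", "sharpened_scan.png")
```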
partial
## Inspiration

The inspiration for this project came from my passion for decentralized technology. One particular niche of decentralization I am particularly fond of is NFTs and how they can become a great income stream for artists. With the theme of the hackathon being exploration and showing a picture of a rocket ship, it is no surprise that the idea of space came to mind. Looking into space photography, I found the [r/astrophotography](https://www.reddit.com/r/astrophotography/) subreddit, which has a community of 2.6 million members. There, beautiful shots of space can be found, but they also require expensive equipment and precise editing. My idea for Astronofty is to turn these photographs into NFTs that users can sell as unique tokens on the platform, while using Estuary as a decentralized storage platform for the photos.

## What It Does

You can mint/create NFTs of your astrophotography to sell to other users.

## How I Built It

* Frontend: React
* Transaction Pipeline: Solidity/MetaMask
* Photo Storage: Estuary

## Challenges I Ran Into

I wanted to be able to upload as many images as you want to a single NFT, so figuring that out logistically, structurally, and synchronously in React was a challenge.

## Accomplishments That We're Proud Of

Deploying a fully functional all-in-one NFT marketplace.

## What I Learned

I learned about using Solidity mappings and structs to store data on the blockchain, and about all the frontend/contract integrations needed to make an NFT marketplace work.

## What's Next for Astronofty

A mechanism to keep track of highly sought-after photographers.
## Inspiration

Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact on people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands of dollars and are inaccessible. We wanted something modular that we could 3D print and also make open source for others to use.

## What it does and how we built it

The main component is a bionic hand assistant called the PulseGrip. We used an ECG sensor to detect electrical signals. When it detects that your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it loosens your hand back to a natural resting position. Along with this, it constantly sends data through WebSockets to our Amazon EC2 server and game. This is stored in a MongoDB database, and using API requests we can communicate between our games, the server, and the PulseGrip. We can track live motor speed, angles, and whether the hand is open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the back end). The website also has games that communicate with the device to test the project and provide entertainment. We have one game to test continuous holding and another for rapid inputs; this could be used in recovery as well.

## Challenges we ran into

This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback, since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead; it is designed for heartbeats, but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our WebSocket connection didn't work. We troubleshot for an hour, looking at the Wi-Fi, the device itself, and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentors' help and reset the device completely; after using different libraries, we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem.

## Accomplishments that we're proud of

Firstly, just getting the device working was a huge achievement, as we had so many setbacks and moments when we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D-printed materials, sensors, and complicated algorithms. We also have a game side, with two (questionably original) games. They are not just random games, but ones that test the user in two different ways that are critical to using the device: short bursts and long-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats from the device.
## What's next for PulseGrip

* Working to improve sensors, adding more games, and seeing how we can help people.

We think this project has a ton of potential, and we can't wait to see what we can do with the ideas learned here.

## Check it out

<https://hacks.pulsegrip.design> <https://github.com/PulseGrip>
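To make the PulseGrip data path more concrete, here is a hedged sketch of the server side only: a WebSocket endpoint that receives JSON readings from the device and inserts them into MongoDB for the live dashboard. The payload fields, host, and port are assumptions, and the `websockets` handler signature varies slightly between library versions.

```python
# Sketch of a PulseGrip-style ingest server: WebSocket in, MongoDB out (assumed payload shape).
import asyncio
import json

import websockets
from pymongo import MongoClient

readings = MongoClient("mongodb://localhost:27017")["pulsegrip"]["readings"]

async def handle_device(websocket):
    async for message in websocket:
        reading = json.loads(message)   # e.g. {"angle": 42, "speed": 80, "closed": true}
        readings.insert_one(reading)    # the dashboard reads these in real time

async def main():
    async with websockets.serve(handle_device, "0.0.0.0", 8765):
        await asyncio.Future()          # run forever

asyncio.run(main())
```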
[Link to GitHub](https://github.com/Interlink-McHacks)

## Inspiration

We realized that a major challenge for us when developing remotely (thank you, COVID!) was that sharing things on localhost was very difficult, and that it wasn't always possible to port forward or host on a remote server. For example, if you are behind an enterprise (or school) network, it is most likely not possible to port forward and host something directly behind your public IPv4 address. This inspired us to create a tool that lets you pretend that your computer and your friends' computers are connected on one local network! (P.S. It works great for locally-hosted Minecraft servers as well.)

## What it does

Interlink allows a local TCP socket/port to be accessed remotely, behind an authentication layer, from any computer that has our client software installed. For example, if you have a MySQL database running **locally** on port 3306, our service lets a computer across the internet access that port. By leveraging our own WebSocket forwarding proxy and Wireguard, we are able to avoid the common pitfalls of port forwarding, which include exposing your internal network to anyone and revealing your IP address. Furthermore, our client software works without elevated privileges and will tunnel through even the most restrictive firewalls (it does not require UDP or **any ports other than HTTP/HTTPS to be open**).

## How we built it

* Used Wireguard to set up encrypted tunnels between the source computer and the control-plane server and to enable easy packet routing
* Translated TCP sockets into WebSockets so that the local TCP socket can be accessed remotely from any computer
* Built a local daemon that automatically creates WebSockets on demand and manages the Wireguard configuration
* Configuration of ports and WebSockets is done through a web dashboard built in Next.js
* Management API created using Node.js and Express

## Challenges we ran into

* Working with Wireguard
* Researching networking
* Learning Next.js and React for some of our team members

## Accomplishments that we're proud of

* Combining all the moving pieces together (TCP -> WebSockets -> Wireguard -> WebSockets -> TCP)

## What we learned

* A lot about networking
* Developing software to work on multiple operating systems

## What's next for Interlink

* Being able to forward UDP ports as well
* Bundling a local version of Wireguard into our application so that it's an all-in-one package, simplifying the process for non-technical users
* Finishing our custom DNS service so that you don't need to remember the internal addresses and can just use the service name
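Interlink's client daemon and Wireguard layer are more involved than this, but the core TCP-to-WebSocket translation can be sketched in a few lines of Python: accept a WebSocket connection and pump its binary frames to and from a local TCP port. The host, port, and single-connection handling are simplifying assumptions.

```python
# Simplified sketch of the TCP<->WebSocket bridge idea (binary frames assumed).
import asyncio
import websockets

LOCAL_HOST, LOCAL_PORT = "127.0.0.1", 3306   # e.g. a local MySQL server being exposed

async def bridge(websocket):
    reader, writer = await asyncio.open_connection(LOCAL_HOST, LOCAL_PORT)

    async def ws_to_tcp():
        async for chunk in websocket:        # bytes arriving from the remote peer
            writer.write(chunk)
            await writer.drain()

    async def tcp_to_ws():
        while data := await reader.read(4096):
            await websocket.send(data)       # bytes from the local service going back out

    await asyncio.gather(ws_to_tcp(), tcp_to_ws())

async def main():
    async with websockets.serve(bridge, "0.0.0.0", 8080):
        await asyncio.Future()

asyncio.run(main())
```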
winning
# Gait @ TreeHacks 2016 [![Join the chat at https://gitter.im/thepropterhoc/TreeHacks_2016](https://badges.gitter.im/thepropterhoc/TreeHacks_2016.svg)](https://gitter.im/thepropterhoc/TreeHacks_2016?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) Diagnosing walking disorders with accelerometers and machine learning **Based on the original work of Dr. Matt Smuck** ![Walking correctly](https://d30y9cdsu7xlg0.cloudfront.net/png/79275-200.png) Author : *Shelby Vanhooser* Mentor : *Dr. Matt Smuck* --- ### Goals ***Can we diagnose patient walking disorders?*** * Log data of walking behavior for a known distance through a smartphone * Using nothing but an accelerometer on the smartphone, characterize walking behaviors as *good* or *bad* (classification) * Collect enough meaningful data to distinguish between these two classes, and draw inferences about them --- ### Technologies * Wireless headphone triggering of sampling * Signal processing of collected data * Internal database for storing collection * Support Vector Machine (machine learning classification) -> Over the course of the weekend, I was able to test the logging abilities of the app by taking my own phone outside, placing it in my pocket after selecting the desired sampling frequency and distance I would be walking (verified by Google Maps), and triggering its logging using my wireless headphones. This way, I made sure I was not influencing any data collected by having abnormal movements be recorded as I placed it in my pocket. ****Main screen of app I designed**** ![Landing screen](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Screenshots/Screenshot_2.png) ****The logging in action**** ![The logging app in action](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Screenshots/Screenshot_1.png) -> This way, we can go into the field, collect data from walking, and log if this behavior is 'good' or 'bad' so we can tell the difference on new data! --- ### Data First, let us observe the time-domain samples recorded from the accelerometer: ![Raw signal recorded](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Time_Domain.png) It is immediately possible to see where my steps were! Very nice. Let's look at what the spectrums are like after we take the FFT... *Frequency Spectrums of good walking behavior* ![Good walking behavior frequency spectrum](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/images/good_animated.gif) *Frequency spectrums of bad walking behavior* ![Bad walking behavior frequency spectrum](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/images/bad_animated.gif) 19 'correct' walking samples and 5 'incorrect' samples were collected around the grounds of Stanford across reasonably flat ground with no obstacle interference. ***Let's now take these spectrums and use them as features for a machine learning classification problem*** -> Additionally, I ran numerous simulations to see what kernel in SVM would give the best output prediction accuracy: **How many features do we need to get good prediction ability?** *Linear kernel* ![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Linear_SVM_2000_Sample_FFT.png) **Look at that characterization for so few features!** Moving right along... 
*Quadratic kernel* ![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Quadratic_SVM_2000_Sample_FFT.png) Not as good as linear. What about cubic? *Cubic kernel* ![ROC-like characterization](https://raw.githubusercontent.com/thepropterhoc/TreeHacks_2016/master/Collected_Data/Cubic_SVM_2000_Sample_FFT.png) Conclusion: We can get 100% cross-validated accuracy with... ***A linear kernel*** Good to know. We can therefore predict on incoming patient data if their gait is problematic! --- ### Results * From analysis of the data, its structure seems to be well-defined at several key points in the spectrum. That is, after feature selection was run on the collected samples, 11 frequencies were identified as dominating its behavior: **[0, 18, 53, 67, 1000, 1018, 1053, 2037, 2051, 2052, 2069]** ***Note*** : it is curious that index 0 has been selected here, implying that the overall angle of an accelerometer on the body while walking has influence over the observed 'correctness' of gait * From these initial results it is clear we *can* characterize 'correctness' of walking behavior using a smartphone application! * In the future, it would seem very reasonable to have a patient download an application such as this, and, using a set of known walking types from measurements taken in the field, be able to diagnose and report to an unknown patient if they have a disorder in gait. --- ### Acknowledgments * **Special thanks to Dr. Matt Smuck for his original work and aid in pushing this project in the correct direction** * **Special thanks to [Realm](https://realm.io) for their amazing database software** * **Special thanks to [JP Simard](https://cocoapods.org/?q=volume%20button) for his amazing code to detect volume changes for triggering this application** * **Special thanks to everyone who developed [Libsvm](https://www.csie.ntu.edu.tw/%7Ecjlin/libsvm/) and for writing it in C so I could compile it in iOS**
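The original classifier was built on Libsvm inside the iOS app; the snippet below is only a compact scikit-learn rendition of the same pipeline, with synthetic stand-in data: take the FFT magnitude of each accelerometer recording, use the spectra as features, and cross-validate a linear-kernel SVM on the good/bad labels.

```python
# Compact re-statement of the gait pipeline in scikit-learn (synthetic stand-in data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def spectrum_features(samples: np.ndarray, n_bins: int = 2000) -> np.ndarray:
    """samples: (n_recordings, n_timesteps) accelerometer magnitudes."""
    spectra = np.abs(np.fft.rfft(samples, axis=1))  # frequency-domain view of each walk
    return spectra[:, :n_bins]                      # keep the first n_bins frequencies

rng = np.random.default_rng(0)
X_time = rng.normal(size=(24, 4096))    # stand-in for the 24 recorded walks
y = np.array([1] * 19 + [0] * 5)        # 19 'correct' and 5 'incorrect' samples

X = spectrum_features(X_time)
clf = SVC(kernel="linear")              # the linear kernel performed best in this project
print(cross_val_score(clf, X, y, cv=5).mean())
```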
## What inspired you?

The inspiration behind this project came from my grandmother, who has struggled with poor vision for years. Growing up, I witnessed firsthand how her limited sight made daily tasks and walking in unfamiliar environments increasingly difficult for her. I remember one specific instance when she tripped on an uneven curb outside the grocery store. Though she wasn't hurt, the fall shook her confidence, and she became hesitant to go on walks or run errands by herself. This incident helped spark the idea of creating something that could help people like her feel safer and more secure while navigating their surroundings. I wanted to develop a solution that would give visually impaired individuals not just mobility but confidence. The goal became clear: to create a product that could intuitively guide users through their environment, detecting obstacles like curbs, steps, and uneven terrain, and providing feedback they could easily understand. By incorporating haptic feedback, pressure-based sensors, and infrared technology, this system is designed to give users more control and awareness over their movements, helping them move through the world with greater independence and assurance. My hope is that this technology can empower people like my grandmother to reclaim their confidence and enjoy everyday activities without fear.

## What it does

This project is a smart shoe system designed to help visually impaired individuals safely navigate their surroundings by detecting obstacles and terrain changes. It uses infrared sensors located on both the front and bottom of the shoe to detect the distance to obstacles like curbs and stairs. When the user approaches an obstacle, the system provides real-time feedback through five servos. Three servos are responsible for haptic feedback related to the distance from the ground and the distance in front of the user, while the remaining servos help guide the user through navigation. The infrared sensors detect how far the foot is off the ground, and the servos respond accordingly. The vibrational motors, labeled 1a and 2a, are used when the distance exceeds 6 inches, delivering pulsating signals to inform the user of upcoming terrain changes. This real-time feedback ensures users can sense potential dangers and adjust their steps to prevent falls or missteps. Additionally, the user connects to the shoe via Bluetooth.

The shoe system operates using three key zones of detection: the walking range (0-6 inches), the far walking range (6-12 inches), and the danger zone (12+ inches). In the walking range, the haptic feedback is minimal but precise, giving users gentle vibrations when the shoe detects small changes, such as a flat surface or minor elevation shifts. As the shoe moves into the far walking range (6-12 inches), where curbs or stairs may appear, the intensity of the feedback increases, and the vibrational motors start to pulse more frequently. This alert serves as a warning that the user is approaching a significant elevation change. When the distance exceeds 12 inches (the danger zone), the vibrational motors deliver intense, rapid feedback to indicate a drop-off or large obstacle, ensuring the user knows to take caution and adjust their step. These zones are carefully mapped to provide a seamless understanding of the terrain without overwhelming the user.
The system also integrates seamlessly with a mobile app, offering GPS-based navigation via four directional haptic feedback sensors that guide the user forward, backward, left, or right. Users can set their route through voice commands. Unfortunately, we had trouble integrating Deepgram AI, which would assist by understanding speech patterns, accents, and multiple languages, making the system accessible to users with language impairments. We also had trouble fully integrating Skylo: the idea is that in areas where Wi-Fi is unavailable or the connection is unstable, the system automatically switches to Skylo, a satellite backup technology, via their Type 1SC circuit board and antenna to ensure constant connectivity. Skylo sends out GPS updates every 1-2 minutes, preventing the shoe from losing its route data. If the user strays off course, Skylo triggers immediate rerouting instructions through the Google Maps API in the app (which we did set up), ensuring that they are safely guided back on track. This combination of sensor-driven feedback, haptic alerts, and robust satellite connectivity is meant to guarantee that visually impaired users can move confidently through diverse environments.

## How we built it

We built this project using a combination of hardware components and software integrations. To start, we used infrared sensors placed at the front and bottom of the shoe to detect distance and obstacles. We incorporated five servos into the design: three for haptic feedback based on distance sensing and two for GPS-related feedback. Additionally, we used vibrational motors (1a and 2a) to provide intense feedback when larger drops or obstacles were detected. The app we developed integrates the Google Maps API for route setting and navigation. To ensure connectivity in areas with limited Wi-Fi, we integrated Skylo's Type 1SC satellite hardware, allowing for constant GPS data transmission even in remote areas.

For the physical prototype, we constructed a 3D model of a shoe out of cardboard. Attached to this model are two 5,000 mAh batteries, providing a total of 10,000 mAh to power the system. We used an ESP32 microcontroller to manage the various inputs and outputs, along with a power distribution board to efficiently allocate power to the servos, sensors, and vibrational motors. All components were securely attached to the cardboard shoe prototype to create a functional model for testing.

## Challenges we ran into

One of the main challenges we encountered was working with the Skylo Type 1SC hardware. While the technology itself was impressive, the PDF documentation and schematics were quite advanced, requiring us to dive deeper into the technical details. We successfully established communication between the Arduino and the Type 1SC circuit but faced difficulties in receiving a response back from the modem, which required further troubleshooting. Additionally, distinguishing between the different components on the circuit, such as data pins and shorting components, proved challenging, as the labeling was intricate and required careful attention. These hurdles allowed us to refine our skills in circuit analysis and deepen our knowledge of satellite communication systems.

On the software side, we had to address several technical challenges. Matching the correct Java version for our app development was more complex than expected, as version discrepancies affected performance.
We also encountered difficulties creating a Bluetooth hotspot that could seamlessly integrate with the Android UI for smooth user interaction. On the hardware end, ensuring reliable connections was another challenge; we found that some of our solder joints for the pins weren't as stable as needed, leading to occasional issues with connectivity. Through persistent testing and adjusting our approaches, we were able to resolve most of these challenges while gaining valuable experience in both hardware and software integration.

## Accomplishments that we're proud of

One of the accomplishments we're most proud of is successfully setting up Skylo services and establishing satellite connectivity, allowing the system to access LTE data in areas with low or no Wi-Fi. This was a key component of the project, and getting the hardware to communicate with satellites smoothly was a significant milestone. Despite the initial challenges with understanding the complex schematics, we were able to wire the Arduino to the Type 1SC board correctly, ensuring that the system could relay GPS data and maintain consistent communication. The experience gave us a deeper appreciation for satellite technology and its role in enhancing connectivity for projects like ours.

Additionally, we're proud of how well the array of sensors was set up and how all the hardware components functioned together. Each sensor, whether for terrain detection or obstacle awareness, worked seamlessly with the servos and haptic feedback system, resulting in better-than-expected performance. The responsiveness of the hardware components was more precise and reliable than we had originally anticipated, which demonstrated the strength of our design and implementation. This level of integration and functionality validates our approach and gives us confidence in the potential impact this project can have for the visually impaired community.

## What we learned

Throughout this project, we gained a wide range of new skills that helped bring the system to life. One of our team members learned to solder, which was essential for securing the hardware components and making reliable connections. We also expanded our knowledge of React programming, which allowed us to create the necessary interactions and controls for the app. Additionally, learning to use Flutter enabled us to develop a smooth and responsive mobile interface that integrates with the hardware components. On the hardware side, we became much more familiar with the ESP32 microcontroller, particularly its Bluetooth connectivity functions, which were crucial for communication between the shoe and the mobile app. We also had the opportunity to dive deep into working with the Type 1SC board, becoming comfortable with its functionality and satellite communication features. These new skills not only helped us solve the challenges within the project but also gave us valuable experience for future work in both hardware and software integration.

## What's next for PulseWalk

Next for PulseWalk, we plan to enhance the system's capabilities by refining the software to provide even more precise feedback and improve the user experience. We aim to integrate additional features, such as obstacle detection for more complex terrains and improved GPS accuracy with real-time rerouting. Expanding the app's functionality to include more languages and customization options using Deepgram AI will ensure greater accessibility for a diverse range of users.
Additionally, we’re looking into optimizing battery efficiency and exploring more durable materials for the shoe design, moving beyond the cardboard prototype. Ultimately, we envision PulseWalk evolving into a fully commercialized product that offers a seamless, dependable mobility aid for the visually impaired which partners with shoe brands to bring a minimalist approach to the brand and make it look less like a medical device and more like an everyday product.
## Inspiration With all of us playing different sports, we know that the most important part of movement starts at the base: the feet. For athletes and those with movement impairments alike, having more data on how the feet move (namely how fast they move) can give valuable insights into enhancing movement. We developed Smart-kicks to combine the power of embedded smart shoes and of computer-vision tracking systems. Athletes can use it to improve their training, those who get injured can use it for rehab, and people with disabilities can use it for quality-of-life improvements. ## What it does Smart-kicks is a smart shoe system to gather data about foot movement. Our prototype currently tracks foot speed, and clips onto a user's laces to instantly make a shoe "smart". The shoe connects to the internet via the WiFi ESP32 chip, and posts the foot speed data to our web app, which inserts it into CockroachDB Serverless in real time. Separately, a MediaPipe pose-tracking application observes user movement and classifies it into different actions. For example, for a soccer player who is practicing dribbling drills and also shooting into the net, our app can classify what they are doing at each time ("dribble" or "kick") and send that data to CockroachDB. Finally, an Express app serves a React user dashboard with graphs to summarize all of the data. Since we know both the foot speed and the classified pose at each time, we can associate the two. Our soccer player can see their foot speed for dribbling separately from their foot speed for kicking, allowing them to focus on improving one at a time. ## How we built it The smart shoe used an Arduino Uno to calculate foot velocity in real time, then transmitted the data to an ESP32 chip via serial communication (UART protocol). The ESP32 then POSTs the data to an endpoint on our deployed Express app, which stores it in CockroachDB. To actually calculate velocity, we have a 3-axis gyroscope and accelerometer (MPU6050) attached to the Arduino. We use the gyroscope to remove the effect of gravity from the acceleration by applying a rotation matrix, then we integrate the acceleration to get the velocity (a small sketch of this step follows the writeup). All components reside on a laser-cut board that can be clipped onto a user's laces. To classify user poses, we used MediaPipe's pose library to get coordinates of body points, then analyzed joint positions relative to each other. The web app itself is built with Express and React, and was hosted on GCP App Engine in order to have the API endpoint available to the shoe and the MediaPipe application. ## Challenges and Accomplishments Our team came across some extremely frustrating bugs this weekend. One hardware issue was that we initially tried connecting the accelerometer directly to the ESP32 chip. However, the chip didn't have enough power to supply it and experienced a brownout error. Instead, we connected the sensor to the Arduino and transmitted the data to the ESP32 via UART. Another was that the accelerometer also included a constant acceleration due to gravity, and to make matters worse, that was split between three axes when rotated. In order to remove it, we needed to apply a rotation matrix. We experienced equal frustration on the software side of things. It turns out there's no MediaPipe build for M1 Macbooks, which required some of us to download x86 Python through Rosetta to run it. We also occasionally get a 404 on the frontend when accessing the compiled React app (which, scarily, we still don't know the reason for).
However, we're incredibly proud of what we've made and learned! All components of our project ended up working successfully, which we're extremely happy with, as this was our first hardware hack and our first time using MediaPipe. ## What we learned Our biggest takeaway is definitely that in-person hackathons are strictly superior to online ones. We also learned so much about hardware in general, serial communication, IoT, MediaPipe, and more. By applying all of these in a project together, we learned about the power that they yield and how they can be combined in many creative use cases. ## What's next for Smart-Kicks For the future, we'd like to first add more sensors. We're thinking of a GPS module, a heart rate sensor, and a force sensor that allows us to measure how much force our user puts into the ground or kicks. Of course, if continuing with the project, we'd also want it to be integrated into the shoe rather than attached.
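The gravity-compensation step mentioned above can be summarized in a few lines. This is a rough numpy sketch under simplifying assumptions (perfect per-sample orientation from the gyro, no drift correction), not the code that runs on the Arduino:

```python
# Rotate each accelerometer sample into the world frame, subtract gravity,
# then integrate to estimate velocity. The per-sample rotation matrices R are
# assumed to come from the gyroscope (e.g., a complementary filter), not shown here.
import numpy as np

G_WORLD = np.array([0.0, 0.0, 9.81])  # gravity in the world frame (m/s^2)

def estimate_velocity(accel_samples, rotations, dt):
    """accel_samples: (N, 3) raw accelerometer readings in the sensor frame.
    rotations: (N, 3, 3) sensor-to-world rotation matrices from the gyro.
    dt: sample period in seconds. Returns (N, 3) velocity estimates."""
    velocity = np.zeros(3)
    out = []
    for a_sensor, R in zip(accel_samples, rotations):
        a_world = R @ a_sensor - G_WORLD    # remove gravity in the world frame
        velocity = velocity + a_world * dt  # integrate acceleration
        out.append(velocity.copy())
    return np.array(out)

# Example with made-up numbers: a stationary, level sensor should give ~0 velocity.
samples = np.tile([0.0, 0.0, 9.81], (100, 1))
rots = np.tile(np.eye(3), (100, 1, 1))
print(estimate_velocity(samples, rots, dt=0.01)[-1])  # ~[0, 0, 0]
```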
partial
## Inspiration For a manager of a small business -- be it a store, restaurant, gym, or even a movie theater -- improving the customer experience and understanding what's going on are tremendously important. Having access to analytics of when people are entering the building, what areas they're spending time at, and what crowds and lines are forming can provide managers with incredibly useful insights -- from identifying parts of the building layout that are poorly designed and causing congestion, to figuring out that certain table setups or shop items are particularly engaging, to having a better idea of what's going on in their business and being able to make data-driven decisions about how to improve. ## What it does Given a live video feed from an overhead camera, Crowd Insights' AI algorithms detect human heads within the video and use this positional data to identify lines and clusters of people and create heatmaps. The small business owner can then examine this data to learn about human traffic flow within their store over a specified period of time. There are a variety of use cases for this data: congestion tracking, popular hotspots in the store, long lines, etc. By analyzing these trends over time, small business owners can make informed decisions on how to improve their business to optimize the physical interaction of customers with the store. For example, if they notice that lots of people tend to group up around a certain product, they know to place that product near the back of the store to prevent crowding around the store entrance. Other use cases for this technology could include event management. Event organizers such as the TreeHacks team can use this technology to monitor the congestion within each room and help disperse people from highly crowded rooms to open spaces for work. They can monitor lines, e.g., for food or networking, and figure out novel ways to deal with long lines and heavy foot traffic. ## How we built it We built the theory and data science toolkits, machine learning model, frontend, and backend separately. For the machine learning, we used the Pytorch FCHD fully convolutional head detector, running on a Google Cloud VM. Afterwards, we passed the list of heads to the graph theory library that we built, which constructed the Minimum Spanning Tree through the graph, removed edges that were too long, and performed elliptical fits to determine whether a group of points was a line or a cluster. We also aggregated human location data over time to create a heatmap of the environment to see which places are interacted with the most. Firebase is used to communicate between the head detector and the computer (like a Raspberry Pi) that sends the webcam feed data. Finally, we have a web server using ReactJS that displays the results. ## Challenges we ran into One main issue was finding a vision model that could provide dense data on human positions in a camera frame. Most models tend to do decently at closer distances, but as we try to monitor areas more than 15 feet away from the camera, precision becomes an issue. Because we needed this sort of density in our data, we had to work through testing many model architectures and fusion techniques to yield the best results. We also had a lot of trouble rendering the line/cluster data from Firebase in a real-time graph on the website. This was tough because no member had extensive experience with realtime updating and with push/pull requests between Firebase and the web app.
To solve this, we worked together to break the problem down into two parts: collecting and parsing data from Firebase, and displaying the data in a dynamic graph. Lastly, this was our first time incorporating a big chunk of frontend programming in our application. Our experience in JavaScript, HTML, and Firebase was limited, so it took us a long time to implement the syntax of the languages from scratch. However, this also made the project really impactful, as it provided us with an exceptional learning opportunity. ## Accomplishments that we're proud of We implemented simple but effective algorithms for recognizing clusters of crowds and lines. We used minimum spanning trees and ellipse fitting to identify clusters, then took clusters with particularly elongated ellipses and fit them with best-fit lines (a condensed sketch of this step follows the writeup). We developed a decision tree that applied knowledge from all branches of computer science - from theory to machine learning and software engineering - together in a product that became more than the sum of its parts. The final web product took tens of hours to complete, and we're confident that we were able to get it right. ## What we learned * A lot of new frontend skills and algorithm design: ReactJS, ChartJS, CanvasJS, Plotly, Firebase * ML head and body detection algorithms * Kruskal's Minimum Spanning Tree, automatic K-Means clustering, depth-first search * Firebase realtime graphs, and how to upload data from the Jetson to Firebase to the web app. Even though the project was divided into frontend and backend portions, all members were able to understand the implementation on both sides. Throughout the implementation, we worked as a unified team, especially when we ran into roadblocks. The core takeaway from this project is our improved understanding of realtime databases, machine learning models, and frontend program structure. ## What's next for Crowd Insights AI One big next step would be applying mapping techniques to create a 3D map of the shop, then localizing detected crowds in that 3D map. This would allow the business owner to analyze exactly which shelves or tables are becoming crowded. Furthermore, performing spatial transforms on the angled camera footage would allow us to recover 3D positions from the 2D image. We'd also want to apply optical flow and motion tracking to see how people are moving through the space and what slows them down.
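To make the line/cluster step concrete, here is a condensed Python sketch of the same idea. The edge-length threshold and elongation ratio are illustrative assumptions, not the exact values from our graph theory library:

```python
# Build an MST over detected head positions, cut edges longer than a threshold,
# then label each remaining group by how elongated its best-fit ellipse is.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def group_heads(points, max_edge=80.0, line_ratio=4.0):
    """points: (N, 2) pixel coordinates of detected heads."""
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    mst[mst > max_edge] = 0                       # drop edges that are too long
    graph = ((mst + mst.T) > 0).astype(int)
    n_groups, labels = connected_components(graph, directed=False)

    results = []
    for g in range(n_groups):
        members = points[labels == g]
        if len(members) < 3:
            continue
        # Elongation of the best-fit ellipse via the covariance eigenvalues.
        evals = np.linalg.eigvalsh(np.cov(members.T))
        ratio = np.sqrt(max(evals) / max(min(evals), 1e-9))
        results.append(("line" if ratio > line_ratio else "cluster", members))
    return results
```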
## Inspiration The inspiration behind LeafHack stems from a shared passion for sustainability and a desire to empower individuals to take control of their food sources. Witnessing rising grocery costs and the environmental impact of conventional agriculture, we were motivated to create a solution that not only addresses these issues but also lowers the barriers to home gardening, making it accessible to everyone. ## What it does Our team introduces "LeafHack," an application that leverages computer vision to detect the health of vegetables and plants. The application provides real-time feedback on plant health, allowing homeowners to intervene promptly and nurture a thriving garden. Additionally, the uploaded images can be stored within a database custom to the user. Beyond disease detection, LeafHack is designed to be a user-friendly companion, offering personalized tips and fostering a community of like-minded individuals passionate about sustainable living. ## How we built it LeafHack was built using a combination of cutting-edge technologies. The core of our solution lies in a custom computer vision model, ResNet9, which analyzes images of plants to identify diseases accurately. We utilized machine learning to train the model on an extensive dataset of plant diseases, ensuring robust and reliable detection. The database and backend were built using Django and SQLite. The user interface was developed with a focus on simplicity and accessibility, utilizing Next.js, making it easy to use for users with varying levels of gardening expertise. ## Challenges we ran into We encountered several challenges that tested our skills and determination. Fine-tuning the machine learning model to achieve high accuracy in disease detection posed a significant hurdle, as there was a huge time constraint. Additionally, integrating the backend and frontend required careful consideration. The image upload was a major hurdle, as there were multiple issues with downloading and opening the uploaded image for prediction. Overcoming these challenges involved collaboration, creative problem-solving, and continuous iteration to refine our solution. ## Accomplishments that we're proud of We are proud to have created a solution that not only addresses the immediate concerns of rising grocery costs and environmental impact but also significantly reduces the barriers to home gardening. Achieving a high level of accuracy in disease detection, creating an intuitive user interface, and fostering a sense of community around sustainable living are accomplishments that resonate deeply with our mission. ## What we learned Throughout the development of LeafHack, we learned the importance of interdisciplinary collaboration. Bringing together our skills, we expanded our knowledge in computer vision, machine learning, and user experience design to create a holistic solution. We also gained insights into the challenges individuals face when starting their gardens, shaping our approach towards inclusivity and education in the gardening process. ## What's next for LeafHack We plan to expand LeafHack's capabilities by incorporating more plant species and diseases into our database. Collaborating with agricultural experts and organizations, we aim to enhance the application's recommendations for personalized gardening care.
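As a concrete illustration of the classification step, here is a hedged PyTorch inference sketch. The ResNet9 class stands in for our custom architecture, and the class names, input size, and file paths are placeholders rather than our actual training setup:

```python
# Illustrative inference sketch for a plant-disease classifier of this kind.
import torch
from PIL import Image
from torchvision import transforms

CLASSES = ["healthy", "early_blight", "late_blight"]  # placeholder labels

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def predict(model, image_path):
    model.eval()
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    idx = int(probs.argmax())
    return CLASSES[idx], float(probs[idx])

# model = ResNet9(num_classes=len(CLASSES))              # custom model definition
# model.load_state_dict(torch.load("leafhack.pth", map_location="cpu"))
# print(predict(model, "tomato_leaf.jpg"))
```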
## Inspiration A majority of critical data is streaming data. One such example is the performance of factory machines over time. We wanted to devise a method in order to analyze these machines for potential performance errors and to visualize their performance over time. This would tremendously help cut business costs, as specialized technicians would be able to prevent these performance errors before they occur, in order to save the company resources. ## What it does Our app is a platform for predictive analysis on real-time machine performance data from Black & Decker. Hence, its primary purpose is to anticipate whether a machine would fail based on the given data. A secondary objective was to build an IoT dashboard to visualize results of exploratory data analysis (e.g. summary statistics). ## How we built it In order to perform predictive analysis, we implemented an anomaly detection algorithm that takes the temporal nature of the data into consideration. We implemented the system on top of the Flask framework, with a React front-end. In addition, we used Firebase as a central data repository and hosted the app on Google's App Engine. We also used the Twilio API in order to implement text-message alerts. ## Challenges we ran into The dataset we were given to work with was very small and not representative of the entire problem domain. Hence, we had to design a custom algorithm in order to generate sufficient data. Also, none of us had experience working with time series models or anomaly detection, so we had to learn a lot along the way. ## Accomplishments that we're proud of We're most proud of the fact that we were able to collaborate independently on different features of the system (e.g. algorithms, back-end, front-end) and still managed to fuse everything together into the final product. Also, we're proud that we were able to tackle such a challenging problem without any obvious solutions within such a short timespan. ## What we learned We learned a lot about the IoT framework and predictive analysis on streaming data (e.g. online learning, time series, etc.). In addition, we learned about the importance of clean data for both visualization and predictive analysis. Last but not least, we learned that teamwork and collaboration are major keys to tackling challenging and unfamiliar problems. ## What's next for IoT Streaming and Analytics Dashboard To infinity and beyond!
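To illustrate the flavor of anomaly detection on streaming machine data, here is a minimal rolling z-score detector in Python. Our actual algorithm modeled the temporal structure more carefully; the window size and threshold below are illustrative assumptions:

```python
# Flag readings that sit far outside the machine's recent behavior.
import numpy as np
import pandas as pd

def flag_anomalies(readings, window=50, threshold=3.0):
    """readings: pd.Series of a machine metric (e.g., temperature) over time."""
    rolling = readings.rolling(window)
    z = (readings - rolling.mean()) / rolling.std()
    return readings[z.abs() > threshold]

rng = np.random.default_rng(0)
series = pd.Series(rng.normal(70, 1, 500))   # simulated sensor stream
series.iloc[400] = 95                        # injected fault
print(flag_anomalies(series))                # should report index 400
```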
winning
## Inspiration Insightful Meetings is inspired by the numerous virtual meetings brought on by current social distancing measures and by participants' lack of attention during meetings (missing information or forgetting to take notes). With the constant and increasing usage of video communication services like Zoom or Google Meet, our team was inspired to further leverage their functionality and experience by providing insightful meeting analytics. ## What it does Insightful Meetings is a web application that generates analytics from the audio content of video meetings. These analytics include key phrases, named-entity recognition and entity linking, and sentiment analysis. By providing participants with these insightful analytics via a clear and easy-to-use application, they will be able to recall any missed or forgotten information or perform an analysis of key points. This contributes to a deeper understanding of meeting topics and information. ## How we built it Insightful Meetings is built with a backend consisting of Python, the Microsoft Azure Text Analytics API, and the Google Cloud Speech-to-Text API. The frontend (web app) is built using React.js, Flask for web services, HTML, and CSS. ## Challenges we ran into The challenges we encountered included making use of automatic text summarizers and not having enough experience with React.js. ## Accomplishments that we're proud of Firstly, we are proud of implementing an application that is functional and serves a useful purpose. We met our goal of creating something useful, especially something that helps alleviate pressure and assists with work in quarantine/remote settings. Secondly, we are proud of the integration of our backend (APIs) with our frontend (React.js). ## What we learned Throughout the creation of this project, we learned different technologies like Microsoft Azure and React.js, and gained experience working with different APIs. ## What's next for Meeting Insights Insightful Meetings will further enhance its analytics and provide suggestions based on them. In the future, a desktop program will automatically process every new meeting on any video/audio communication service and display insights and suggestions on a dashboard.
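For a sense of what the analytics step can look like, here is a hedged Python sketch using the Azure Text Analytics SDK. The endpoint, key, and sample transcript are placeholders, and our backend's exact integration may differ in detail:

```python
# Pass meeting transcript text to Azure Text Analytics for key phrases,
# sentiment, and entities.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

transcript = ["We agreed to ship the beta next Friday and revisit pricing later."]

key_phrases = client.extract_key_phrases(transcript)[0].key_phrases
sentiment = client.analyze_sentiment(transcript)[0].sentiment
entities = [e.text for e in client.recognize_entities(transcript)[0].entities]

print("Key phrases:", key_phrases)
print("Sentiment:", sentiment)
print("Entities:", entities)
```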
## Inspiration One of our favorite content creators recently had their content demonetized by YouTube. These creators struggle day and night to create the best content for their fans, and depend on these streams of revenue to continue creating content. However, in some scenarios such as these, creators become disenfranchised by existing platforms. We're trying to fix that by letting users decide what creators' content is worth by leaving sponsorship up for auction to a limited number of people, so that creators can interact more closely with their sponsors as well as have a more reliable revenue stream to continue creating content for their fans. ## What it does We've developed a social media app that creates a marketplace for creators and their fans. Users can post videos and bid for sponsorship of their favorite creators for access to exclusive content. Creators specify a limited number of spots for sponsorship and the market decides how much the sponsorships are worth (and thus how much content creators themselves make). Such a system results in purer motivations, fairer compensations, and much better content. ## How we built it We built our app using React Native front-end with Node.js server that interacts with a MongoDB database. Additionally, we store and stream videos using the eluv.io blockchain platform. ## What's next for choco 1. utilize smart contracts to manage user access levels 2. implement weekly auction for sponsorships 3. improve current interface in terms of user experience 4. inject video recommendation engine
## Inspiration The post-COVID era has increased the number of in-person events and the need for public speaking. However, more individuals are anxious about publicly articulating their ideas, whether in a class presentation, a technical workshop, or preparation for their next interview. It is often difficult for audience members to catch the true intent of the presenter, so key factors including tone of voice, verbal excitement and engagement, and physical body language can make or break the presentation. A few weeks ago during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of your intentions, this can leave a bad taste in the audience's mouths. As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while receiving real-time feedback. ## Purpose Speech Master aims to provide a practice platform for presentations with real-time feedback that captures details about your body language and verbal expressions. In addition, presenters can invite real audience members to a practice session, where those audience members can provide real-time feedback that the presenter can use to improve. Presentations are recorded and saved for later reference, so presenters can go back and review feedback from the ML models as well as live audiences. They are presented with a user-friendly dashboard to cleanly organize their presentations and review them before upcoming events. After each practice presentation, the data aggregated during the recording is processed to generate a final report. The final report includes the most common emotions expressed verbally as well as times when the presenter's physical body language could be improved. The timestamps are also saved to show the presenter when the alerts were raised and, with the video playback, what might have caused them in the first place. ## Tech Stack We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development. We deployed the application on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website in [Figma](https://www.figma.com/) and later styled it with [TailwindCSS](https://tailwindcss.com) to streamline the styling, allowing developers to put styling directly into the markup without the need for extra files. Code formatting and linting are maintained via [Prettier](https://prettier.io/) and [EsLint](https://eslint.org/). These tools were run on every commit by pre-commit hooks configured by [Husky](https://typicode.github.io/husky/). [Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets, allowing us to provide emotional analysis to a presenter in near real-time. The analysis helps the presenter see the various emotions conveyed through tune, rhythm, and timbre.
Google and [Tensorflow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model, a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model, which allows for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of detecting 17 body keypoints at 30+ FPS on modern devices (a short sketch of running MoveNet appears at the end of this writeup). To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google, hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with WebSockets to enable the messaging feature, allowing audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and Github to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1). ## Challenges It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs, which are used for the Hume AI batch API calls, but the streaming API was more advantageous for providing real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API while also syncing audio with the webcam input. We also worked with Tensorflow for the first time, an end-to-end machine learning platform. As a result, we faced many hurdles when trying to set up Tensorflow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS, which were not options for us. Attempting to convert the vanilla JS to React proved to be more difficult due to the complexities of execution order and React's useEffect and useState hooks. Eventually, a working solution was found; however, it can still be improved to better its performance and bring fewer bugs. We originally wanted to use the YouTube API for video management, where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, but once that was resolved, more issues arose. We were unable to find documentation regarding a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube, as it did not provide a feasible solution, and found Cloudinary. ## Accomplishments We are proud of being able to incorporate Machine Learning into our applications for a meaningful purpose. We did not want to reinvent the wheel by creating our own models but rather use the existing and incredibly powerful models to create new solutions.
Although we did not hit all the milestones we were hoping to achieve, we are still proud of the application that we were able to make in such a short amount of time, and of being able to deploy the project as well. Most notably, we are proud of our Hume AI and Tensorflow integrations, which took our application to the next level. Those two features took the most time, but they were also the most rewarding, as in the end we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely. ## What we learned Each of the developers learned something valuable, as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB made sessions and general usage simple and user-friendly. Interfacing with CockroachDB caused barely any problems, and it was a powerful tool to work with. We also expanded our knowledge of WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us new powers that WebSockets have, both when used internally within the application and with external APIs, and how they can enable real-time analysis. ## Future of Speech Master The first step for Speech Master will be to shrink the codebase. Currently, there is a lot of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness: due to the lack of time, many components appear strangely on different devices, throwing off the UI and potentially making the application unusable. Once the current codebase is restructured, we will be able to focus on optimization, primarily on the machine learning models and audio/visual handling. Currently, there are multiple audio and video streams being used to show webcam footage, stream footage to other viewers, and send data to Hume AI for analysis. By reducing the number of streams, we should expect to see significant performance improvements, with which we can upgrade our audio/visual streaming to use something more appropriate and robust. In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis, such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech. Given some initial parameters, Speech Master should hopefully be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and speech, ensuring that appropriate language is used for a given target audience, would drastically improve the way a presenter could prepare for a presentation.
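Since the MoveNet integration was one of our bigger hurdles, here is a hedged sketch of running the same model from Python via TensorFlow Hub. Speech Master actually uses the TensorFlow.js build in the browser, so treat this purely as an illustration of the model's inputs and outputs; the image file is a placeholder:

```python
# Run MoveNet Lightning on a single frame and return its 17 keypoints.
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def detect_keypoints(image_path):
    image = tf.io.decode_jpeg(tf.io.read_file(image_path))
    # Lightning expects a 192x192 int32 image batch.
    inputs = tf.expand_dims(image, axis=0)
    inputs = tf.cast(tf.image.resize_with_pad(inputs, 192, 192), dtype=tf.int32)
    outputs = movenet(inputs)
    # Shape [1, 1, 17, 3]: (y, x, confidence) for each of the 17 keypoints.
    return outputs["output_0"][0, 0].numpy()

keypoints = detect_keypoints("frame.jpg")
print(keypoints.shape)  # (17, 3)
```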
losing
## A cute game made in Pygame. Catch the apples to save up for hibernation! ## Apples fall from the top of the screen; catch them in your basket, using the A/S keys to move ## Python, using the pygame libraries ## We hate GIMP ## It works!!!!! ## Coordinating people is more difficult than we thought ## What's next for Hibernation Frenzy: difficulty levels, music, sound effects, and animations
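For anyone curious what the core loop of a game like this looks like, here is a stripped-down Pygame sketch: one falling apple, a basket moved with the A/S keys, and a score counter. It is an illustration rather than our actual game code, and the sizes, speeds, and colors are arbitrary:

```python
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))
clock = pygame.time.Clock()

basket = pygame.Rect(180, 270, 60, 20)
apple = pygame.Rect(random.randint(0, 380), 0, 20, 20)
score = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()
    if keys[pygame.K_a]:
        basket.x = max(0, basket.x - 5)       # move left
    if keys[pygame.K_s]:
        basket.x = min(340, basket.x + 5)     # move right

    apple.y += 4                              # gravity
    if apple.colliderect(basket) or apple.y > 300:
        score += apple.colliderect(basket)    # only count caught apples
        apple.topleft = (random.randint(0, 380), 0)

    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (200, 40, 40), apple)
    pygame.draw.rect(screen, (160, 110, 60), basket)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
print("Apples caught:", score)
```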
## Inspiration The project draws its inspiration from the enchanting world of Harry Potter, specifically the living paintings that add a magical touch to the walls of Hogwarts. This sparked the idea to transform mundane home assistants into something more interactive and lively. The goal was to make everyday interactions with technology not just functional but also entertaining, making our digital companions feel like a more integral and responsive part of our homes. ## What it does The essence of this project is to bring to life a collection of avatars, each with their unique theme, designed to either automate various household chores or provide a warm greeting. These avatars are not mere static figures; instead, they embody the concept of interactive assistants that can adapt and respond to the homeowner's presence and commands, making everyday tasks more engaging and enjoyable. ## How we built it The creation of this project involved the integration of several technologies and platforms. The graphical user interface was developed using Pygame, a choice that, while challenging due to our initial unfamiliarity, offered the flexibility we needed. Raspberry Pi devices were employed both for facial-detection and as screens to display our themed avatars. Laptops served a similar purpose, enhancing the project's accessibility and integration into the home environment. Communication with the motor and light was facilitated through Arduino boards, with serial communication bridging the gap between the Arduinos and Raspberry Pi's. A central server, established using Flask, played a pivotal role in orchestrating the interactions between the different components, ensuring a seamless experience. ## Challenges we ran into One of the primary hurdles we encountered was our lack of experience with Pygame. Venturing into uncharted territory was daunting, yet it was a deliberate choice to push the boundaries of our technical skills and explore new possibilities. This challenge was a testament to our commitment to innovation and our willingness to learn and adapt in the face of difficulties. ## Accomplishments that we're proud of Among the highlights of this project are the themes and the aesthetic appeal of our avatars. Crafting these characters with attention to detail and a keen sense of design was not only a creative endeavour but also a technical achievement. We are immensely proud of the personality and life we were able to infuse into each avatar, making them not just tools but companions within the home. ## What we learned This project was a profound learning experience, deepening our understanding of Pygame and familiarizing us with Flask for server management. We navigated the complexities of serial communication and message passing between various scripts and hardware components, gaining insights into the intricate dance of coordination required to bring our vision to life. ## What's next for home buddies Looking ahead, we are excited about the potential of leveraging Large Language Models (LLMs) to enhance the capabilities of our avatars. Our ambition is to evolve these Home Buddies into more general-purpose assistants, capable of undertaking a wider array of tasks and interactions. This next step will not only expand their utility but also bring us closer to a future where our homes are not just smart but also intuitively responsive and genuinely interactive.
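To make the orchestration concrete, here is a rough Python sketch of the kind of glue described above: a Flask endpoint that relays an avatar action to an Arduino over serial. The port name, baud rate, and command bytes are hypothetical placeholders rather than our actual protocol:

```python
import serial
from flask import Flask, jsonify

app = Flask(__name__)
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # adjust per device

COMMANDS = {"wave": b"W\n", "light_on": b"L\n", "light_off": b"O\n"}

@app.route("/avatar/<action>", methods=["POST"])
def avatar_action(action):
    if action not in COMMANDS:
        return jsonify({"ok": False, "error": "unknown action"}), 400
    arduino.write(COMMANDS[action])   # forward to the motor/light board
    return jsonify({"ok": True, "action": action})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```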
## Inspiration Every project aims to solve a problem and address people's concerns. When we walk into a restaurant, we are often frustrated that so few photos are printed on the menu, yet we are always eager to find out what a dish looks like. Surprisingly, including a nice-looking picture alongside a food item increases sales by 30%, according to Rapp. So it's a big inconvenience for customers if they don't understand the name of a dish. This is the problem we are aiming at, and this is where we come in! We want to create a better impression on every customer and build a more customer-friendly restaurant experience. We want every person to immediately know what they would like to eat and to get a first impression of a specific dish in a restaurant. ## How we built it We mainly used ARKit, MVC, and various APIs to build this iOS app. We first start by entering an AR session, and then we crop the image programmatically to feed it to OCR from Microsoft Azure Cognitive Services. It recognizes the text from the image, though not perfectly. We then feed the recognized text to Azure's Spell Check to further improve the quality of the text. Next, we used the Azure Image Search service to look up the dish image from Bing, and then we used Alamofire and SwiftyJSON to fetch the image. We created a virtual card using SceneKit and placed it above the menu in the AR view. We used Firebase as the backend database and for authentication. We built some interactions between the virtual card and users so that users can see more information about the ordered dishes. ## Challenges we ran into We ran into various unexpected challenges when developing Augmented Reality and using APIs. First, there is very little documentation about how to use Microsoft's APIs in iOS apps. We learned how to use third-party libraries for building HTTP requests and parsing JSON files. Second, we had a really hard time understanding how Augmented Reality works in general, and how to place a virtual card within SceneKit. Last, we were challenged to develop the same project as a team! It was the first time each of us was pushed to use Git and GitHub, and we learned so much about branches and version control. ## Accomplishments that we're proud of Having learned Swift and iOS development for only one month, we created our very first AR app. Taking on such a difficult, high-tech field was a big challenge, and that is what we are most proud of. In addition, we implemented lots of APIs and created a lot of "objects" in the AR world, and they work well together. We encountered a few bugs during development, but we all tried to fix them. We're proud of combining some of the most advanced technologies in software, such as AR, cognitive services, and computer vision. ## What we learned During development, we learned how to create our own AR model, what the structure of an ARScene is, and how to combine different APIs to achieve our main goal. First of all, we enhanced our ability to code in Swift, especially for AR. Creating objects in the AR world taught us the tree structure of the AR scene and the relationships between parent nodes and their child nodes. What's more, we got to learn Swift more deeply, specifically its MVC model. Last but not least, the bugs taught us how to solve problems as a team and how to reduce the chance of writing buggy code next time. Most importantly, this hackathon showed us the strength of teamwork.
## What's next for DishPlay We want to build more interactions with ARKit, including displaying a collection of dishes on a 3D shelf, or cool animations that let people see how their favorite dishes were made. We also want to build a large-scale database for comments, ratings, and any other related information about dishes! We are happy that Yelp and OpenTable bring us closer to the restaurants. We are excited about our project because it will bring us closer to our favorite food!
losing
## Inspiration We decided to create Squiz out of a desire to address the struggles students face in revision, such as difficulty digesting extensive study materials and a lack of opportunities to test their understanding of course topics. Recognizing the potential of ChatGPT, we aimed to create a web app that simplifies complex course materials and enhances students' revision process. ## What it does Squiz revolutionizes the revision process by converting dense study materials into interactive quizzes. Using ChatGPT, it intelligently generates personalized multiple-choice questions, making revision more engaging and effective. The app's customization options allow users to choose the difficulty level and number of questions, and to save the quiz for future revision. By transforming traditional study sessions into interactive quizzes, Squiz not only saves time but also enhances active learning, improving retention for a more efficient and enjoyable revision experience. ## How we built it This project boasts a captivating mobile-friendly web app, with React.js at the forefront for seamlessly transitioning between pages and incorporating advanced features. CSS takes center stage to elevate the user experience with captivating animations and transitions. On the backend, Node.js and Express.js efficiently manage requests while seamlessly integrating with the powerful OpenAI API. Collaboration is done through the extensive use of Git among developers, ensuring a smooth and synchronized workflow within the GitHub repository. The interface was designed with cute graphics, including a squid mascot, and vibrant colors to offer an engaging user experience and an interesting learning process. ## Challenges we ran into Our main struggle in the hackathon was figuring out how to make a button work. This button lets users upload PDFs, and we convert the text into a special file that OpenAI uses to create quiz questions (a rough sketch of this generation step follows the writeup). We relied on the pdfjs-dist library for this, but its documentation was incomplete, making things tough. Despite the challenges, we tackled it by experimenting step by step and trying different ways to fix issues. Eventually, we got it working, which was absolutely rewarding. Another challenge was the tight time limit of this hackathon. This constraint underscored the importance of efficient collaboration, pushing us to divide our work wisely, optimize processes, and make collective decisions swiftly. Through coordinated efforts, we overcame this challenge, highlighting the importance of teamwork and adaptability in project execution. ## Accomplishments that we're proud of 1. We used the useContext hook for the first time. 2. We developed more than 90% of the Hi-Fi prototype designed by the UX designer in 24 hours! 3. We designed an original cartoon squid logo, which established brand personality and consistency. ## What we learned This hackathon experience taught us the importance of adaptability and innovation in educational technology. We learned to harness the power of natural language processing to cater to diverse learning styles. Building this tool deepened our understanding of AI's potential to enhance the educational experience. Working on Squiz also enhanced our teamwork skills, such as communication and collaborative problem-solving. We discovered that effective teamwork is as crucial as technical expertise in delivering successful projects.
## What's next for Squiz We would like to further develop the History function of the app, which allows users to save the quizzes they have completed.
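As an illustration of the generation step, here is a short Python sketch of prompting OpenAI for multiple-choice questions. Our actual backend is Node.js/Express, so the model choice, prompt wording, and JSON handling here are assumptions rather than our production code:

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_quiz(course_text, n_questions=5, difficulty="medium"):
    prompt = (
        f"Create {n_questions} {difficulty} multiple-choice questions from the "
        "following study material. Return JSON: a list of objects with "
        "'question', 'choices' (4 options) and 'answer'.\n\n" + course_text
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.3,
    )
    # In practice the response may need cleanup before it parses as JSON.
    return json.loads(resp.choices[0].message.content)

# quiz = generate_quiz(extracted_pdf_text)
# print(quiz[0]["question"], quiz[0]["choices"])
```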
## Inspiration The inspiration for this project stems from the well-established effectiveness of focusing on one task at a time, as opposed to multitasking. In today's era of online learning, students often find themselves navigating through various sources like lectures, articles, and notes, all while striving to absorb information effectively. Juggling these resources can lead to inefficiency, reduced retention, and increased distraction. To address this challenge, our platform consolidates these diverse learning materials into one accessible space. ## What it does A seamless learning experience where you can upload and read PDFs while having instant access to a chatbot for quick clarifications, a built-in YouTube player for supplementary explanations, and a Google Search integration for in-depth research, all in one platform. But that's not all - with a click of a button, effortlessly create and sync notes to your Notion account for organized, accessible study materials. It's designed to be the ultimate tool for efficient, personalized learning. ## How we built it Our project is a culmination of diverse programming languages and frameworks. We employed HTML, CSS, and JavaScript for the frontend, while leveraging Node.js for the backend. Python played a pivotal role in extracting data from PDFs. In addition, we integrated APIs from Google, YouTube, Notion, and ChatGPT, weaving together a dynamic and comprehensive learning platform. ## Challenges we ran into None of us were experienced in frontend frameworks. It took a lot of time to align various divs, and we also struggled with working with data (from fetching it to displaying it on the frontend). Also, our fourth teammate couldn't be present, so we were left with the challenge of working as a three-person team. ## Accomplishments that we're proud of We take immense pride not only in completing this project, but also in realizing the results we envisioned from the outset. Despite limited frontend experience, we've managed to create a user-friendly interface that integrates all features successfully. ## What we learned We gained valuable experience in full-stack web app development, along with honing our skills in collaborative teamwork. We learned a lot about using APIs, and a lot of prompt engineering was required to get the desired output from the ChatGPT API. ## What's next for Study Flash In the future, we envision expanding our platform by incorporating additional supplementary resources, with a laser focus on a specific subject matter.
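Since Python handled our PDF extraction, here is a small sketch of that step using the pypdf library. The file name is a placeholder, and the real pipeline also cleans and chunks the text before it reaches the chatbot:

```python
from pypdf import PdfReader

def extract_pdf_text(path):
    reader = PdfReader(path)
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n".join(pages)

text = extract_pdf_text("lecture_notes.pdf")
print(text[:500])  # preview the first 500 characters before sending downstream
```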
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page so that all the tools are accessible on one screen and transitioning between them is easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions (a tiny sketch of the throttling idea follows this writeup). ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. We want to add more relevant tools and widgets and expand into other fields of work to grow our user demographic, and to include interface customization options so users can personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
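The broadcast-throttling idea can be summarized language-agnostically. Here is a tiny Python sketch of the concept (our actual fix lives in the Node.js/Socket.IO layer): coalesce rapid updates and emit only the latest state at a fixed interval:

```python
import time

class ThrottledBroadcaster:
    def __init__(self, emit, interval=0.05):
        self.emit = emit            # e.g., a function that sends to all clients
        self.interval = interval
        self._last_sent = 0.0
        self._pending = None

    def update(self, payload):
        self._pending = payload     # keep only the most recent state
        now = time.monotonic()
        if now - self._last_sent >= self.interval:
            self.emit(self._pending)
            self._pending = None
            self._last_sent = now

broadcaster = ThrottledBroadcaster(emit=lambda p: print("emit", p), interval=0.05)
for i in range(1000):               # a burst of whiteboard stroke updates
    broadcaster.update({"stroke": i})
    time.sleep(0.001)
```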
losing
## Inspiration Enabling Accessible Transportation for Those with Disabilities AccessRide is a cutting-edge website created to transform the transportation experience for those with impairments. We want to offer a welcoming, trustworthy, and accommodating ride-hailing service that is suited to the particular requirements of people with mobility disabilities, since we are aware of the special obstacles they encounter. ## What it does Our goal is to close the accessibility gap in the transportation industry and guarantee that everyone has access to safe and practical travel alternatives. Using the AccessRide app, we link passengers with disabilities to skilled, sympathetic drivers who have been trained to offer specialised assistance and fulfill their particular needs. * Accessibility: The app focuses on ensuring accessibility for passengers with disabilities by offering vehicles equipped with wheelchair ramps or lifts, spacious interiors, and other necessary accessibility features. * Specialized Drivers: The app recruits drivers who are trained to provide assistance and support to passengers with disabilities. These drivers are knowledgeable about accessibility requirements and are committed to delivering a comfortable experience. * Customized Preferences: Passengers can specify their particular needs and preferences within the app, such as requiring a wheelchair-accessible vehicle, additional time for boarding and alighting, or any specific assistance required during the ride. * Real-time Tracking: Passengers can track the location of their assigned vehicle in real time, providing peace of mind and ensuring they are prepared for pick-up. * Safety Measures: The app prioritizes passenger safety by conducting driver background checks, ensuring proper vehicle maintenance, and implementing safety protocols to enhance the overall travel experience. * Seamless Payment: The app offers convenient and secure payment options, allowing passengers to complete their transactions electronically and reducing the need for physical cash handling. ## How we built it We built it using Django, PostgreSQL, and Jupyter Notebook for driver selection. ## Challenges we ran into Ultimately, the business impact of AccessRide stems from its ability to provide a valuable and inclusive service to people with disabilities. By prioritizing their needs and ensuring a comfortable and reliable transportation experience, the app can drive customer loyalty, attract new users, and make a positive social impact while growing as a successful business. To maintain quality service, AccessRide includes a feedback and rating system. This allows passengers to provide feedback on their experience and rate drivers based on their level of assistance, vehicle accessibility, and overall service quality. Implementing this was one of the more challenging parts of the event. ## Accomplishments that we're proud of We are proud that we completed our project, and we look forward to developing more projects. ## What we learned We learned the concepts of Django and PostgreSQL. We also learned several machine learning algorithms and implemented them as well. ## What's next for Accessride-Comfortable ride for all abilities In conclusion, AccessRide is an innovative and groundbreaking project that aims to transform the transportation experience for people with disabilities. By focusing on accessibility, specialized driver training, and a machine learning algorithm, the app sets itself apart from traditional ride-hailing services.
It creates a unique platform that addresses the specific needs of passengers with disabilities and ensures a comfortable, reliable, and inclusive transportation experience. ## Your Comfort, Our Priority "Ride with Ease, Ride with Comfort“
## Off The Grid Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app # Inspiration Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wifi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better. # Technology Used Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access. # Challenges Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing memory of the bit-maps and designing an easy-to-use and beautiful canvas # Team Members Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project!
## Inspiration What is the point of career fairs if recruiters tell potential candidates to just apply online, often with no response back? Recruiters are sick of the old-fashioned system of paper-copy resumes, in which they must manually scan each application, and individually enter data into their recruitment database (not to mention those long portfolio URLs!). Online application lack the personal interaction with passionate candidates that cannot be encapsulated on a piece of paper. The goal of the project was to simplify and speed up the process between meeting a potential intern/employee and getting that first interview offer to them. ## What it does Candidates are able to enter information about themselves, such as a picture, resume URLs, LinkedIn URLs, and their top three projects. From there, the information is bundled into a QR Code which recruiters can quickly and easily scan. Recruiters can also add additional comments such as "This student is looking for a Fall 2016 internship" to each scanned candidate. Recruiters can then review all the Candidates scanned and view them in a list. ## How we built it The APIs for saving candidate information, authentication, authorization, are all built using Node.js. The front end of the WebApp portion of the stack was done using html5, jQuery, jade, and css. The Android app uses the REST APIs from the Nodejs web server. ## Challenges we ran into * Finding points of integration between the Android app and the Webstack * Dependencies from the Android app on the REST APIs ## Accomplishments that we're proud of * Implementation of QR code scanning and recognition on both Android and Webapp * POST and GET calls successful from Android and Webapp * LinkedIn integration on Android * Teamwork ## What we learned * The current recruitment process is inefficient and discouraging for both recruiters and candidates * This is the first time many of us has used frameworks such as Node.js and jade * Integrating a webApp with an Android app to create a multi-purpose platform * QR Codes are awesome! ## What's next for Just Choose Me Future iterations would include: 1. Candidate List Search, Sort, and Download functionalities - A recruiter should be able to search his/her candidates that they have scanned, sort by a particular parameter (eg. Date), and download the candidate list in his/her preferred format to continue the recruitment process (eg. Download candidate information as JSON file to be inputted to their applicant tracking system) 2. Organization Account – A company should be able to create a Just Choose Me account, including a company profile in which multiple recruiters can be a part of. Recruiters under the same organization will be able to view all candidates that have connected to a recruiter under that organization. Organizations may choose to post announcements or new job postings, in which potential candidates can subscribe to. 3. Interest tags – Tags/keywords candidates can add to his/her profile to personalize what jobs you are looking for, and what they are interested in. Recruiters can use tag searching features to search for candidates. 4. Candidate selection - Recruiters that want to get in touch with certain candidates may directly contact them via the app or through other means by clicking on their name in the list view.
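To show how little it takes to pack a candidate profile into a QR code, here is an illustrative Python sketch. Our stack uses Node.js and Android, so the library, field names, and output file below are stand-ins rather than our implementation:

```python
import json
import qrcode

candidate = {
    "name": "Ada Lovelace",
    "resume": "https://example.com/ada/resume.pdf",
    "linkedin": "https://linkedin.com/in/ada",
    "projects": ["Analytical Engine notes", "Bernoulli program", "Compiler demo"],
}

img = qrcode.make(json.dumps(candidate))   # encode the profile as JSON
img.save("candidate_qr.png")               # recruiters scan this image
```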
winning
## Inspiration GET RICH QUICK! ## What it does Do you want to get rich? Are you tired of the man holding you down? Then WeakLink.Ai is for you! Our app comes equipped with predictive software to suggest the most beneficial stocks to buy based on your preferences. Simply put, a personal stockbroker in your pocket. ## How we built it The WeakLink.Ai front end is built using the Dash framework for Python. Partnered transactions are performed with the assistance of Standard Library, while our backend calculation engine uses modern machine learning techniques to decide the best time to buy or sell a specific stock. Confirmation is sent to the user's mobile device via Twilio, and upon confirmation the workflow executes the buy or sell transaction. The backend engine was custom-built in Python by one of our engineers. ## Challenges we ran into It was difficult to scrape the web for precise data in a timely and financially efficient fashion. It was very challenging to integrate Blockstack into a full Python environment. The front-end design was reformatted several times. There were some learning curves adjusting to APIs we had never seen or used before, and finding financially efficient ways to use some of those APIs was hard. ## Accomplishments that we're proud of Despite the various challenges, we are proud of our project. The front end was more visually appealing than anticipated, and the transition from backend calculations to visual inspection was relatively seamless. This was our first time working with each other and we had very good synergy: we were able to divide up the work and support one another along the way, with each of us touching every aspect of the project. ## What we learned We learned about the various APIs available, as well as some of their limitations. We discovered that an open-source API is often more helpful than a closed-source black box. We also learned a lot about data security via Blockstack. Lastly, we learned about various ways to interpret and analyze stocks in a quantitative fashion. ## What's next for WeakLink.Ai There is a lot of work left for us. The most immediate priority would be to set up trend analysis based on the user's historical data, followed by more customization options. We also want a place for users to describe their goals so our machine learning algorithm can take that information into account and recommend actions that are in their best interest.
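Here is a hedged Python sketch of the Twilio confirmation step. The credentials, phone numbers, and message wording are placeholders; in WeakLink.Ai the actual workflow is orchestrated through Standard Library:

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def send_trade_confirmation(to_number, ticker, action, shares):
    body = (f"WeakLink.Ai: confirm {action.upper()} {shares} shares of "
            f"{ticker}? Reply YES to execute.")
    message = client.messages.create(
        body=body,
        from_="+15551234567",   # your Twilio number
        to=to_number,
    )
    return message.sid

# send_trade_confirmation("+15557654321", "AAPL", "buy", 10)
```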
## ✨ Inspiration Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are very rarely taught in school, and in third-world countries even less, though, if used right, investing can help many people rise above the poverty line. We seek to help students and adults learn more about stocks and what drives companies to gain or lose stock value, and to use that information to make more informed decisions. ## 🚀 What it does Users are guided to a search bar where they can search a company's ticker, for example "AAPL", and almost instantly they can see the stock price over the last two years as a graph, with green and red dots spread out on the line graph. When they hover over the dots, the green dots explain why there is a general increasing trend in the stock, with a news article to back it up, along with the price change from the previous day and what it is predicted to be. An image of the company also shows up beside the graph. ## 🔧 How we built it When a user enters a stock name, the app accesses the Yahoo Finance API and gets the stock price data from the last three years. It takes the data and converts it to a JSON file served on localhost port 5000. Then, using Flask, it is exposed as our own API that populates the Chart.js graph with the stock data. Using a MATLAB server, we then take that data to find the areas of most significance (where the absolute value of the slope is over a certain threshold). Those data points are marked green if the slope is positive or red if it is negative. Those specific dates are fed to Gemini, which is asked why it thinks the stock shifted as it did and how the price changed on that day. At the same time, Gemini takes another request for a short phrase that makes it easy for the JSON search API to find a photo of that company, which is then shown on screen. ## 🤯 Challenges we ran into Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. As well, getting stock data to a MATLAB server took a lot of time, as it was the first time any of us had used it. POST and fetch calls were new to us and took a lot of time to get used to. ## 🏆 Accomplishments that we're proud of Connecting a prompt to a well-crafted stocks portfolio. Learning MATLAB in a time crunch. Connecting all of our APIs successfully. Making a website that we believe has serious positive implications for the world. ## 🧠 What we learned MATLAB integration, Flask integration, and the Gemini API. ## 🚀 What's next for StockSee * Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way. * Making a small questionnaire on different parts of the stocks to ask whether it is a good time to buy. * Using modern portfolio theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
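Below is a rough sketch of the stock-history endpoint and slope flagging described above. It assumes the community `yfinance` package as a stand-in for "the Yahoo Finance API", and it performs the significance check in Python, whereas the team ran that step on a MATLAB server; the 3% threshold is an illustrative value.

```python
# Sketch of the data endpoint: fetch price history, flag dates where the
# day-over-day move exceeds a threshold, and serve JSON for the Chart.js
# front end. yfinance and the threshold are assumptions, not the team's stack.
from flask import Flask, jsonify
import yfinance as yf

app = Flask(__name__)
SLOPE_THRESHOLD = 0.03  # a 3% day-over-day move counts as "significant" (assumed)

@app.route("/stock/<ticker>")
def stock_history(ticker):
    closes = yf.Ticker(ticker).history(period="2y")["Close"]
    points, prev = [], None
    for date, price in closes.items():
        flag = None
        if prev:
            change = (price - prev) / prev
            if abs(change) > SLOPE_THRESHOLD:
                flag = "green" if change > 0 else "red"
        points.append({"date": date.strftime("%Y-%m-%d"),
                       "close": round(float(price), 2),
                       "flag": flag})
        prev = price
    return jsonify(points)

if __name__ == "__main__":
    app.run(port=5000)
```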
## Inspiration We have to make a lot of decisions, all the time- whether it's choosing your next hackathon project idea, texting your ex or not, writing an argumentative essay, or settling a debate. Sometimes, you need the cold hard truth. Sometimes, you need someone to feed into your delusions. But sometimes, you need both! ## What it does Give the Council your problem, and it'll answer with four (sometimes varying) AI-generated perspectives! With 10 different personalities to choose from, you can get a bunch of (imaginary) friends to weigh in on your dilemmas, even if you're all alone! ## How we built it The Council utilizes OpenAI's GPT 3.5 API to generate responses unique to our 10 pre-defined personas. The UI was built with three.js and react-three-fiber, with a mix of open source and custom-built 3D assets. ## Challenges we ran into * 3D hard * merge conflict hard * Git is hard ## Accomplishments that we're proud of * AI responses that were actually very helpful and impressive * Lots of laughs from funny personalities * Custom disco ball (SHEEEEEEEEESH shoutout to Alan) * Sexy UI (can you tell who's writing this) ## What we learned This project was everyone's first time working with three.js! While we had all used OpenAI for previous projects, we wanted to put a unique spin on the typical applications of GPT. ## What's next for The Council We'd like to actually deploy this app to bring as much joy to everyone as it did to our team (sorry to everyone else in our room who had to deal with us cracking up every 15 minutes)
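As a rough illustration of the persona idea described above, the sketch below generates one answer per pre-defined personality with GPT-3.5 via the official `openai` Python client; the persona prompts and model name are illustrative, not the team's actual configuration.

```python
# Sketch: ask the same dilemma to several pre-defined personas and collect
# one GPT-3.5 answer per persona. Persona prompts here are made-up examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONAS = {
    "Brutally Honest": "You give the cold hard truth in two sentences.",
    "Hype Friend": "You enthusiastically feed into the user's delusions.",
}

def council_answers(dilemma: str) -> dict:
    answers = {}
    for name, persona_prompt in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": persona_prompt},
                {"role": "user", "content": dilemma},
            ],
        )
        answers[name] = response.choices[0].message.content
    return answers
```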
partial
# SmartKart An IoT shopping cart that follows you around, combined with a cloud-based Point of Sale and Store Management system. Provides a comprehensive solution to eliminate lineups in retail stores, a way to engage with customers without being intrusive, and a platform for detailed customer analytics. Featured by nwHacks: <https://twitter.com/nwHacks/status/843275304332283905> ## Inspiration We questioned the current self-checkout model. Why wait in line in order to do all the payment work yourself!? We are trying to make a system that alleviates much of the hardship of shopping: paying for and carrying your items. ## Features * A robot shopping cart that uses computer vision to follow you! * Easy-to-use barcode scanning (with an awesome booping sound) * Tactile scanning feedback * Intuitive user interface * Live product management system: view how your customers shop in real time * Scalable product database for large and small stores * Live cart geo-location, with theft prevention
## Inspiration An individual living in Canada wastes approximately 183 kilograms of solid food per year. This equates to $35 billion worth of food. A study that asked why so much food is wasted found that about 57% of people thought their food goes bad too quickly, while another 44% said the food was past its expiration date. ## What it does LetsEat is an assistant comprising a server, an app, and a Google Home Mini that reminds users of food that is going to expire soon and encourages them to cook it in a meal before it goes bad. ## How we built it We used a variety of leading technologies, including Firebase for the database and cloud functions, and the Google Assistant API with Dialogflow. On the mobile side, we built a system for effortlessly uploading receipts using Microsoft Cognitive Services optical character recognition (OCR). The Android app is written using RxKotlin, RxAndroid, and Retrofit on an MVP architecture. ## Challenges we ran into One of the biggest challenges that we ran into was fleshing out our idea. Every time we thought we had solved an issue in our concept, another one appeared. We iterated over our system design, app design, Google Action conversation design, and integration design over and over again for around the first 6 hours of the event. During development, we faced the learning curve of Firebase Cloud Functions, setting up Google Actions using Dialogflow, and setting up socket connections. ## What we learned We learned a lot more about how voice user interaction design works.
# nwhacks\_protestreport This website takes information about a recent protest and searches Twitter for relevant tweets from before and afterwards, then performs sentiment analysis using [VADER](https://github.com/cjhutto/vaderSentiment) to give you information about the effect of your protest on public opinion.
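A minimal sketch of the before/after comparison, using the `vaderSentiment` package linked above; the two tweet lists are assumed to come from a separate Twitter search step.

```python
# Sketch: score tweets from before and after the protest with VADER and
# compare the average compound sentiment.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def average_compound(tweets):
    """Mean VADER compound score (-1 most negative, +1 most positive)."""
    analyzer = SentimentIntensityAnalyzer()
    scores = [analyzer.polarity_scores(t)["compound"] for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0

def protest_effect(tweets_before, tweets_after):
    """Positive result means public sentiment improved after the protest."""
    return average_compound(tweets_after) - average_compound(tweets_before)
```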
winning
## Inspiration The biggest irony today is that, despite the advent of the internet, students and adults are more oblivious than ever to world events, and one can easily understand why. Of course, Facebook, YouTube, and League will be more interesting than reading the Huffington Post; coupled with the empirical decrease in the attention span of younger generations, humanity is headed towards disaster. ## What it does Our project seeks to address this crisis by informing people in a novel and exciting way. We create a fully automated news extraction, summarization, and presentation pipeline fronted by an AI anime-character news anchor. The primary goal of our project is to engage and educate an audience, especially younger students, with an original, entertaining venue for encountering reliable news that will not only foster intellectual curiosity but also motivate them to consider relevant issues today, from political events to global warming, more deeply. The animation is essentially a news anchor talking about several recent news stories, where each cluster of related news is discussed in a short blurb. ## Demo Video Explanation The demo video generally performs well, except for the first few seconds and the Putin/Taliban part. This is because those clusters are too small, so many clusters get merged together, as our k-means uses a fixed number of clusters. A quick fix is to simply calculate the internal coherence of each cluster and filter based on that. More advanced methods can be based on those described in the Scatter/Gather paper by Karger et al. ## How we built it ### News Summarization For extraction and summarization, our pipeline first web-scrapes news articles from trusted sources (CNN, New York Times, Huffington Post, Washington Post, etc.) to obtain the texts of recent news articles. Then it generates a compact summary of these texts using an in-house two-tier text summarization algorithm based on state-of-the-art natural language processing techniques. The algorithm first does an extractive summarization of individual articles. Next, it computes an overall 'topic feature' embedding. This embedding is used to cluster related news, and the final script is generated from these clusters using DL-based abstractive summarization. ### News Anchor Animation Furthermore, using the Google Cloud Text-to-Speech API, we generate speech with our custom pitch and preferences, and we then have code that generates a video using an image of any interesting, popular anime character. In order for the video to feel natural to the audience, we accounted for accurate lip and facial movement; calculations based on specific speech traits of the .wav file produce realistic videos that are not only educational but also humorous, and will entertain a younger audience. ### Audience Engagement Moreover, we wrote code using the Twitter API to automate uploading videos to our Twitter account, MinervaNews. This is integrated with the project's server, which uploads a video when the server starts and then automatically generates a new video every 24 hours using fresh articles from the sources. ## What's next for Minerva Daily News Reporter Our project will have a lasting impact on the education of an audience across all age groups. Anime is one great example of a venue that can broadcast news, and we selected anime characters as a humorous and eye-catching means to educate the younger audience.
Our project and its customization allow for the possibility of new venues and greater exploration of making education more fun and accessible to a vast audience. We hope to take our project further and add more animations as well as more features. ## Challenges Our compute platform, Satori, has an unusual architecture, IBM ppc64le, which makes package and dependency management a nightmare. ## What we learned 8 hours of planning = 24 hours in real time. ## GitHub <https://github.com/gtangg12/liszt>
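As a rough sketch of the clustering and coherence-filtering idea mentioned in the demo explanation, the snippet below clusters article summaries with k-means and drops low-coherence clusters. It uses TF-IDF vectors and scikit-learn as stand-ins for the team's in-house topic embeddings, and the coherence threshold is an assumed value.

```python
# Sketch: cluster article summaries, then keep only clusters whose internal
# coherence (mean pairwise cosine similarity) passes a threshold, so tiny or
# incoherent clusters don't get merged into one blurb.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def cluster_articles(summaries, n_clusters=8, min_coherence=0.15):
    vectors = TfidfVectorizer(stop_words="english").fit_transform(summaries)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    kept = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx) < 2:
            continue  # singleton clusters rarely make a good news blurb
        sims = cosine_similarity(vectors[idx])
        coherence = (sims.sum() - len(idx)) / (len(idx) * (len(idx) - 1))
        if coherence >= min_coherence:
            kept.append([summaries[i] for i in idx])
    return kept  # each kept cluster becomes one blurb in the anchor's script
```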
## Inspiration We're 4 college freshmen that were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). As all of us are currently learning online through various video lecture platforms, we found out that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format. ## What it does "Summaread" automatically captures lecture content using an advanced AI NLP pipeline to automatically generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google cloud API to generate a list of Key Topics and links to Wikipedia to encourage further exploration of lecture content. ## How we built it Our project is comprised of many interconnected components, which we detail below: **Lecture Detection** Our product is able to automatically detect when lecture slides change to improve the performance of the NLP model in summarizing results. This tool uses the Google Cloud Platform API to detect changes in lecture content and records timestamps accordingly. **Text Summarization** We use the Hugging Face summarization pipeline to automatically summarize groups of text that are between a certain number of words. This is repeated across every group of text previous generated from the Lecture Detection step. **Post-Processing and Formatting** Once the summarized content is generated, the text is processed into a set of coherent bullet points and split by sentences using Natural Language Processing techniques. The text is also formatted for easy reading by including “sub-bullet” points that give a further explanation into the main bullet point. **Key Concept Suggestions** To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide wikipedia links accordingly. Some examples of Key Concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases. **Front-End** The front end of our website was set-up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model. ## Challenges we ran into 1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points. 2. Our NLP model is quite large, which made it difficult to host on cloud platforms ## Accomplishments that we're proud of 1) Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques. 
2) Working on an unsolved machine learning problem (lecture simplification) 3) Real-time text analysis to determine new elements ## What we learned 1) First time for multiple members using Flask and doing web development 2) First time using Google Cloud Platform API 3) Running deep learning models makes my laptop run very hot ## What's next for Summaread 1) Improve our summarization model through improving data pre-processing techniques and decreasing run time 2) Adding more functionality to generated outlines for better user experience 3) Allowing for users to set parameters regarding how much the lecture is condensed by
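A minimal sketch of the chunk-and-summarize step described above, assuming the default Hugging Face summarization pipeline; the chunk size and length limits are illustrative values rather than Summaread's actual settings.

```python
# Sketch: split a lecture transcript into word-count chunks and summarize
# each chunk into a bullet with the Hugging Face summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")

def outline_from_transcript(transcript: str, words_per_chunk: int = 400) -> str:
    words = transcript.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)]
    bullets = []
    for chunk in chunks:
        result = summarizer(chunk, max_length=60, min_length=15, do_sample=False)
        bullets.append("- " + result[0]["summary_text"].strip())
    return "\n".join(bullets)  # aiming for roughly 10% of the original length
```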
## Inspiration Managing tasks can be a challenge for many, and particularly for those with ADHD the process can feel overwhelming, making it hard to stay on top of everything. For us, this struggle was all too familiar. As students who live with ADHD, we often found traditional task management tools lacking. They didn't offer the flexibility we needed to handle sudden changes, and they rarely broke down tasks into digestible, actionable steps, leaving us feeling overwhelmed and disorganized. ## What it does Our app is designed to break down complex projects into easy, manageable tasks and automatically schedule them into your calendar, whether your plans shift unexpectedly or you need to focus on one step at a time. Designed with the flexibility to rearrange and reschedule tasks easily, FoxList ensures you can adapt without missing a beat. Whether your day takes an unexpected turn or you're feeling overwhelmed by a large project, our app provides clear, step-by-step guidance, making it easier to focus and stay on track. We are building the tool we always needed, and we're excited to share it with you. ## How we built it Our first step was creating a task list, which would be the foundation we built on. We utilized React for the front end with the Next.js framework. Our team used Google's Gemini API to help break down user-inputted tasks in our application. ## Challenges we ran into As our entire team were beginners and this was our first hackathon, we had to learn all the tech stacks we used from scratch. We struggled with setting up the project environment and making sure all the proper packages were installed. Another struggle we faced was implementing APIs. It was our first time working with an AI API and there was a learning curve for all of us. We spent a large amount of time figuring out how to connect the API to the front-end elements. ## Accomplishments that we're proud of As a team of beginners, we had an ambitious vision to build a web application that could be useful for many others. Our project will extend beyond this hackathon. ## What we learned We learned that we should install dependencies and research the tech stack BEFORE the hackathon so we can spend all our time building our project. Task delegation is something we learned as well: tasks should be allocated and delegated as soon as possible. Our quick ideation phase was a great move; it gave us a lot of time to focus on how to implement instead of theorizing. Talking with the mentors was a big positive: it sped up development a lot and got us great advice quickly. ## What's next for FoxList We believe FoxList has the potential to make task management easier and more accessible for many people, especially those who struggle with traditional planning tools. Our goal is to keep building and refining features that help users break down complex tasks, adapt to changing schedules, and stay focused on what matters most. In the coming months, we plan to implement even more customization options, improve AI-driven task breakdowns, and enhance scheduling flexibility. We're excited about the future of FoxList and can't wait to see how it can support others in their journey to better productivity.
winning
## Inspiration Wanted to create something fun that would be a good use of Snapchat's SnapKit! Did not get to it, but the idea was that sharing quotes and good reads between friends could be pretty neat - recommending novels is as easy as using the app, which interfaces directly with Snapchat! It could also become a kind of Yelp for reading aficionados, with a whole sharing community, and could even become a book/e-book commerce market! ## What it does Allows you to save your personal favourite reads and write down thoughts like a diary. ## How I built it Android Studio! ## Challenges I ran into Wanted to use React Native, but that was not possible on the LAN here, and iteration times were slow with Android Studio. ## Accomplishments that I'm proud of Learned Android dev! ## What I learned React Native dev environment, Android Studio development, SnapKit ## What's next for ReadR
## Inspiration Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. We found our inspiration. The issue we discovered is that, due to the rise of modern technologies, movies and other films are more captivating than reading a boring book. We wanted to change that. ## What it does By implementing Google's Mobile Vision API, Firebase, IBM Watson, and Spotify's API, Immersify first scans text through our Android application using Google's Mobile Vision API. After the text is stored in Firebase, IBM Watson's Tone Analyzer deduces the emotion of the text. A dominant emotion score is then sent to Spotify's API, where the appropriate music is played for the user. With Immersify, text can finally be brought to life and readers can feel more engaged in their novels. ## How we built it On the mobile side, the app was developed using Android Studio. The app uses Google's Mobile Vision API to recognize and detect text captured through the phone's camera. The text is then uploaded to our Firebase database. On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson's Tone Analyzer API to determine the tone of each individual sentence within the paragraph. We then ran our own algorithm to determine the overall mood of the paragraph based on the different tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists play to match the mood of the text. ## Challenges we ran into Getting Firebase to cooperate with both our mobile app and our web app was difficult for the whole team. Querying the API took multiple attempts, as our POST request to IBM Watson was out of sync. In addition, the text recognition function in our mobile application did not perform as accurately as we anticipated. ## Accomplishments that we're proud of Some accomplishments we're proud of are successfully using Google's Mobile Vision API and IBM Watson's API. ## What we learned We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we had never worked with in the past. Aside from the technical aspects, as a team, we learned to collaborate to tackle all the tough challenges we encountered. ## What's next for Immersify The next step for Immersify is to incorporate this software with Google Glass. This would eliminate the two-step process of having to take a picture on an Android app and go to the web app to generate a playlist.
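A small sketch of the per-sentence tone aggregation described above: the sentence scores are assumed to already come back from IBM Watson's Tone Analyzer, and the tone-to-playlist mapping is purely illustrative.

```python
# Sketch of "our own algorithm": sum per-sentence tone scores, pick the
# dominant tone, and map it to a playlist. The mapping is a made-up example.
from collections import defaultdict

PLAYLIST_FOR_TONE = {  # assumed mapping, not the team's actual playlists
    "joy": "Upbeat Reading",
    "sadness": "Melancholy Strings",
    "fear": "Dark Ambient",
    "anger": "Intense Scores",
}

def dominant_mood(sentence_tones):
    """sentence_tones: list of dicts like {"joy": 0.7, "sadness": 0.1, ...}."""
    totals = defaultdict(float)
    for tones in sentence_tones:
        for tone, score in tones.items():
            totals[tone] += score
    return max(totals, key=totals.get)

def playlist_for_paragraph(sentence_tones):
    return PLAYLIST_FOR_TONE.get(dominant_mood(sentence_tones), "Neutral Focus")
```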
## Inspiration Despite it being the legal right of students to be included in the classroom, students with learning disabilities are often left out. To include students with learning disabilities, teachers have to create custom assignments based on student's Individualized Education Plan (IEP) that integrate with the flow of the larger class. However, teachers are low on time and resources to accommodate and include these students. In addition to lawsuits, this has led to student's with learning disabilities being left out and having exclusion---rather than inclusion---define their educational experience. We designed an application to automate accommodating students, curating interactive learning experiences from existing assignments that adjust to students' needs according to their IEP with minimal input from the teacher. See more about our team member's (Tyler) personal connection to our mission in the attached video. ## What it does modifai (pronounced modify) is a platform that automates transforming K-12 assignments to adjust to students with IEPs, turning plain assignments into guided interactive experiences that allow students to follow the class' curriculum without feeling left out while offloading burden off of the teacher. We take two documents: (1) A student's Individualized Learning Plan (IEP); and (2) an assignment a teacher made, and generate a personalized assignment according to the student's IEP (personal learning plan) that fits their unique requirements for learning. Given that having an assignment read aloud is one of the most common academic accommodations, students are also able to step through the modified assignment while hearing auto-generated narration along the way in their teacher's voice, with detection integrated to detect confusion and ensure attention and allowing the student to get questions answered by a fine-tuned LLM for the task when needed. ## How we built it * React * Flask * Hume (for detecting when clarification is needed for a student) * OpenAI API * OpenAI Fine-tuning ## Challenges we ran into * 70+ page long IEPs to effectively parse to generate custom assignment modifications from * Constructing a good dataset for fine-tuning * Choosing the right parts of the document in order to provide feedback * Creating document specific techniques for creating semantically coherent chunks to provide appropriate context to the large-language model. * Merge conflicts * Committing too fast (OPEN AI API keys) * Lots of experimentation leading to a disorganized code base ## Accomplishments that we're proud of We designed a retrieval pipeline for documents, often longer than 75 pages, using an abstractive technique inspired by RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) -- effectively incorporating our passion for AI research into a real world use case that makes the world a more equitable place. We also designed a pipeline to segment a text into digestible logical units, so that we can generate annotations, vocal narrations --- this way the student can interactively walk through the assignment with instruction and helpful hints read aloud. We believe the contribution of many apps that leverage language models (like ours) are in how well they reduce friction of use. We are incredibly proud of our user interface and believe it is a core component of the value of our system. Being able to provide so much functionality in a simple interface is a great achievement on top of the technical functionality we were able to create in 24 hours. 
Although we did not fully leverage synthetic data generation for fine-tuning, we developed a proof of concept for generating data to fine-tune GPT-3.5. This aimed to speed up our output generation while maintaining accuracy and producing more structured results. ## What we learned * How to integrate Hume! * To check `.gitignore` for `.env` * How to construct a data set for fine-tuning ## What's next for modifai.co We have built proof-of-concept for our startup modifai in 24 hours. We are eager to get this product into the hands of teachers to make the classroom a more equitable place to learn -- including students across ability levels and improving educational outcomes. The modifai team seeks additional resources to get the world closer to our mission of making academic accommodation easy. ## Who we are Tyler Katz: Masters student at Carnegie Mellon University studying Computational Biology, interning as a Software Engineer at Genentech. Julius Arolovitch: Incoming junior in Electrical and Computer Engineering & Robotics at Carnegie Mellon University with research interests in grasping and motion planning for manipulation. Summer intern at Johnson & Johnson working on Ottava. Quentin Romero Lauro: Junior CS major at University of Pittsburgh, working in the EPIC Data Lab at UC Berkeley on techniques to improve retrieval-augmented generation pipelines. Benjamin Kleyner: Sophomore CS major at Carnegie Mellon University, interning at Insitro as a software engineer.
partial
# Histovibes - A Journey Through Time ## Inspiration The inspiration for Histovibes arose from a shared yearning within our team for a transformative and engaging history education experience. As enthusiasts of the past, we often found traditional textbooks lacking in both interactivity and a modern touch. Drawing on the theme of nostalgia for the hackathon, we envisioned a platform that not only encapsulates the richness of historical narratives but also integrates cutting-edge technology to make learning history a thoroughly immersive and enjoyable endeavour. The desire to recreate the joy of studying history, coupled with a futuristic yet nostalgic user interface, fuelled our determination to craft Histovibes. This project is a testament to our collective passion for reshaping the way people perceive and interact with history, offering a dynamic and personalized learning journey that bridges the gap between the past and the present. ## What it does Histovibes isn't just a platform; it's a time capsule. Craft timelines, explore auto-generated quizzes, and engage in discussions. The fusion of a React frontend, Flask backend, and Okta authentication brings history to life. MongoDB stores the richness of the past, making Histovibes a dynamic, user-centric learning haven. It is designed to transform the way individuals learn and interact with history. At its core, Histovibes enables users to create and explore personalized timelines, allowing them to curate historical events that resonate with their interests. The platform goes beyond static content by incorporating dynamic features such as auto-generated quizzes and interactive discussions, fostering an engaging and participatory learning environment. With a clean and futuristic user interface that seamlessly blends nostalgia with innovation, Histovibes transcends conventional history textbooks, providing a captivating space for users to reminisce about the past while actively shaping their historical learning journey. Powered by a React frontend and a Flask backend, Histovibes leverages Okta for user authentication, MongoDB for data storage, and integrates advanced technologies like OpenAI and Cohere LLM to enhance the intelligence and interactivity of the platform. In essence, Histovibes redefines history education by combining the best of modern technology with the timeless allure of the past. ## How we built it We meticulously crafted Histovibes with the dexterity of a storyteller. React's elegance, Flask's resilience, and Okta's security dance seamlessly. MongoDB, our digital archive, ensures a smooth narrative flow. Histovibes is the symphony of technology playing in harmony. ### React The frontend, developed with React, embodies a futuristic and user-friendly interface, offering seamless navigation and a visually appealing design that evokes a sense of nostalgia. ### Flask Complementing this, the backend, powered by Flask, ensures robust functionality and efficient data handling. ### Okta and MongoDB The integration of Okta provides a secure and streamlined authentication process, while MongoDB serves as the dynamic storage solution for user-generated timelines, events, and discussions. ### OpenAI and Co:Here What sets Histovibes apart is its intelligent core, incorporating OpenAI and Cohere LLM to enhance the learning experience. OpenAI's large language models contribute to the creation of auto-generated quizzes, enriching the platform with dynamic assessments. 
Additionally, Cohere LLM adds sophistication to user discussions, offering context-aware insights. ## Challenges we ran into ### React-Flask Integration Throughout the development journey of Histovibes, our team encountered several formidable challenges that tested our problem-solving skills and collaborative spirit. One significant hurdle emerged during the integration of React and Flask, where ensuring seamless communication between the frontend and backend proved intricate. Designing the user interface, initially conceptualized on Figma, presented its own set of challenges as we navigated the transition from design to implementation using Flutter. Compromises were made to streamline user-friendly features within our time constraints, leading to a minimalistic design that balanced functionality and aesthetics. ### Linking Everything Together Linking various components of our tech stack posed another substantial challenge. Splitting responsibilities within the team, each working on different aspects, led to recurring merge conflicts during the linking process. Debugging this intricate linking system consumed a significant portion of our hackathon time, underscoring the importance of robust collaboration and version control. Despite these challenges, our team's perseverance prevailed, resulting in a cohesive and sophisticated Histovibes platform that seamlessly integrates React, Flask, Okta, MongoDB, OpenAI, and Cohere LLM. These challenges became valuable lessons, highlighting the intricate dance between technology, design, and teamwork in creating a dynamic history learning experience. ## Accomplishments that we're proud of Throughout the development journey of Histovibes, our team achieved a multitude of significant milestones that we take immense pride in. Foremost among these accomplishments is the seamless integration of OpenAI and Cohere LLM, which elevated the platform's intelligence and interactivity. Our implementation of OpenAI facilitated the creation of auto-generated quizzes, providing users with dynamic and engaging assessments. Simultaneously, the incorporation of Cohere LLM into user discussions offered context-aware insights, enriching the overall learning experience. The successful orchestration of these advanced technologies underscores our commitment to infusing innovation into history education. Our team also overcame challenges in UI design, striking a balance between functionality and aesthetics, resulting in a sleek and user-friendly interface. Looking back, Histovibes is not just a project; it's a testament to our dedication, innovation, and the successful execution of a vision that merges the best of technology with the timeless allure of history. ## What we learned The development journey of Histovibes unfolded as a profound learning experience for our team. Navigating the integration of React and Flask, we gained valuable insights into optimizing the synergy between frontend and backend technologies. The collaborative effort to link various components of our tech stack uncovered the importance of meticulous planning and communication within the team. We learned that debugging, especially when linking diverse technologies, can be a challenging but essential part of the development process. Overall, the journey of creating Histovibes equipped us with a versatile skill set, from tech-specific knowledge to effective problem-solving strategies, showcasing the immense growth and resilience of our team. ## What's next for Histovibes 1. 
**Content Expansion:** Broaden historical coverage with additional events, timelines, and cultural perspectives. 2. **Enhanced Quiz Functionality:** Develop adaptive learning algorithms for personalized quizzes based on user progress. 3. **Community Building:** Introduce discussion forums and collaborative projects to foster a vibrant user community. 4. **Mobile Optimization:** Ensure Histovibes is accessible on various devices, allowing users to learn on the go. 5. **Integration of Multimedia:** Enhance learning with videos, interactive maps, and images for a more immersive experience. 6. **User Feedback Mechanism:** Implement a system for user feedback to continuously improve and refine the platform. 7. **Educational Partnerships:** Collaborate with educational institutions to integrate Histovibes into formal curricula.
## Inspiration As we began thinking about potential projects to make, we realized that there was no real immersive way to speak to those that have impacted the world in a major way. It is just not as fun to look up Wikipedia articles and simply read the information that is presented there, especially for the attention deficient current generation. Thinking of ways to make this a little more fun, we came up with the idea of bringing these characters to life, in order to give the user the feeling that they are actually talking and learning directly from the source(s), the individual(s) that actually came up with the ideas that the users are interested in. In terms of the initial idea, we were inspired by the Keeling Curve, where we wanted to talk to Charles David Keeling, who unfortunately passed away in 2005, about his curve. ## What it does Our application provides an interactive way for people to learn in a more immersive manner about climate change or other history. It consists of two pages, the first in which the user can input a historical character to chat with, and the second to "time travel" into the past and spectate on a conversation between two different historical figures. The conversation utilizes voice as input, but also displays the input and the corresponding response on the screen for the user to see. ## How we built it The main technologies that we used are the Hume AI, Intel AI, Gemini, and VITE (a react framework). Hume AI is used for the text and voice generation, in order to have the responses be expressive, which would hopefully engage the user a bit more. Intel AI is used to generate images using Stable Diffusion to accompany the text that is generated, again to hopefully increase the immersiveness. Gemini is used to generate the conversations between two different historical figures, in the "time travel" screen. Finally, we used VITE to create a front end that merges everything together and provides an interface to the user to interact with the other technologies that we used. ## Challenges we ran into One challenge we faced was just with the idea generation phase, as it took us a while to polish the idea enough to make this an awesome application. We went through a myriad of other ideas, eventually settling in on this idea of interacting with historical figures, as we believed this would provide the best form of enrichment to a potential user. We also tried switching from Gemini to Open AI, but due to the way that the APIs are implemented, it was unfortunately not as easy to just drop-in replace Open AI everywhere Gemini was used. Thus, we decided that it was best to stick with Gemini, as it still does quite a good job at generating responses for what we require. Another challenge that we faced was the fact that it is quite difficult to manage conversations between different assistants, like for instance in the "time travel" page, where two different historical figures (two different assistants) are supposed to have a productive conversation. ## Accomplishments that we're proud of We are quite proud of the immersiveness of the application. It does really feel as if the user is speaking to the person in question, and not a cheap knockoff trying to pretend to be that person. The assistant is also historically accurate, and does not deviate off of what was requested, such as discussing topics that the historical figure has no possibility of having the knowledge of, such as events or discoveries after they passed away. 
In addition to this, we are also proud of the features that we managed to include in our final application, such as the ability to change the historical figure that the user wants to talk to, in addition to the "time travel" feature which allows for the user to experience how different historical figures would interact with each other. ## What we learned We would say that the most important skill that we learned was the art of working together as a team. When we had issues or were confused about certain parts of our application, talking through and explaining different parts proved to be quite an invaluable act to perform. In addition to this, we learned how to integrate various APIs and technologies, and making them work together in a seamless fashion in order to make a successful and cohesive application. We also learned the difficult process of coming up with the idea in the first place, especially one that is good enough to be viable. ## What's next for CLIMATE CHANGE IS BEST LEARNED FROM THE EXPERTS THEMSELVES The next steps would be to include more features, such as having a video feed that feels as if the user is video chatting with the historical figure, furthering the immersiveness of our application. It would also definitely be quite nice to figure out Open AI integration, and have the user choose the AI assistant they would like to use in the future.
**Inspiration** The inspiration behind Block Touch comes from the desire to create an interactive and immersive experience for users by leveraging the power of Python computer vision. We aim to provide a unique and intuitive way for users to navigate and interact with a simulated world, using their hand movements to place blocks dynamically. **What it does** Block Touch utilizes Python computer vision to detect and interpret hand movements, allowing users to navigate within a simulated environment and place blocks in a virtual space. The application transforms real-world hand gestures into actions within the simulated world, offering a novel and engaging user experience. **How we built it** We built Block Touch by combining our expertise in Python programming and computer vision. The application uses computer vision algorithms to analyze and interpret the user's hand movements, translating them into commands that control the virtual world. We integrated libraries and frameworks to create a seamless and responsive interaction between the user and the simulated environment. **Challenges we ran into** While developing Block Touch, we encountered several challenges. Fine-tuning the computer vision algorithms to accurately recognize and interpret a variety of hand movements posed a significant challenge. Additionally, optimizing the application for real-time responsiveness and ensuring a smooth user experience posed technical hurdles that we had to overcome during the development process. **Accomplishments that we're proud of** We are proud to have successfully implemented a Python computer vision system that enables users to control and interact with a simulated world using their hand movements. Overcoming the challenges of accurately detecting and responding to various hand gestures represents a significant achievement for our team. The creation of an immersive and enjoyable user experience is a source of pride for us. **What we learned** During the development of Block Touch, we gained valuable insights into the complexities of integrating computer vision into interactive applications. We learned how to optimize algorithms for real-time performance, enhance gesture recognition accuracy, and create a seamless connection between the physical and virtual worlds. **What's next for Block Touch** In the future, we plan to expand the capabilities of Block Touch by incorporating more advanced features and functionalities. This includes refining the hand gesture recognition system, adding new interactions, and potentially integrating it with virtual reality (VR) environments. We aim to continue enhancing the user experience and exploring innovative ways to leverage computer vision for interactive applications.
losing
## Inspiration All three teammates had independently converged on an idea of glasses with subtitles for the world around you. After we realized the impracticality of the idea (how could you read subtitles an inch from your eye without technology that we didn't have access to?) we flipped it around: instead of subtitles (with built-in translation!) that only you could see for *everybody else*, what if you could have subtitles for *you* that everyone else could see? This way, others could understand what you were saying, breaking barriers of language, distance, and physical impairments. The subtitles needed to be big so that people could easily read them, and somewhere prominent so people you were conversing with could easily find them. We decided on having a large screen in the front of a shirt/hoodie, which comes with the benefits of wearable tech such as easy portability. ## What it does The device has three main functions. The first is speech transcription in multiple languages, where what you say is turned into text and you can choose the language you're speaking in. The second is speech translation, which currently translates your transcribed speech into English. The final function is displaying subtitles, and your translated speech is displayed on the screen in the front of the wearable. ## How we built it We took in audio input from a microphone connected to a Raspberry Pi 5, which sends packets of audio every 100 ms to the Google Cloud speech-to-text API, allowing for live near real-time subtitling. We then sent the transcribed text to the Google Cloud translate API to translate the text into English. We sent this translated text to a file, which was read from to create our display using pygame. Finally, we sewed all the components into a hoodie that we modified to become our wearable subtitle device! ## Challenges we ran into There were no microphones, so we had to take a trip (on e-scooters!) to a nearby computer shop to buy microphones. We took one apart to be less bulky, desoldering and resoldering components in order to free the base components from the plastic encasing. We had issues with 3D printing parts for different components: at one point our print and the entire 3D printer went missing with no one knowing where it went, and many of our ideas were too large for the 3D printers. Since we attached everything to a hoodie, there were some issues with device placement and overheating. Our Raspberry Pi 5 reached 85 degrees C, and some adapters were broken due to device placement. Finally, a persistent problem we had was using Google Cloud's API to switch between recording different languages. We couldn't find many helpful references online, and the entire process was very complicated. ## Accomplishments that we're proud of We're proud of successfully transcribing text from audio from the taken-apart microphone. We were so proud, in fact, that we celebrated by going to get boba! ## What we learned We learned four main lessons. The first and second were that the materials you have access to can significantly increase your possibilities or difficulty (having the 7" OLED display helped a lot) but that even given limited materials, you still have the ability to create (when we weren't able to get a microphone from the Hardware Hub, we went out and bought a microphone that was not suited for our purposes and took it apart to make it work for us). 
The third and fourth were that seemingly simple tasks can be very difficult and time-consuming to do (as we found in the Google Cloud's APIs for transcription and translation) but also that large, complex tasks can be broken down into simple doable bits (the entire project: we definitely couldn't have made it possible without everyone taking on little bits one at a time). ## What's next for Project Tee In the future, we hope to make the wearable less bulky and more portable by having a flexible OLED display embedded in the shirt, and adding an alternative power source of solar panels. We also hope to support more languages in the future (we currently support five: English, Spanish, French, Mandarin, and Japanese) both to translate from and to, as well as a possible function to automatically detect what language a user is speaking. As the amount of language options increases, we will likely need an app or website as an option for people to change their language options more easily.
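A condensed sketch of the transcribe-then-translate loop described above, assuming the `google-cloud-speech` and `google-cloud-translate` Python clients; `audio_chunks` is a placeholder generator yielding the ~100 ms packets read from the microphone, and the display step is left to the pygame code mentioned earlier.

```python
# Sketch: stream mic packets to Google Cloud Speech-to-Text, then translate
# each final transcript to English for the subtitle display.
from google.cloud import speech, translate_v2 as translate

speech_client = speech.SpeechClient()
translate_client = translate.Client()

def subtitles(audio_chunks, spoken_language="es-ES"):
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=spoken_language,
    )
    streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=False)
    requests = (speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in audio_chunks)
    for response in speech_client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            if result.is_final:
                text = result.alternatives[0].transcript
                english = translate_client.translate(text, target_language="en")
                yield english["translatedText"]  # written to the file the display reads
```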
## Inspiration As first-year university students, we have found attending university lectures to be a frightening experience. Much different from high school, with classes of thousands of students it's much too easy to lose focus in lectures. One bad night of sleep can lead to valuable information being lost, and while missing one lecture may seem alright, wait until you see the content you missed on the midterm. That is why we decided to build an application capable of speech-to-text recognition, along with a bunch of other useful features such as text summarization and text translation, to help students understand lectures and recover from those days when they zone out in class. ## What it does Paying attention is hard. Only 6% of the world are native English speakers. heAR uses AR and NLP to help you take notes and understand other people * To help the world understand each other better * To improve the education of students * To help connect the world * So you can focus on the details that matter * So people can talk about the details that matter * To facilitate deeper human connections * To facilitate understanding * To facilitate communication ## How we built it In order to build our project, we first decided how we wanted our application to look and what features we would like to implement. That discussion led us to decide to add an augmented reality feature to our application, because we felt it would be more immersive and fun to see the summarized notes you take in AR. To build the UI/UX and augmented reality of the app, we used Unity and C#. For text summarization and text translation, we used Co:here's and Google Translate's APIs. Using Python, we built algorithms that take in paragraphs and either translate them, summarize them, or both. We decided to add the translation feature because in university, and also in real-life situations, not everyone speaks the same language, and having the option to understand what people are saying in your own language is very beneficial. ## Challenges we ran into A huge challenge we encountered was having Unity interact with our Python algorithms. The problem we faced was that our product runs on a mobile phone, and running Python on such a device is not really feasible, so we had to come up with a creative way to fix our situation. After some thought, we landed on the idea of creating a backend Python server using Flask that our C# code could make requests to, and vice versa, to retrieve the data we wanted. While the idea seemed very far-fetched at first, we slowly tackled the problem by dividing up the work, and eventually we were able to get the server running using Heroku. ## Accomplishments that we're proud of A huge accomplishment that we are very proud of is our working demo, because in our demo we essentially achieved every goal that we set at the beginning of the hackathon. From registering speech-to-text in Unity to having text summarization, we accomplished so much as a team and are very proud of our finished demo. As the project went on, we obviously wanted to add more and more, but the feeling of accomplishing our original goals is truly something we will cherish as a team. ## What we learned We have learnt so much from building this project; from improving our existing skills to learning new ones, we came to understand what it is like to work in a team environment.
Not only that, but for all of us, this is either our very first hackathon or first hackathon in person and so we have truly experienced what a hackathon really is and have learnt so much from industry professionals. ## What's next for heAR To be honest, we are not really sure what is next for heAR. We did plan to add more UI/UX and Co:here features and possibly will continue or maybe venture into another topic.
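A minimal sketch of the Flask endpoint that the Unity (C#) client could call, under the architecture described above; `summarize_with_cohere` and `translate_to` are hypothetical helpers standing in for the Co:here and Google Translate calls.

```python
# Sketch of the backend the Unity app talks to: accept text, summarize and
# translate it, return JSON. The two helpers are hypothetical placeholders
# for the actual Co:here and Google Translate API calls.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process_text():
    payload = request.get_json(force=True)
    text = payload.get("text", "")
    target = payload.get("target_language", "en")
    summary = summarize_with_cohere(text)        # hypothetical helper
    translated = translate_to(summary, target)   # hypothetical helper
    return jsonify({"summary": summary, "translated": translated})

def summarize_with_cohere(text: str) -> str:
    raise NotImplementedError  # placeholder for the actual Co:here call

def translate_to(text: str, target: str) -> str:
    raise NotImplementedError  # placeholder for the actual Google Translate call

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```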
## Inspiration Over **15% of American adults**, more than **37 million** people, are either **deaf** or have trouble hearing, according to the National Institutes of Health. One in eight people has hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 to 500,000 people in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themselves to the world, so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using natural language processing to analyze spoken text and computer vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world. ## What it does Our app connects a deaf person, who signs American Sign Language into their device's camera, to a non-deaf person, who then listens through text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation. We believe real-time communication is the key to a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing any part of the conversation in the chat box. ## How we built it For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives: 1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model. 2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users. * Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience. * Backend Powered by Flask: For the robust foundation of our API and backend, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data. * Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the webkitSpeechRecognition library. This technology forms the backbone of our speech recognition system, facilitating communication with our app. * NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible to sign language users.
* Translating Hand Motions to Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into a visual form. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences ## Challenges we ran into Since we used multiple AI models, it was tough for us to integrate them seamlessly with our Vue frontend. Since we are also using the webcam through the website, it was a massive challenge to seamlessly use video footage, run realtime object detection and classification on it and show the results on the webpage simultaneously. We also had to find as many opensource datasets for ASL as possible, which was definitely a challenge, since with a short budget and time we could not get all the words in ASL, and thus, had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real time computer vision on a stream of hand gestures of ASL. ## Accomplishments that we're proud of We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are: * Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community. * Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement. * User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority. * Speech Recognition: Our success in implementing speech recognition technology is a source of pride. * Multiple AI Models: We also loved merging natural language processing and computer vision in the same application. ## What we learned We learned a lot about how accessibility works for individuals that are from the deaf community. Our research led us to a lot of new information and we found ways to include that into our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNN's. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project. ## What's next? We have a ton of ideas planned for Sign Sync next! * Translate between languages other than English * Translate between other sign languages, not just ASL * Native mobile app with no internet access required for more seamless usage * Usage of more sophisticated datasets that can recognize words and not just letters * Use a video image to demonstrate the sign language component, instead of static images
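A minimal inference sketch for the frame classifier described above, assuming a Keras MobileNetV2 fine-tuned on 29 classes (26 letters plus 3 punctuation signs) and OpenCV for webcam capture; the model path and label order are placeholders.

```python
# Sketch: classify one webcam frame into a sign-language character with a
# fine-tuned MobileNetV2. Model file name and label order are placeholders.
import cv2
import numpy as np
import tensorflow as tf

LABELS = [chr(ord("A") + i) for i in range(26)] + [".", ",", "?"]  # assumed order
model = tf.keras.models.load_model("sign_sync_mobilenetv2.h5")      # placeholder path

def classify_frame(frame_bgr):
    """Return the predicted character for one BGR webcam frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (224, 224)).astype(np.float32)
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(resized)[np.newaxis]
    probs = model.predict(batch, verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(classify_frame(frame))  # characters get stitched into full sentences
cap.release()
```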
losing
## Inspiration Our team identified two intertwined health problems in developing countries: 1) Lack of easy-to-obtain medical advice due to economic, social and geographic problems, and 2) Difficulty of public health data collection in rural communities. This weekend, we built SMS Doc, a single platform to help solve both of these problems at the same time. SMS Doc is an SMS-based healthcare information service for underserved populations around the globe. Why text messages? Well, cell phones are extremely prevalent worldwide [1], but connection to the internet is not [2]. So, in many ways, SMS is the *perfect* platform for reaching our audience in the developing world: no data plan or smartphone necessary. ## What it does Our product: 1) Democratizes healthcare information for people without Internet access by providing a guided diagnosis of symptoms the user is experiencing, and 2) Has a web application component for charitable NGOs and health orgs, populated with symptom data combined with time and location data. That 2nd point in particular is what takes SMS Doc's impact from personal to global: by allowing people in developing countries access to medical diagnoses, we gain self-reported information on their condition. This information is then directly accessible by national health organizations and NGOs to help distribute aid appropriately, and importantly allows for epidemiological study. **The big picture:** we'll have the data and the foresight to stop big epidemics much earlier on, so we'll be less likely to repeat crises like 2014's Ebola outbreak. ## Under the hood * *Nexmo (Vonage) API* allowed us to keep our diagnosis platform exclusively on SMS, simplifying communication with the client on the frontend so we could worry more about data processing on the backend. **Sometimes the best UX comes with no UI** * Some in-house natural language processing for making sense of user's replies * *MongoDB* allowed us to easily store and access data about symptoms, conditions, and patient metadata * *Infermedica API* for the symptoms and diagnosis pipeline: this API helps us figure out the right follow-up questions to ask the user, as well as the probability that the user has a certain condition. * *Google Maps API* for locating nearby hospitals and clinics for the user to consider visiting. All of this hosted on a Digital Ocean cloud droplet. The results are hooked-through to a node.js webapp which can be searched for relevant keywords, symptoms and conditions and then displays heatmaps over the relevant world locations. ## What's next for SMS Doc? * Medical reports as output: we can tell the clinic that, for example, a 30-year old male exhibiting certain symptoms was recently diagnosed with a given illness and referred to them. This can allow them to prepare treatment, understand the local health needs, etc. * Epidemiology data can be handed to national health boards as triggers for travel warnings. * Allow medical professionals to communicate with patients through our SMS platform. The diagnosis system can be continually improved in sensitivity and breadth. * More local language support [1] <http://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/> [2] <http://www.internetlivestats.com/internet-users/>
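As a minimal sketch of the SMS-facing backend described in "Under the hood": a Flask webhook that Nexmo/Vonage calls for each inbound text, which logs the report to MongoDB before the diagnosis pipeline runs. The endpoint name, database fields, and the placeholder for the Infermedica call are assumptions, not the team's actual code.

```python
import datetime
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
reports = MongoClient("mongodb://localhost:27017")["smsdoc"]["reports"]  # assumed DB layout

@app.route("/inbound-sms", methods=["GET", "POST"])
def inbound_sms():
    data = request.values  # Nexmo's inbound webhook includes msisdn (sender) and text
    reports.insert_one({
        "from": data.get("msisdn"),
        "text": data.get("text"),
        "received_at": datetime.datetime.utcnow(),
    })
    # ...the real service would then run NLP over the text, query the Infermedica API
    # for follow-up questions, and reply to the sender via the SMS API...
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=5000)
```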
## Inspiration

While we were brainstorming for ideas, we realized that two of our teammates are international students from India, also from the University of Waterloo. Their inspiration to create this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on your socioeconomic status, we decided on creating a healthcare kiosk that can be used by people in developing nations. By designing an interface that focuses heavily on images, it can be understood by those who are illiterate, as is the case in many developing nations, and it can bypass language barriers. This application is the perfect combination of all of our interests and allows us to use tech for social good by improving accessibility in the healthcare industry.

## What it does

Our service, Medi-Stand, is targeted towards residents of regions who will have the opportunity to monitor their health through regular self-administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to society, and to build a more productive society. Healthcare reforms are becoming more and more necessary for third-world nations to progress economically and move towards developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections through the integration of systems that streamline this process by creating a database and eliminating all paperwork, making the entire process smoother for both patients and doctors. This service will be available to patients through kiosks located near local communities to save their time and keep their health in check. The first time they use this system in a government-run healthcare facility, users create a profile and upload health data that is currently on paper or scattered across emails all over the interwebs. By inputting this information manually into the database the first time, we can access it later using the system we've developed. Over time, the data can be entered automatically using sensors on the kiosk and by the doctor during consultations, but this depends on 100% compliance.

## How I built it

In terms of the UX/UI, this was designed using Sketch. Beginning with mock-ups on various sheets of paper, two members of the team brainstormed customer requirements for a healthcare system of this magnitude and which features we would be able to implement in a short period of time. After hours of deliberation and finding ways to present this, we decided to create a simple interface with 6 different screens that a user would be faced with. After choosing basic icons, a font that could be understood by those with dyslexia, and accessible colours (i.e. those that can be understood even by the colour blind), we had successfully created a user interface that could be easily understood by a large population. In terms of developing the backend, we wanted to create the doctor's side of the app so that they could access patient information. It was written in Xcode and connects to a Firebase database that holds the patient's information and displays it visually on an iPhone emulator. The database entries were fetched in JSON notation using requests.
In terms of using the Arduino hardware, we used Grove temperature sensor V1.2 along with a Grove base shield to read the values from the sensor and display it on the screen. The device has a detectable range of -40 to 150 C and has an accuracy of ±1.5 C. ## Challenges I ran into When designing the product, one of the challenges we chose to tackle was an accessibility challenge. We had trouble understanding how we can turn a healthcare product more accessible. Oftentimes, healthcare products exist from the doctor side and the patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectations. We wanted to allow both sides of this interaction to understand what was happening, which is where the app came in. After speaking to our teammates, they made it clear that many of the people from lower income households in a developing nation such as India are not able to access hospitals due to the high costs; and cannot use other sources to obtain this information due to accessibility issues. I spent lots of time researching how to make this a user friendly app and what principles other designers had incorporated into their apps to make it accessible. By doing so, we lost lots of time focusing more on accessibility than overall design. Though we adhered to the challenge requirements, this may have come at the loss of a more positive user experience. ## Accomplishments that I'm proud of For half the team, this was their first Hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of. We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch; especially the fact that we learned how to link and use transactions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could demo on our phones was a pretty great feat! ## What I learned We learned the basics of front-end and back-end development as well as how to make designs more accessible. ## What's next for MediStand Integrate the various features of this prototype. How can we make this a global hack? MediStand is a private company that can begin to sell its software to the governments (as these are the people who focus on providing healthcare) Finding more ways to make this product more accessible
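As a small sketch of the "fetched in JSON notation using requests" step described in "How I built it": reading patient records from a Firebase Realtime Database over its REST interface. The project URL and data shape below are placeholders, not the team's real database.

```python
import requests

FIREBASE_URL = "https://medistand-demo.firebaseio.com"  # hypothetical project

def fetch_patients():
    # Appending .json to a database path returns that node as JSON
    resp = requests.get(f"{FIREBASE_URL}/patients.json", timeout=10)
    resp.raise_for_status()
    return resp.json() or {}

if __name__ == "__main__":
    for patient_id, record in fetch_patients().items():
        print(patient_id, record.get("name"), record.get("last_temperature_c"))
```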
## Inspiration

What would you do with 22 hours of your time? I could explore all of Ottawa — from sunrise at Parliament, to lunch at Shawarma Palace, to ending the night at our favourite pub, Heart and Crown! But imagine you hurt your ankle and go to the ER. You're going to spend that entire 22 hours in the waiting room before you even get to see a doctor. This is a critical problem in our healthcare system. We're first-year medical students, and we've seen how much patients struggle to get the care they need. From overwhelming ER wait times, to travelling over 2 hours to talk to a family doctor (not to mention only 1 in 5 Canadians having a family doctor), Canada's healthcare system is currently in a crisis. Using our domain knowledge, we wanted to take a step towards solving this problem.

## What is PocketDoc?

PocketDoc is your own personal physician available on demand. You can talk to it like you would to any other person, explaining what you're feeling, and PocketDoc will tell you what you may be experiencing at the moment. But can't WebMD do that? No! Our app actually uses your personalized portfolio — consisting of user-inputted vaccinations, current medications, allergies, and more — and PocketDoc uses that information to figure out the best diagnosis for your body. It tells you what your next steps are: go to your pharmacist, who in Ontario can now prescribe the appropriate medication, or maybe use your puffer for an acute allergic reaction, or maybe you do need to go to the ER. But wait, it doesn't stop there! PocketDoc uses your location to find the closest walk-in clinics, pharmacies, and hospitals — and it's all in one app!

## How we built it

We've all dealt with the healthcare system in Canada, and with all the pros it offers, there are also many cons. From the perspective of a healthcare provider, we recognized that a more efficient solution is feasible. We used a dataset from Kaggle which provided long text data on symptoms and the associated diagnoses. After trying various ML systems for classification, we decided to use Cohere to implement a natural language processing model that classifies any user input into one of 21 possible diagnoses. We further used Xcode to implement login and used Auth0 to provide an authenticated login experience and ensure users feel safe inputting and storing their data in the app. We fully prototyped our app in Figma to show the range of functionality we wish to implement beyond this hackathon.

## Challenges we ran into

We faced challenges at every step of the design and implementation process. As computer science beginners, we took on an ML-based classification task that required a lot of new learning. The first step was the most difficult: choosing a dataset. There were many ML systems we were considering, such as TensorFlow, PyTorch, Keras, and scikit-learn, and each one worked best with a certain type of dataset. The dataset we chose also had to give us verified diagnoses for a set of symptoms, and we narrowed it down to 3 different sets. Choosing one of these sets took up a lot of time and effort. The next challenge we faced occurred due to cross-platform incompatibility, where Xcode was used for app development but the ML algorithm was built on Python 3. A huge struggle was getting this model to run on the app directly. We found our only solution was to build a Python API that could be accessed by Xcode, a task that we had no time to learn and implement. Hardware was also a bottleneck for our productivity.
With limited storage and computing power on our devices, we were compelled to use smaller datasets and simpler algorithms. This used up lots of time and resources as well. The final and most important challenge was the massive learning curve under the short time constraints. For the majority of our team, this was our first hackathon and there is a lot to learn about the hackathon expectations/requirements while also learning new skills on the fly. The lack of prior knowledge made it difficult for us to manage resources efficiently throughout the 36 hours. This brought on more unexpected challenges throughout the entire process. ## Accomplishments that we're proud of As medical students, we're proud to have been introduced to the field of computer science and the intersection between computer science and medicine as this will help us become well-versed and equipped physicians. **Project Planning and Ideation**: Our team spent the initial hours of the hackathon discussing various ideas using the creative design process and finally settled on the healthcare app concept. Together, we outlined the features and functionalities the app would offer, considering user experience and technical feasibility. **Learning and Skill Development**: Since this was our first time coding, we embraced the opportunity to learn new programming languages and technologies. We used our time carefully to learn from tutorials, online resources, and guidance from hackathon mentors. **Prototype Development**: Despite the time constraints, we worked hard to develop a functional prototype of the app. We divided and conquered -- some team members focused on front-end development including designing the user interface and implementing navigation elements while others tackled back-end tasks like cleaning up the dataset and building our machine learning model. **Iterative Development and Feedback**: We worked tirelessly on the prototype based on feedback from mentors and participants. We remained open to suggestions for improvement to enhance the app's functionality. **Presentation Preparation**: As the deadline rapidly approached, we prepared a compelling presentation to showcase our project to the judges using the skills we learned from the public speaking workshop with Ivan Wanis Ruiz. **Final Demo and Pitch**: In the final moments of the hackathon, we confidently presented our prototype to the judges and fellow participants. We demonstrated the key functionalities of the app, emphasizing its user-friendly design and its potential to improve the lives of individuals managing chronic illnesses. **Reflection**: The hackathon experience itself has been incredibly rewarding. We gained valuable coding skills, forged strong bonds with our teammates, and contributed to a meaningful project with real-world applications. Specific tasks: 1. Selected a high quality medical-based dataset that was representative of the Canadian patient population to ensure generalizability 2. Learned Cohere AI through YouTube tutorials 3. Learned Figma through trial and error and YouTube tutorials 4. Independently used XCode 5. Learned a variety of ML systems - Tensor Flow, PyTorch, Keras, Scikid-learn 6. Acquired skills in public speaking to captivate and audience with our unique solution to enhance individual quality of life, improve population health, and streamline the use of scarce healthcare resources. ## What we learned 1. Technical skills in coding, problem-solving, and utilizing development tools. 2. 
Effective time management under tight deadlines. 3. Improved communication and collaboration within a team setting. 4. Creative thinking and innovation in problem-solving. 5. Presentation skills for effectively showcasing our project. 6. Resilience and adaptability in overcoming challenges. 7. Ethical considerations in technology, considering the broader implications of our solutions on society and individuals. 8. Experimental learning by fearlessly trying new approaches and learning from both successes and failures. Most importantly, we developed a passion for computer science and we’re incredibly eager to build off our skills through future independent projects, hackathons, and internships. Now more than ever, with rapid advancements in technology and the growing complexity of healthcare systems, as future physicians and researchers we must embrace computational tools and techniques to enhance patient care and optimize clinical outcomes. This could be through Electronic Health Records (EHR) management, data analysis and interpretation, diagnosing complex medical conditions using machine learning algorithms, and creating clinician decision support systems with evidence-based recommendations to improve patient care. ## What's next for PocketDoc Main goal: connecting our back end with our front end through an API NEXT STEPS **Enhancing Accuracy and Reliability**: by integrating more comprehensive medical databases, and refining the diagnostic process based on user feedback and real-world data. **Expanding Medical Conditions**: to include a wider range of specialties and rare diseases. **Integrating Telemedicine**: to facilitate seamless connections between users and healthcare providers. This involves implemented features including real-time video consultations, secure messaging and virtual follow-up appointments. **Personalizing Health Recommendations**: along with preventive care advice based on users' medical history, lifestyle factors, and health goals to empower users to take control of their health and prevent health issues before they arise. This can decrease morbidity and mortality. **Health Monitoring and Tracking**: this would enable users to monitor their health metrics, track progress towards health goals, and receive actionable insights to improve their well-being. **Global Expansion and Localization**: having PocketDoc available to new regions and markets along with tailoring the app to different languages, cultural norms, and healthcare systems. **Partnerships and Collaborations**: with healthcare organizations, insurers, pharmaceutical companies, and other stakeholders to enhance the app's capabilities and promote its adoption.
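As a hedged sketch of the symptom-classification step described in "How we built it": sending free-text symptoms to Cohere's classify endpoint along with a few labelled examples drawn from the training dataset. The endpoint path, payload fields, and example rows reflect our reading of the public API docs and are assumptions, not the team's exact pipeline.

```python
import os
import requests

API_URL = "https://api.cohere.ai/v1/classify"  # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['COHERE_API_KEY']}"}

payload = {
    "inputs": ["sharp pain in my ankle after twisting it, swelling and bruising"],
    "examples": [  # in practice: many rows per label, drawn from the Kaggle dataset
        {"text": "itchy red rash on both arms", "label": "allergic dermatitis"},
        {"text": "twisted my ankle, it is swollen and hurts to walk", "label": "sprain"},
        {"text": "burning sensation when urinating", "label": "urinary tract infection"},
    ],
}

resp = requests.post(API_URL, json=payload, headers=HEADERS, timeout=30)
resp.raise_for_status()
for c in resp.json()["classifications"]:
    print(c["input"], "->", c["prediction"])
```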
winning
# PotholePal

## Pothole Filling Robot - UofTHacks VI

This repo enables the Pothole Pal proof of concept (POC) to detect changes in elevation on the road using an ultrasonic sensor, thereby detecting potholes. The POC demonstrates the ability of a car or autonomous vehicle to drive over a surface and detect potholes in the real world.

Table of Contents
1. Purpose
2. Goals
3. Implementation
4. Future Prospects

**1. Purpose**

When we analyzed city data to determine which aspects of city infrastructure could be improved, potholes stood out. Ever since cities started to grow and expand, potholes have plagued everyone who uses the roads. In Canada, 15.4% of Quebec's roads were rated very poor according to StatsCan in 2018. In Toronto, 244,425 potholes were filled in 2018 alone. Damage due to potholes averaged $377 per car per year. This is a problem that can be addressed better. To do that, we decided that by utilizing Internet of Things (IoT) sensors like the ultrasonic sensor, we can detect potholes using modern cars already mounted with the equipment, or mount the equipment on our own vehicles.

**2. Goals**

The goal of the Pothole Pal is to help detect potholes and immediately notify those in charge with the analytics. These analytics can help decision makers allocate funds and resources accordingly in order to quickly respond to infrastructure needs. We want to assist municipalities such as the City of Toronto and the City of Montreal, as they both spend millions each year assessing and fixing potholes. The Pothole Pal helps reduce costs by detecting potholes immediately and informing the city where each pothole is.

**3. Implementation**

We integrated an Arduino on a RedBot Inventor's Kit car. By attaching an ultrasonic sensor module to the Arduino and mounting it to the front of the vehicle, we are able to detect changes in elevation, i.e. potholes. After detection, the geotag and an image of the pothole are sent to a Mosquitto (MQTT) broker, which then directs the data to an iOS app that a government worker can view. They can then use that information to go and fix the pothole. ![](https://i.imgur.com/AtI0mDD.jpg) ![](https://i.imgur.com/Lv1A5xf.png) ![](https://i.imgur.com/4DD3Xuc.png)

**4. Future Prospects**

This system can be further improved in the future through a multitude of different methods. It could be added to mass-produced cars that already come equipped with ultrasonic sensors, as well as cameras that can send the data to the cloud for cities to analyze and use. This technology could also be used not only to detect potholes, but to continuously monitor road conditions, providing cities with analytics to create better solutions for road quality, reduce the cost of repairing roads, and reduce damage to cars on the road.
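As a sketch of the reporting hop described in the Implementation section: once the ultrasonic sensor flags a pothole, a small client publishes the geotag to a Mosquitto (MQTT) broker that the iOS app subscribes to. The broker address, topic name, and payload fields below are illustrative assumptions.

```python
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("broker.example.org", 1883)  # hypothetical Mosquitto broker
client.loop_start()  # run the network loop in the background

def report_pothole(lat, lon, depth_cm, image_url=None):
    payload = {
        "lat": lat,
        "lon": lon,
        "depth_cm": depth_cm,
        "image": image_url,
        "ts": int(time.time()),
    }
    client.publish("potholepal/detections", json.dumps(payload), qos=1)

report_pothole(43.6532, -79.3832, 6.5)  # e.g. a pothole detected in downtown Toronto
```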
## Inspiration We as a team shared the same interest in knowing more about Machine Learning and its applications. upon looking at the challenges available, we were immediately drawn to the innovation factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming, and went through over a dozen design ideas as to how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of requiring the raw footage itself and using it to look for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We have set up a signal that if done in front of the camera, a machine learning algorithm would be able to detect the signal and notify authorities that maybe they should check out this location, for the possibility of catching a potentially suspicious suspect or even being present to keep civilians safe. ## How we built it First, we collected data off the innovation factory API, and inspected the code carefully to get to know what each part does. After putting pieces together, we were able to extract a video footage of the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went for a similarly pre-trained algorithm to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version and would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project Donya: Getting to know the basics of how machine learning works Alok: How to deal with unexpected challenges and look at it as a positive change Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon all with either none or incomplete information. ## What's next for Smart City SOS hopefully working with innovation factory to grow our project as well as inspiring individuals with similar passion or desire to create a change.
## Inspiration We noticed one of the tracks involved creating a better environment for cities through the use of technology, also known as making our cities 'smarter.' We observed in places like Boston & Cambridge, there are many intersections with unsafe areas for pedestrians and drivers. **Furthermore, 50% of all accidents occur at Intersections, according to the Federal Highway Administration**. This can prove to be enhanced with careless drivers, lack of stop signs, confusing intersections, and more. ## What it does This project uses a Raspberry Pi to predict potential dangerous driving situations. If we deduce that a potential collision can occur, our prototype will start creating a 'beeping' sound loud enough to gain the attention of those surrounding the scene. Ideally, our prototype will be attached onto traffic poles, similar to most traffic cameras. ## How we built it We utilized a popular Computer Vision library known as OpenCV, in order to visualize our problem in Python. A demo of our prototype is shown in the GitHub repository, with a beeping sound occurring when the program finds a potential collision. Our demonstration is built using Raspberry Pi & a Logitech Camera. Using Artificial Intelligence, we capture the current positions of cars, and calculate their direction and velocity. Using this information, we predicted potential close calls and accidents. In such a case, we make a beeping sound simulating a alarm to notify drivers and surrounding participants. ## Challenges we ran into One challenge we ran into was detecting the car positions based on the frames in a reliable fashion. A second challenge was calculating the speed and direction of vehicles based on the present frame & the previous frames. A third challenge included being able to determine if two lines are crossing based on their respective starting and ending coordinates. Solving this proved vital in order to make sure we alerted those in the vicinity in a quick and proper manner. ## Accomplishments that we're proud of We are proud that we were able to adapt this project to multiple levels. Even putting the camera up to a screen of a real collision video off Youtube resulted in the prototype alerting us of a potential crash **before the accident occurred**. We're also proud of the fact that we were able to abstract the hardware and make the layout of the final prototype aesthetically pleasing. ## What we learned We learned about the potential of smart intersections, and the benefits it can provide in terms of safety to an ever advancing society. Surely, our implementation will be able to reduce the 50% of collisions that occur at intersections by making those around the area more aware of potential dangerous collisions. We also learned a lot about working with openCV and Camera Vision. This was definitely a unique experience, and we were even able to walk around the surrounding Harvard campus, trying to get good footage to test our model on. ## What's next for Traffic Eye We think we could make a better prediction model, as well as creating a weather resilient model to account for varying types of weather throughout the year. We think a prototype like this can be scaled and placed on actual roads given enough R&D is done. This definitely can help our cities advance with rising capabilities in Artificial Intelligence & Computer Vision!
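A small, self-contained sketch of the prediction logic described above: estimate each car's velocity from its centroid in consecutive frames, project both paths forward, and check whether the projected segments cross. The numbers are illustrative, and the orientation test ignores collinear edge cases for brevity.

```python
def project(pos_prev, pos_now, horizon_frames=15):
    """Return the segment from the current position to the projected future one."""
    vx, vy = pos_now[0] - pos_prev[0], pos_now[1] - pos_prev[1]  # pixels per frame
    future = (pos_now[0] + vx * horizon_frames, pos_now[1] + vy * horizon_frames)
    return pos_now, future

def segments_cross(p1, p2, p3, p4):
    """Standard orientation test: does segment p1-p2 intersect segment p3-p4?"""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))

car_a = project((100, 200), (110, 205))   # centroids from previous/current frame
car_b = project((400, 180), (385, 190))
if segments_cross(*car_a, *car_b):
    print("potential collision - sound the alarm")
```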
winning
## Not All Backs are Packed: An Origin Story (Inspiration)

A backpack is an extremely simple, yet ubiquitous item. We want to take the backpack into the future without sacrificing its simplicity and functionality.

## The Got Your Back, Pack: **U N P A C K E D** (What's it made of)

GPS location services, a 9,000 mAh battery, solar charging, USB connectivity, a keypad security lock, a customizable RGB LED, and Android/iOS application integration.

## From Backed Up to Back Pack (How we built it)

## The Empire Strikes **Back**(packs) (Challenges we ran into)

We ran into challenges getting wood to laser cut and bend properly. We found a unique pattern that allowed us to keep our 1/8" wood durable when needed and flexible when not. Also, making the connection between the hardware and the app through the API was tricky.

## Something to Write **Back** Home To (Accomplishments that we're proud of)

## Packing for Next Time (Lessons Learned)

## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!")

The next step would be revising the design to be more ergonomic for the user: the backpack is currently a clunky, easy-to-make shape with few curves to hug the user when put on. This, along with streamlining the circuitry and code, would be something to consider.
## Inspiration

2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them. But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today.

## What it does

A camera records you as you go about your day, storing "comic book strip" panels containing images and context of what you're doing. When you want to remember something you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, time, and your action when you lost it. This can help with knowing where you placed your keys, whether you locked your door or garage, and other day-to-day tasks.

## How we built it

The React-based UI records from your webcam, taking a screenshot every second and stopping at the 9-second mark before creating a 3x3 comic image. This was done because single static images would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, we send it to OpenAI's turbo vision model, which returns contextualized info about the image. This info is then sent to our Express.js service hosted on Vercel, which parses the data and sends it to Cloud Firestore (a Firebase database). To re-access this data, we use the browser's built-in speech recognition along with the SpeechSynthesis API to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which classifies it as either a search for an action or an object find. It then searches through the database and speaks out loud, giving information with a naturalized response.

## Challenges we ran into

We originally planned on using a VR headset, webcam, Nest camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of macOS's Continuity feature, using our iPhone camera connected to our MacBook as our primary input.

## Accomplishments that we're proud of

As a two-person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.js and Firebase, so we're proud of how fast we were able to learn!

## What we learned

We learned about OpenAI's turbo vision API capabilities, how to work together as a team, and how to sleep effectively on a couch with very little sleep.

## What's next for ReCall: Memories done for you!

We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue of the people they've met in the past to help them as they recover. We didn't have much context on these health problems, however, and our scope was limited, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
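As a sketch of the "comic strip" step in "How we built it": pasting nine per-second screenshots into a single 3x3 sheet before it is sent to the vision model. This is a Python/Pillow re-creation of what the React frontend does on a canvas; the file names and panel size are assumptions.

```python
from PIL import Image

FRAME_FILES = [f"frame_{i}.png" for i in range(9)]  # one screenshot per second
PANEL_W, PANEL_H = 320, 240

sheet = Image.new("RGB", (PANEL_W * 3, PANEL_H * 3), "white")
for i, path in enumerate(FRAME_FILES):
    panel = Image.open(path).resize((PANEL_W, PANEL_H))
    row, col = divmod(i, 3)  # fill the grid left-to-right, top-to-bottom
    sheet.paste(panel, (col * PANEL_W, row * PANEL_H))

sheet.save("comic_strip.png")  # this composite is what gets captioned as a "memory"
```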
## Inspiration: As a group of 4 people who met each other for the first time, we saw this event as an inspiring opportunity to learn new technology and face challenges that we were wholly unfamiliar with. Although intuitive when combined, every feature of this project was a distant puzzle piece of our minds that has been collaboratively brought together to create the puzzle you see today over the past three days. Our inspiration was not solely based upon relying on the minimum viable product; we strived to work on any creative idea sitting in the corner of our minds, anticipating its time to shine. As a result of this incredible yet elusive strategy, we were able to bring this idea to action and customize countless features in the most innovative and enabling ways possible. ## Purpose: This project involves almost every technology we could possibly work with - and even not work with! Per the previous work experience of Laurance and Ian in the drone sector, both from a commercial and a developer standpoint, our project’s principal axis revolved around drones and their limitations. We improved and implemented features that previously seemed to be the limitations of drones. Gesture control and speech recognition were the main features created, designed to empower users with the ability to seamlessly control the drone. Due to the high threshold commonly found within controllers, many people struggle to control drones properly in tight areas. This can result in physical, mental, material, and environmental damages which are harmful to the development of humans. Laurence was handling all the events at the back end by using web sockets, implementing gesture controllers, and adding speech-to-text commands. As another aspect of the project, we tried to add value to the drone by designing 3D-printed payload mounts using SolidWorks and paying increased attention to detail. It was essential for our measurements to be as exact as possible to reduce errors when 3D printing. The servo motors mount onto the payload mount and deploy the payload by moving its shaft. This innovation allows the drone to drop packages, just as we initially calculated in our 11th-grade physics classes. As using drones for mailing purposes was not our first intention, our main idea continuously evolved around building something even more mind-blowing - innovation! We did not stop! :D ## How We Built it? The prototype started in small but working pieces. Every person was working on something related to their interests and strengths to let their imaginations bloom. Kevin was working on programming with the DJI Tello SDK to integrate the decisions made by the API into actual drone movements. The vital software integration to make the drone work was tested and stabilized by Kevin. Additionally, he iteratively worked on designing the mount to perfectly fit onto the drone and helped out with hardware issues. Ian was responsible for setting up the camera streaming. He set up the MONA Server and broadcast the drone through an RTSP protocol to obtain photos. We had to code an iterative python script that automatically takes a screenshot every few seconds. Moreover, he worked toward making the board static until it received a Bluetooth signal from the laptop. At the next step, it activated the Servo motor and pump. But how does the drone know what it knows? The drone is able to recognize fire with almost 97% accuracy through deep learning. 
Paniz was responsible for training the CNN model for image classification between non-fire and fire pictures. The model has been registered and is ready to receive data from the drone to detect fire.

## Challenges we ran into

There were many challenges that we faced, and we had to find ways around them in order to make the features work together as a system. Our most significant challenge was the lack of cross-compatibility between software, libraries, modules, and networks. As an example, Kevin had to find an alternative path to connect the drone to the laptop since the UDP network protocol was unresponsive. Moreover, he had to investigate gesture integration with drones during this first prototype testing. On the other hand, Ian struggled to connect the different sensors to the drone due to their heavy weight. Moreover, the hardware compatibility called for deep analysis and research since the source of error was unresolved. Laurence was responsible for bringing all the pieces together and integrating them feature by feature. He was successful not only through his technical proficiency but also through continuous integration — another main challenge that he resolved. Making the connection between gesture movement and drone movement responsive was another main challenge he faced. Data collection was another major challenge our team faced due to an insufficient amount of proper datasets for fire. Inadequate library and software versions and the incompatibility of virtual and local environments led us to migrate the project from local machines to cloud servers.

## Things we have learned:

Almost every one of us had to work with at least one new technology, such as the DJI SDK, new sensor modules, and Python packages. This project helped us gain new skills in a short amount of time with a maximized focus on productivity :D As we ran into different challenges, we learned from our mistakes and tried to eliminate repetitive mistakes as much as possible, one after another.

## What is next for Fire Away?

Although we weren't able to fully develop all of our ideas, here are some future adventures we have planned for Fire Away:

* Scrubbing Twitter for user entries indicating a potential nearby fire.
* Using Cohere APIs for fluent user speech recognition.
* Further developing and improving the deep learning algorithm to handle a variety of natural disasters.
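As a hedged sketch of the fire / non-fire classifier described above: a small transfer-learning setup in Keras over a folder of labelled images. The directory layout, base model, image size, and training settings are assumptions for illustration, not the team's actual training code.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "fire_dataset/",            # assumed layout: fire_dataset/fire, fire_dataset/no_fire
    image_size=IMG_SIZE,
    batch_size=32,
)

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the pretrained features frozen for a quick hack

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),     # fire vs. no fire
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("fire_classifier.keras")
```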
winning
# Workchain - The decentralized on-demand workforce PennApps XVIII ### Summary There are millions of people who are willing and seeking work but can’t find the opportunities. Thousands of tech companies want to access an on-demand workforce where individuals can perform simple human-intelligence tasks (HIT) for the companies, that computers are unable to do. The need for such a platform has been proven by the likes of Amazon Mechanical Turk, but it fails to enable those who are most in need of such opportunities. Workers on Turk must go through a Know Your Customer (KYC) process, and there are restrictions to non-US citizens. As a result, there are large barriers to low-tier/minority laborers that need access to these simple tasks where they could increase their daily earnings by several times. Our solution is a scalable, low-barrier, 24/7 platform that provides a two-fold solution to both workers and companies. Workchain is a *decentralized application* -- a crowdsourcing marketplace that enables workers to get paid for completing thousands of simple, easy-to-access human-intelligence tasks which companies ultimately integrate to advance their technology and products. Built on the EOS blockchain, the fastest, most powerful blockchain for building decentralized apps, Workchain allows there to be *no transaction fees*, rapid user onboarding, immutable data, and added security. Furthermore, *only* the company that requests data through the tasks own and have access to their data. Workers can work asynchronously, whenever convenient, choose to complete tasks based on interest and skill level and receive instant online payments for their contributions, wherever they are. Companies access an on-demand workforce and can easily find workers with specific qualifications. Workchain provides simpler and more secure solution where the transactions go directly from companies to workers and vice versa. Companies decide how much they pay the workers, and pay for only what they receive. Workchain ultimately allows workers and companies to provide and extract more value by empowering workers to find more employment, while providing companies greater access to human intelligence for technological advancement. ### Tech Specs We have deployed a smart contract on a private EOS blockchain network. All data related to requester/worker registration, task lists, and payments go on chain and can be pulled from the chain. This allows all data created on the platform to be immutable, and data is owned by respective accounts in RAM. For example, if a requester creates a task, all data about that task is *only* owned by the requester in his/her allocated RAM. Moreover, transactions are immediate on EOS-- it can host thousands of transactions per second, and goes from account to account. There is no need to go through a centralized institution such as banks. There are *no transaction fees* either-- which is a win-win for both workers and requesters. **RPC API Documentation:** The smart contract was deployed on the `workchaineos` account. All tables can be found on the `workchaineos` account. Names of important tables include: `requesters` - Companies that request tasks `workers` - People that are completing the tasks `tasks` - The tasks that are being put up on the platform `workglobal` - Holds some global variables important for the environment **Chain API Endpoint**: <http://wps-test.hkeos.com:8888/v1/chain/get_info> This returns basic information of the chain. 
**Get Account**: <http://wps-test.hkeos.com:8888/v1/chain/get_account> This gives information about a given account. Body params: `account_name` string

**Get Table Rows**: <http://wps-test.hkeos.com:8888/v1/chain/get_table_rows> This is where we pull all of the data for the web platform. The payload would be a JSON like this for our specific chain:

`{ "code": "workchaineos", "scope": "workchaineos", "table": "tasks", "lower_bound": 0, "upper_bound": 2, "json": "true", "limit": 10 }`

This would return the first two entries in the `tasks` table.

**Get Currency Balance**: <http://wps-test.hkeos.com:8888/v1/chain/get_currency_balance> This returns the amount of money in a certain account. The payload would be a JSON like this for our specific chain:

`{ "code": "workchaineos", "account": "requesterone", "symbol": "EOS" }`

Sending a request with this payload would return the amount of money in `requesterone`'s account.

**How to pull data from the RPC API in JavaScript**

Here is an example of how to pull data from the RPC API endpoint using JavaScript.

```
var data = JSON.stringify({
  "code": "workchaineos",
  "scope": "workchaineos",
  "table": "tasks",
  "lower_bound": 0,
  "upper_bound": 2,
  "json": "true",
  "limit": 10
});

var xhr = new XMLHttpRequest();
xhr.withCredentials = true;

// Log the chain's response once the request completes
xhr.addEventListener("readystatechange", function () {
  if (this.readyState === this.DONE) {
    console.log(this.responseText);
  }
});

// The endpoint needs an explicit protocol, otherwise the URL is parsed as relative
xhr.open("POST", "http://wps-test.hkeos.com:8888/v1/chain/get_table_rows");
xhr.send(data);
```
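The same `get_table_rows` call as the JavaScript example above, shown with Python's requests library for anyone scripting against the chain outside the browser; the payload mirrors the one documented for this chain.

```python
import requests

ENDPOINT = "http://wps-test.hkeos.com:8888/v1/chain/get_table_rows"

payload = {
    "code": "workchaineos",
    "scope": "workchaineos",
    "table": "tasks",
    "lower_bound": 0,
    "upper_bound": 2,
    "json": "true",
    "limit": 10,
}

rows = requests.post(ENDPOINT, json=payload, timeout=10).json()
for task in rows.get("rows", []):  # the chain returns {"rows": [...], "more": bool}
    print(task)
```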
# Bull Rider ## Inspiration In the rapidly evolving world of blockchain and cryptocurrency, many users find themselves overwhelmed by the complexity of managing digital assets. We were inspired to create **Bull Rider** after observing the steep learning curve faced by newcomers to the Sui blockchain ecosystem. Our goal was to develop an intuitive, voice-controlled assistant that simplifies blockchain interactions and makes cryptocurrency management accessible to everyone. ## What It Does **Bull Rider** is an innovative voice-controlled assistant designed to simplify interactions with the Sui blockchain. Key features include: * **Voice-activated tutorials**: Users can ask questions about Sui wallet operations, and Bull Rider provides step-by-step audio guidance, complemented by on-screen instructions. * **Voice-controlled transactions**: Users can initiate cryptocurrency transfers using natural language commands, making sending tokens as easy as speaking to a friend. * **Context-aware assistance**: Bull Rider uses RAG (Retrieval-Augmented Generation) to provide accurate, up-to-date information about Sui wallet operations. * **Dynamic tutorial generation**: The assistant analyzes the user's screen and query to create personalized, context-specific tutorials. * **Seamless integration**: Bull Rider operates as a menu bar application, always ready to assist without interrupting the user's workflow. ## How We Built It Bull Rider is built using a combination of cutting-edge technologies: * **Frontend**: Python with rumps for the macOS menu bar interface. * **Backend**: FastAPI for the REST API. * **Natural Language Processing**: Groq API for parsing voice commands and generating responses. * **Speech-to-Text and Text-to-Speech**: Deepgram API for accurate transcription and natural-sounding speech synthesis. * **Image Analysis**: Hyperbolic API for screen capture analysis and tutorial generation. * **Database**: SQLite for lightweight, serverless data storage. * **RAG System**: Sentence Transformers and FAISS for efficient information retrieval. * **Blockchain Integration**: Custom Sui blockchain client for executing transactions. ## Challenges We Ran Into * Integrating multiple AI services (Groq, Deepgram, Hyperbolic) seamlessly. * Implementing an efficient RAG system for context-aware responses. * Ensuring accurate voice command parsing for blockchain transactions. * Optimizing the tutorial generation process for real-time responsiveness. * Balancing between providing detailed guidance and maintaining simplicity in user interactions. ## Accomplishments That We're Proud Of * Creating a voice-controlled assistant that simplifies complex blockchain operations. * Successfully implementing a RAG system for providing accurate, context-aware information. * Developing a dynamic tutorial generation system that adapts to the user's screen and query. * Integrating multiple AI services to create a seamless, intelligent user experience. * Building a non-intrusive, always-available assistant as a menu bar application. ## What We Learned * The importance of context in AI-generated responses for blockchain applications. * Techniques for efficient information retrieval and embedding in RAG systems. * Strategies for integrating multiple AI services into a cohesive application. * The complexities of voice-controlled interfaces for financial transactions. * The potential of AI to simplify complex technological interactions. ## What's Next for Bull Rider 1. 
Expanding support for multiple blockchain ecosystems beyond Sui. 2. Implementing more advanced voice authentication for enhanced security. 3. Developing a mobile version of the assistant for on-the-go blockchain management. 4. Integrating real-time market data and portfolio management features. 5. Collaborating with blockchain projects to provide tailored assistance for specific dApps and services. 6. Implementing a feedback loop to continuously improve the RAG system and tutorial generation. **Bull Rider** represents a significant step towards making blockchain technology accessible to everyone. By combining voice control, AI-driven assistance, and intuitive design, we're paving the way for wider adoption of cryptocurrency and decentralized technologies.
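As a minimal sketch of the RAG retrieval layer listed under "How We Built It": embed wallet-documentation snippets with Sentence Transformers, index them with FAISS, and pull the closest passages for a spoken query. The snippets and model choice are illustrative assumptions, not Bull Rider's actual corpus.

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "To send SUI, open the wallet, choose Send, paste the recipient address, and confirm.",
    "Your recovery phrase is shown once during wallet creation; store it offline.",
    "Staking SUI is done from the wallet's Stake tab by selecting a validator.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

query = model.encode(["how do I transfer tokens to a friend"], normalize_embeddings=True)
scores, ids = index.search(query, 2)  # top-2 passages fed to the LLM as context
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")
```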
## Inspiration

Covid-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; communication breakdowns and economic crises cannot always be prevented. Thus, we developed an application that can help people survive this pandemic by providing them **a shift-taker job platform that creates a win-win solution for both parties.**

## What it does

This application connects companies and managers who need someone to cover a shift for an absent employee for a certain period of time, without any contract. As a result, they are able to cover their staffing needs and survive the pandemic. Beyond its main goal, this app can generally be used to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, needs, and abilities to find a job with Job-Dash.

## How we built it

For the design, Figma is the application we used to lay out every screen and add smooth transitions between frames. While working on the UI, developers started coding the functionality to make the application work. The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us not to store session cookies.

## Challenges we ran into

In terms of UI/UX, handling user-information ethics was a challenge for us, as was providing complete details for both parties. On the developer side, using Bootstrap components ended up slowing us down because our design was custom, requiring us to override most of the styles. It would have been better to use Tailwind, as it would have given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks took longer.

## Accomplishments that we're proud of

Some of us picked up new technologies while working on the project, and creating a smooth UI/UX in Figma, including every feature we planned, left us satisfied. Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom) Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)

## What we learned

We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.

## What's next for Job-Dash

In terms of UI/UX, we would love to make more improvements to the layout so Job-Dash better serves its purpose of helping people find additional income. On the developer side, we would like to continue developing features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
losing
## Inspiration We've all had technical interviews to prepare for. It's tough even on the best of days. In the end, success comes from consistency. ## What it does With Tavern, you don't have to go at it alone. Algorithm challenges are now more like your favourite roleplaying game than interview preparation. Each day your guild unlocks up to 6 algorithm challenges to solve while competing for the top spots on a global leaderboard and opportunities to apply for exclusive job postings! ## How we built it We knew we'd be pressed for time during this hackathon. It always comes down to the wire. With only 24 hours to get something off the ground, we utilized Redwood JS to help us move fast. With Redwood we wouldn't have to worry about tedious boilerplate or configuration, allowing us to get right to the heart of the product. ## Challenges we ran into From the start we wanted to have a rich character creator to connect with your inner roleplaying gamer. We chose SVG for layers of the character creation tool knowing that it would allow us an opportunity to scale our images and keep them crisp. The SVGs brought along a bunch of challenges we didn't expect, so we ended up going with a simpler random character generator in the end. ![messed up character](https://cdn.discordapp.com/attachments/794419266052161566/797688591606218792/Screen_Shot_2021-01-09_at_8.49.32_PM.png) ## Accomplishments that we're proud of Four adventurers set out on a mission for 24 cold hours during the dawn of the year 2021. They had no idea what dangers they would face but together their bond grew stronger. Let this be a lesson to any evil wizards lurking in the realm of Tavern! We are happy we survived! We spent the last 24 hours coding, drinking energy drinks, coding, and not sleeping. It was a blast and we look forward to next year! ![problem solving page](https://cdn.discordapp.com/attachments/794419266052161566/797920267216093214/Screen_Shot_2021-01-10_at_12.10.12_PM.png) ## What we learned * Working with SVGs is hard * Estimates are hard * It's always a good thing to under-scope and over-deliver * Energy drinks work wonders ## What's next for Tavern Who knows?! We love the product and love solving algorithms together. Maybe the next time you prepare for interviews you'll be competing against a global leaderboard as a half-elf wizard!
## Inspiration

Initially, we struggled to find a project idea. After circling through dozens of ideas and the occasional hacker's block, we were still faced with a huge ***blank space***. In the midst of all our confusion, it hit us that this feeling of desperation and anguish is familiar to all thinkers and creators. There came our inspiration - the search for inspiration. Tailor is a tool that enables artists to overcome their mental blocks in a fun and engaging manner while leveraging AI technology. AI is very powerful, but finding the right prompt can sometimes be tricky, especially for children or those with special needs. With our easy-to-use app, anyone can find inspiration as swiftly as possible.

## What it does

The site helps artists generate creative prompts for DALL·E. By clicking the "add" button, a React component containing a random noun is added to the main container. Users can then specify the color and size of this noun. They can add as many nouns as they want, then specify the style and location of the final artwork. After hitting submit, a prompt is generated and sent to OpenAI's API, which returns an image.

## How we built it

It was built using React Remix, OpenAI's API, and a random noun generator API. Tailwind CSS was used for styling, which made it easy to create beautiful components.

## Challenges we ran into

Getting Tailwind installed, and installing dependencies in general. Sometimes our API wouldn't connect, and OpenAI rotated our keys since we were developing together. Even with Tailwind, it was sometimes hard to make the CSS do what we wanted. Passing functions and state between parent and child components in React was also difficult. We tried to integrate Twilio with an API call, but it wouldn't work, so we had to set up a separate backend on Vercel and manually paste the image link and phone number. Also, we learned Remix can't use react-speech libraries, so that was annoying.

## Accomplishments that we're proud of

* Great UI/UX!
* Connecting to the OpenAI DALL·E API
* Coming up with a cool domain name
* Sleeping more than 2 hours this weekend

## What we learned

We weren't really familiar with React, as none of us had really used it before this hackathon. We really wanted to up our frontend skills and selected Remix, a metaframework based on React, to do multipage routing. It turned out to be a little overkill, but we learned a lot and are thankful to the mentors. They showed us how to avoid overuse of Hooks, troubleshoot API connection problems, and use asynchronous functions. We also learned many more Tailwind CSS classes and how to use gradients.

## What's next for Tailor

It would be cool to have this website as a browser extension, maybe just to make it more accessible, or even to have it scrape websites for AI prompts. Also, it would be nice to implement speech-to-text, maybe through AssemblyAI.
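As a sketch of the prompt-assembly step plus a hedged image request: the dictionary of user selections, the style/location fields, and the REST call mirror the flow described in "What it does", but the variable names and endpoint details are assumptions rather than the project's actual code.

```python
import os
import requests

selections = [
    {"noun": "lighthouse", "color": "crimson", "size": "towering"},
    {"noun": "whale", "color": "silver", "size": "tiny"},
]
style, location = "watercolor painting", "on a foggy beach at dawn"

# Assemble one prompt string from the chosen nouns, then the scene-level options
parts = [f"a {s['size']} {s['color']} {s['noun']}" for s in selections]
prompt = f"{', '.join(parts)}, {location}, in the style of a {style}"
print(prompt)

resp = requests.post(
    "https://api.openai.com/v1/images/generations",  # OpenAI image-generation endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"prompt": prompt, "n": 1, "size": "512x512"},
    timeout=60,
)
print(resp.json()["data"][0]["url"])  # URL of the generated image
```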
## Inspiration ✨ Seeing friends' lives being ruined through **unhealthy** attachment to video games. Struggling with regulating your emotions properly is one of the **biggest** negative effects of video games. ## What it does 🍎 YourHP is a webapp/discord bot designed to improve the mental health of gamers. By using ML and AI, when specific emotion spikes are detected, voice recordings are queued *accordingly*. When the sensor detects anger, calming reassurance is played. When happy, encouragement is given, to keep it up, etc. The discord bot is an additional fun feature that sends messages with the same intention to improve mental health. It sends advice, motivation, and gifs when commands are sent by users. ## How we built it 🔧 Our entire web app is made using Javascript, CSS, and HTML. For our facial emotion detection, we used a javascript library built using TensorFlow API called FaceApi.js. Emotions are detected by what patterns can be found on the face such as eyebrow direction, mouth shape, and head tilt. We used their probability value to determine the emotional level and played voice lines accordingly. The timer is a simple build that alerts users when they should take breaks from gaming and sends sound clips when the timer is up. It uses Javascript, CSS, and HTML. ## Challenges we ran into 🚧 Capturing images in JavaScript, making the discord bot, and hosting on GitHub pages, were all challenges we faced. We were constantly thinking of more ideas as we built our original project which led us to face time limitations and was not able to produce some of the more unique features to our webapp. This project was also difficult as we were fairly new to a lot of the tools we used. Before this Hackathon, we didn't know much about tensorflow, domain names, and discord bots. ## Accomplishments that we're proud of 🏆 We're proud to have finished this product to the best of our abilities. We were able to make the most out of our circumstances and adapt to our skills when obstacles were faced. Despite being sleep deprived, we still managed to persevere and almost kept up with our planned schedule. ## What we learned 🧠 We learned many priceless lessons, about new technology, teamwork, and dedication. Most importantly, we learned about failure, and redirection. Throughout the last few days, we were humbled and pushed to our limits. Many of our ideas were ambitious and unsuccessful, allowing us to be redirected into new doors and opening our minds to other possibilities that worked better. ## Future ⏭️ YourHP will continue to develop on our search for a new way to combat mental health caused by video games. Technological improvements to our systems such as speech to text can also greatly raise the efficiency of our product and towards reaching our goals!
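A language-agnostic sketch (in Python, for consistency with the other examples) of the thresholding idea described above: the detector hands back per-emotion probabilities, and a voice line is queued when one of them spikes. The thresholds, cooldown, and clip names are illustrative assumptions.

```python
VOICE_LINES = {
    "angry": "calm_reassurance.mp3",
    "happy": "encouragement.mp3",
    "sad": "gentle_checkin.mp3",
}
THRESHOLD = 0.75        # how strong an expression must be before we react
COOLDOWN_FRAMES = 90    # avoid re-triggering on every single frame

def pick_voice_line(expressions, frames_since_last):
    """expressions: dict of emotion -> probability from the detector."""
    if frames_since_last < COOLDOWN_FRAMES:
        return None
    emotion, prob = max(expressions.items(), key=lambda kv: kv[1])
    if prob >= THRESHOLD and emotion in VOICE_LINES:
        return VOICE_LINES[emotion]
    return None

print(pick_voice_line({"angry": 0.88, "happy": 0.05, "neutral": 0.07}, frames_since_last=120))
```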
partial
## Inspiration

We were inspired by Flash Sonar (a method that helps people "see" via sound waves), but it takes months to learn, so we developed Techno Sonar, a guide for blind people.

## What it does

It uses ultrasonic sensors to detect objects and informs the user based on the size and distance of the object. If the object is high, it informs the user via sound; if it is a low object, it informs them with vibration delivered behind the calves. The user can personalize the range of the sensors and the sound level via a mobile application.

## How we built it

We used Arduino circuits because they are cheap, common, and easy to use, which keeps the price low. We used the Dart language to create a cross-platform mobile application.

## Challenges we ran into

We asked ourselves, *How can a blind person use a mobile application?* So we added an AI voice assistant that lets the user control the application by talking.

## Accomplishments that we're proud of

We developed a mobile application with its own voice assistant, and we made the product better and cheaper compared to older versions. We designed it to be compatible with any clothing, so it can be integrated into whatever the user wears. It is easy to use and made from common parts, so it is inexpensive.

## What we learned

We learned a lot about sound processing systems and gained experience with coding.

## What's next for Techno Sonar

Techno Sonar will be entered into many competitions to get the recognition it deserves, so that one day it will be helpful to plenty of blind people.
Got tech is-shoes? (Issues) We solve them for you :P

## Inspiration

Recognizing the shortage of advanced technology for individuals with visual impairments, we wanted to create a project that benefits this community and encourages more inclusive and user-friendly innovations. We noticed that the more advanced tech for visually impaired individuals included sensors that weren't self-sustaining. We wanted to set the bar higher so that those individuals wouldn't have to recharge their aids to navigate the world.

## What it does

With each stride, piezoelectric crystals convert mechanical stress into electricity, powering a complete circuit of ultrasonic sensors, an Arduino board, and Bluetooth components. The ultrasonic sensors track the distance of the closest obstacle in the walking direction and update it continuously in the app through a Bluetooth connection. If the obstacle's distance to the user falls under a certain threshold, the app sends out warnings at different levels to notify them.

## How we built it

Our working process was divided into three steps:

- Setting up the circuit
- Building our React Native app
- Connecting the hardware and software

We joined six pieces of piezoelectric crystal into three pairs, with cartons and styrofoam in between. The piezoelectric crystals were connected in series with an Arduino Nano board, a pair of ultrasonic distance sensors, and an HC-05 Bluetooth module. The circuit's functionality was first tested with an LED. After testing, we glued the crystals to the soles of the shoes, with the ultrasonic sensors at the front of the shoes. Collected data is transferred to serial ports through Bluetooth.

We approached the app with a minimalistic UI and focused more on the audio aspects for our targeted audience. The app consists of a landing page welcoming users to the application. Once they start their journey, the app moves right into distance-detection mode. Warnings are given in both audio and text form.

## Challenges we ran into

The biggest challenge we faced was working with limited resources, including hardware components and the time constraint. However, we managed to finish on time.

## Accomplishments that we're proud of

* We built a fully functional energy-harvesting circuit that sustains itself!
* Bettering the status quo for people with visual impairments

## What we learned

* Soldering, and converting AC to DC power for IIoT by building a bridge rectifier for the first time!

## What's next for Step-ergize

* More distance sensors
* Emergency location-sharing feature (voice-command)
## Inspiration

We realized how hard it is for visually impaired people to perceive objects coming toward them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!

## What it does

This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" the Assistant can do. The Assistant provides voice directions (which can easily be routed to Bluetooth devices) and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature is identifying moving obstacles and playing sounds so the person can recognize those moving objects (e.g. barking sounds for a dog, etc.).

## How we built it

It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone.

## Challenges we ran into

It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Multi-threading in the embedded architecture was also a challenge for us.

## Accomplishments that we're proud of

After hours of grinding, we were able to get the Raspberry Pi working, as well as implementing depth perception, location tracking using Google Assistant, and object recognition.

## What we learned

Working with hardware is tough; even though you can see what is happening, it is hard to interface software and hardware.

## What's next for i4Noi

We want to explore more ways i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate another feature where we play sounds of living obstacles so special care can be taken; for example, when a dog comes in front of the user, we produce barking sounds to alert them. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in people's lives.
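The write-up does not spell out the sensor wiring, but the obstacle-alert loop would look roughly like the sketch below, assuming an HC-SR04-style ultrasonic sensor and a buzzer on assumed GPIO pins; treat the pin numbers and distance threshold as placeholders.

```python
# A minimal sketch of an obstacle-alert loop on a Raspberry Pi, assuming an
# HC-SR04-style ultrasonic sensor and a buzzer; the pin numbers and threshold
# are placeholders, and the write-up does not specify the exact sensors used.
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18  # assumed BCM pin numbers
ALERT_DISTANCE_CM = 80

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def measure_distance_cm():
    # 10 microsecond trigger pulse, then time the echo pulse width.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    end = start
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2  # speed of sound in cm/s, halved for the round trip

try:
    while True:
        if measure_distance_cm() < ALERT_DISTANCE_CM:
            GPIO.output(BUZZER, True)   # obstacle close: sound the buzzer
        else:
            GPIO.output(BUZZER, False)
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```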
losing
## Inspiration

RL Solutions presented us with a problem: how might we encourage patients to maintain the diet and nutrition plans given by their healthcare provider? Our group realized that having a way to track your progress is vital to staying healthy.

## What it does

#EatingGoals provides the doctor and user a platform to keep track of calorie intake in an easy-to-manage way that encourages the user to continue with their plan. The doctor can input a specific calorie goal for each of their patients. Then, once the user eats something, they can enter the food into the website to get its calorie count. This data is stored so the user can see whether they are meeting their goal. Consistently meeting goals rewards the user with medals, to encourage them to continue.

## How we built it

When the user inputs their food item, the website uses the Nutritionix API to get the calorie count and other useful information about that food. The website itself is built using HTML5, JavaScript, jQuery, and CSS.

## Challenges we ran into

Initially, this was supposed to be an Android app. However, none of us had experience with Android Studio, so we weren't able to get anywhere, which resulted in us switching platforms to a web app. Getting the CSS to look nice and line up properly was a challenge for us, as it took a lot of time to get right. We initially had the idea to use voice recognition or a camera to input foods, but did not have nearly enough time to get it working.

## Accomplishments that we're proud of

We managed to get the Nutritionix API working, which was a huge win for us. Implementing our ideas and having them work was very satisfying as well.

## What we learned

We have all become more experienced in HTML and CSS. We learned that we should watch the scale of the project we take on.

## What's next for #EatingGoals

With more time, this web app can easily be expanded to add more features or improve what we currently have. Our idea of voice recognition or using the camera could be added to make it easier for people with disabilities to input foods.
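As a rough illustration of the calorie lookup described above (the project calls the API from jQuery on the frontend), here is a Python sketch against Nutritionix's natural-language nutrients endpoint; the endpoint, header names, and response fields are our best guess at the v2 API rather than the project's exact code.

```python
# A small sketch of the calorie lookup, done server-side in Python instead of
# the project's jQuery frontend. It assumes Nutritionix's v2 natural-language
# endpoint and credentials; field names may differ from what the project used.
import requests

NUTRITIONIX_URL = "https://trackapi.nutritionix.com/v2/natural/nutrients"

def calories_for(food_description, app_id, app_key):
    headers = {"x-app-id": app_id, "x-app-key": app_key}
    resp = requests.post(NUTRITIONIX_URL, json={"query": food_description}, headers=headers)
    resp.raise_for_status()
    foods = resp.json().get("foods", [])
    # Sum calories across everything the query matched (e.g. "2 eggs and toast").
    return sum(item.get("nf_calories", 0) for item in foods)

if __name__ == "__main__":
    total = calories_for("1 banana", app_id="YOUR_APP_ID", app_key="YOUR_APP_KEY")
    print(f"Estimated calories: {total}")
```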
## Inspiration

My inspiration was really just my daily life. I use apps like MyFitnessPal and LibreLink to keep track of all these things, but I feel like they don't paint the whole picture for someone with my condition. I'm really making this app for people like myself who have a challenging time dealing with diabetes, to simplify an aspect of their lives at least a little bit.

## What it does

The mobile app keeps track of your blood sugars for the day, taking data either from your wearable sensor or from finger pricks, and puts it side by side with the exercise you've done that day and the food you've eaten. It allows you to clearly quantify and compare how much you eat, exercise, and take insulin individually, and it also helps you clearly see the relationships those three activities have with each other. The MyFitnessPal API does a really good job of tracking the macronutrients in each meal and gets easier the more you use it.

## How we built it

I built the app using React Native, and it was my first time using it. I plan to integrate the MyFitnessPal API for the fitness and meals portion of the app and the Terra API to get sensor data, as well as the option to manually upload the CSV file of glucose logs that most glucometers produce.

## Challenges we ran into

## Accomplishments that we're proud of

Creating something meaningful to myself and other people using a passion/skill that is also meaningful to me.

## What we learned

I learned a lot about how to organize my files while making a large project, that companies are very stingy when it comes to healthcare-related APIs, and how to actually create a cross-platform mobile app.

## What's next for Diathletic

I plan to make the app fully functional, because right now there is a lot of dummy data. I wish to be able to use this app in my everyday life, because it's great to actually see the effects of something you have completed. I also REALLY hope that one stranger finds any sort of value in the software that I created.
## Inspiration

Social media today is focused on connecting users who share similar friend circles and interests. The recommendations that show up on users' feeds are geared towards their current beliefs, leading to increasingly polarized iterations of the information in the hopes of keeping them online for extended periods of time. This way of connecting users results in "echo chambers" of experiences, ideologies, and culture. Common Grounds is fundamentally built with the purpose of connecting users who might not share the traits that traditional social media looks for in potential connections, but who could still be good friends given the opportunity.

## Additional Background Info

Traditional social media is designed around the idea of a centralized network. Users accrue an increasing number of followers and likes. In recent years this has led to the rise of influencers, people who amass large followings on social media and thus have a disproportionate influence on others regardless of their knowledge and credibility on any given topic. This means that a retweet or post by a prominent figure on a supposed rumor could lead to its rapid spread into a common misperception. Common Grounds is aimed at building an egalitarian network of connections where one accrues "friends" not because they already have a large following or because they're a model, but because of the merit of their ideas. Further, because there are no followers or likes, users can focus on building meaningful connections in a stress-free environment.

## What it does

A social platform that uses OpenAI’s GPT-3 language prediction model to generate prompts designed to **spark conversation**, and to form connections between people with seemingly **differing** opinions.

* ML-generated questions and follow-up prompts
* Smart matching to pair users with differing opinions
* Video calling + option to mute/unmute
* Option to add & remove friends
* Dashboard to view weekly stats

Because there's no search feature for friends, no publicly viewable follower count, and therefore an absence of influencers, users build authentic relationships in an environment where there isn't pressure to increase their number of followers or likes.

## How we built it

Common Grounds is composed of two main components: a React frontend and a Python backend server. On the frontend, we use Firebase Auth for login, Twilio Video for video calling, and WebSockets for live, bidirectional client-server communication. Our frontend uses the Next.js React framework and is deployed to Vercel. On the backend, we used the AIOHTTP Python library to serve HTTP and WebSocket requests, Firestore for data persistence, Twilio Video for video calling, and OpenAI GPT-3 for intelligent discussion prompt generation. Our backend is deployed to Azure Web Apps.

## Challenges we ran into

* Designing a cohesive user experience
* Deploying the backend server and setting up SSL
* Complex state management and WebSocket connection issues on the frontend

## What's next for Common Grounds

* Closed Captioning: for increased accessibility for those who may be deaf or hearing-impaired; this could be extended to live language translation to increase the diversity of users
* Direct Messaging: allow users to message their connections, to plan times to continue their conversations
* More Sophisticated Matching & Prompts: over time, learn what types of matches yield the most meaningful discussion based on statistics such as call duration and friend rate
losing
## Inspiration

Social media has quickly become the primary source of news for many around the world, and the prevalence of fake news has grown with it. With a range of recent natural disasters and the spread of Coronavirus (COVID-19) increasing the stakes of acting on all available information, more and more individuals are beginning to fall victim to lies spread by fake news. Being an interesting issue to address, this led us towards discovering a market gap for a tool to aggregate, filter, and visualize global social media activity over a period of time. Our project, Re:Action, aims to empower users by providing them access to an aggregated and formatted set of tweets from around the world. This allows them to become aware of the inconsistencies characteristic of fake news stories relative to what the larger majority reports, so they are fully informed before they act on the ever-rapidly changing situation of our world.

## What it does

Users are able to query keywords and visualize any relevant tweets according to their geographical locations around the world. The keyword is used as a search parameter for both the news article and tweet scrapers, where a large collection of relevant elements of information is captured. Using type metadata, geolocation and element information is extracted and then displayed at the corresponding location on the web app's world map.

## How I built it

We began by planning and wireframing our project using Figma. For the front end, we used React and various custom-built JavaScript libraries to display hotspots and clusters of hits around the world. For the back end, we used Python & [Tweepy/GetOldTweets3] to implement our web scrapers and data processing. After serving our multi-threaded scripts with Flask, all relevant information was stored in a MongoDB Atlas database, where it was then retrieved and displayed on the map.

## Challenges I ran into

The official Twitter Search API that we used severely limited the number of calls we could make (a maximum of 300 queries over 18 requests every 15 minutes), which made obtaining a large enough data set to train our machine learning model difficult. Many of the tweets included incomplete or improperly formatted location data, which made the visualization/plotting process for the set of elements difficult. This forced us to rely on other methods to identify a suitable map location for the elements collected.

## Accomplishments that I'm proud of

Some of the libraries featured in the front end of our web app were made from scratch.
## Inspiration:

The app was born from the need to respond to global crises like the ongoing wars in Palestine, Ukraine, and Myanmar, which have made the importance of real-time, location-based threat awareness more critical than ever. While these conflicts are often headline news, people living far from the conflict zones may lack an immediate understanding of how quickly conditions change on the ground. Our inspiration came from a desire to bridge that gap by leveraging technology to provide a solution that could offer real-time updates about dangerous areas, not just in warzones but in urban centers and conflict-prone regions around the world.

## How we built it:

Our app was developed with scalability and responsiveness in mind, given the complexity of gathering real-time data from diverse sources. For the backend, we used Python to run a Reflex web app, which hosts our API endpoints and powers the data pipeline. Reflex was chosen for its ability to handle asynchronous tasks, crucial for integrating with a MongoDB database that stores the large volume of data gathered from news articles. This architecture allows us to scrape, store, and process incoming data efficiently without compromising performance.

On the frontend, we leveraged React Native to ensure cross-platform compatibility, offering users a seamless experience on both iOS and Android devices. React Native's flexibility allowed us to build a responsive interface where users can interact with the heat map, see threat levels, and access detailed news summaries all within the same app.

We also integrated Meta LLaMA, a hyperbolic transformer model, which processes the textual data we scrape from news articles. The model is designed to analyze and assess the threat level of each news piece, outputting both the geographical coordinates and a risk assessment score. This was a particularly complex part of the development process, as fine-tuning the model to provide reliable, context-aware predictions required significant iteration and testing.

## Challenges we faced:

The most pressing challenge was data scraping, particularly the obstacles put in place by websites that actively work to prevent scraping. Many news websites have anti-scraping measures in place, making it difficult to gather comprehensive data. To address this, we had to get creative with our scraping methods, using dynamic techniques that could mimic human-like browsing to avoid detection.

Another major challenge was iOS integration, particularly in working with location services. iOS tends to have stricter privacy controls, which required us to implement complex authentication mechanisms and permissions handling. Additionally, deploying the backend infrastructure presented challenges in ensuring that it scaled smoothly under heavy data loads, all while maintaining low-latency responses for real-time updates.

We also faced hurdles in speech-to-text functionality, as we aim to make the app more accessible by allowing users to interact with it via voice commands. Integrating accurate, multi-language speech recognition that can handle diverse accents and conditions in real-world environments is a work in progress.

## Accomplishments we're proud of:

Despite these challenges, we successfully built a dynamic heat map that allows users to visually grasp the intensity of threats in different geographical areas. The Meta LLaMA model was another major achievement, enabling us to not only scrape news articles but also analyze and assign a threat level in real time.
This means that a user can look at the app, see a particular area highlighted as high risk, and read news reports with data-backed assessments. We've created something that helps people stay informed about their environment in a practical, visually intuitive way.

Moreover, building a fully functional app with both backend and frontend integration, while using cutting-edge machine learning models for threat assessment, is something we're particularly proud of. The app is capable of processing large datasets and serving actionable insights with minimal delays, which is no small feat given the technical complexity involved.

## What we learned:

One of the biggest takeaways from this project was the importance of starting with the fundamentals and building a solid foundation before adding complex features. In the early stages, we focused on getting the core infrastructure right—ensuring the scraping, data pipeline, and database were robust enough to handle scaling before moving on to model integration and feature expansion. This allowed us to pivot more easily when challenges arose, such as working with real-time data or adjusting to API limitations.

We also learned a great deal about the nuances of natural language processing and machine learning, especially when it comes to applying those technologies to dynamic, unstructured news data. It’s one thing to build an AI model that processes text in a controlled environment, but real-world data is messy, often incomplete, and constantly evolving. Understanding how to fine-tune models like Meta LLaMA to give reliable assessments on current events was both challenging and incredibly rewarding.

## What’s next:

Looking ahead, we plan to expand the app’s capabilities further by integrating speech-to-text functionality. This will make the app more accessible, allowing users to dictate queries or receive voice-based updates on emerging threats without having to type or navigate through screens. This feature will be particularly valuable for users who may be on the move or in situations where typing isn’t practical.

We’re also focusing on improving the accuracy and scope of our web scrapers, aiming to gather more diverse data from a broader range of news sources while adhering to ethical guidelines. This includes exploring ways to improve scraping from difficult sites and even partnering with news outlets to gain access to structured data.

Beyond these immediate goals, we see potential in scaling the app to include predictive analytics, using historical data to forecast potential danger zones before they escalate. This would help users not only react to current events but also plan ahead based on emerging patterns in conflict areas. Another exciting direction is user-driven content, allowing people to report and share information about dangerous areas directly through the app, further enriching the data landscape.
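To make the ingestion pipeline described above more concrete, here is a heavily simplified sketch of the article-to-heat-map step: score an article, attach coordinates, and store the record in MongoDB. The model call is stubbed, and the connection string, database, and field names are placeholders rather than the app's real schema.

```python
# A simplified sketch of the article -> (coordinates, threat score) -> MongoDB
# pipeline described above. The model call is stubbed out (the project uses a
# LLaMA-based model), and the connection string, database, and field names are
# placeholders rather than the project's real schema.
from datetime import datetime, timezone
from pymongo import MongoClient

def assess_threat(article_text: str) -> dict:
    # Stand-in for the LLaMA-based assessment; a real call would return
    # coordinates extracted from the article plus a 0-1 risk score.
    return {"lat": 48.38, "lon": 31.16, "risk": 0.82}

def ingest_article(article: dict, mongo_uri: str = "mongodb://localhost:27017"):
    assessment = assess_threat(article["text"])
    record = {
        "title": article["title"],
        "url": article["url"],
        "lat": assessment["lat"],
        "lon": assessment["lon"],
        "risk": assessment["risk"],
        "ingested_at": datetime.now(timezone.utc),
    }
    client = MongoClient(mongo_uri)
    client["threatmap"]["articles"].insert_one(record)
    return record  # the heat map layer can aggregate these records by location

if __name__ == "__main__":
    demo = {"title": "Example report", "url": "https://example.com/a", "text": "..."}
    print(ingest_article(demo))
```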
## Inspiration

Utilize our passion for mobile applications and UI/UX to present information in an easy-to-understand way.

## What it does

Queries Vitech's API and generates graphs based on the results.

## How we built it

Caffeine. Swift. iOS. Network calls. The Charts.io library. Throw in some dictionaries and arrays. Sprinkle teamwork on top.

## Challenges we ran into

Network timeouts, API maintenance.

## Accomplishments that we're proud of

3 people who had never met before were able to successfully work together and churn out a highly functional app.

## What we learned

Teamwork = Major Key.

## What's next for YHack2016
losing
## Inspiration

Old-fashioned scrapbooks, where you can gather photos, record your feelings from that moment, and decorate with a fun theme!

## What it does

Allows you to record moments in your life and view them in a chronological timeline.

## How we built it

We used Kintone for our backend, React + Vite as our frontend framework, Chakra UI and hand-drawn pixel art for styling + design, Auth0 for authentication, and the Cohere API to do semantic analysis on event descriptions and generate sentimental sayings about datebooking for the login page.

## Challenges we ran into

The challenges that we ran into were plentiful; some included setting up Kintone and making the elements on the web app aesthetically pleasing (we are not your go-to UI/UX devs).

## Accomplishments that we're proud of

We are proud of being able to complete a working MVP!

## What we learned

We learned how to work around the CORS limitations of the Kintone API and how to host a static site using .tech domains with GitHub Pages.

## What's next for Datebook

Some immediate next features would be to enable editing event entries and supporting multiple datebooks/timelines!
# Healora

## Background

**Why should nurses and doctors have to rely solely on manual inputs when modern healthcare technology has so much more potential?**

In hospitals and healthcare settings, **medical professionals are often overwhelmed**, managing complex workflows, multiple patients, and endless data inputs, often missing key details that could improve patient care. But what if **technology could alleviate this burden**, helping healthcare workers make quicker, more accurate decisions?

In today’s world, we have **empathetic AI, predictive models**, and **vast amounts of data** at our disposal. Yet, many hospital systems continue to rely on **outdated methods** for managing symptoms, diagnoses, and patient interactions. **Healthcare workers lose valuable time** manually entering and interpreting data when **AI could be working alongside them**.

**Healora was created to bridge that gap.** Empowering medical staff with **AI-driven tools**, Healora leverages **Nurse Joy**, a virtual assistant, to intelligently **track symptoms**, perform **predictive analysis**, and provide **emotional understanding** in patient interactions. With Healora, hospitals can **streamline workflows**, allowing nurses and doctors to focus more on what truly matters—**saving lives and improving patient outcomes**.

![image](https://github.com/user-attachments/assets/740d714d-f020-4743-9f51-25ea500a2408)

## What is Healora?

Healora is an AI-powered healthcare platform designed to streamline patient care by integrating real-time health monitoring, virtual assistance, and efficient data management for healthcare providers.

1. The pre-screening page allows patients to enter their initial health data and symptoms, which helps healthcare providers assess their condition quickly before moving forward with more detailed examinations.

![image](https://github.com/user-attachments/assets/d9a38dbd-0ef4-44c3-8806-24692db0a2db)

2. The treatment page is where Nurse Joy, Healora's AI assistant, interacts with patients. This page also displays vital signs, such as respiratory rate, blood pressure, and heart rate, allowing medical staff to monitor patient health in real time.

![image](https://github.com/user-attachments/assets/eec45b75-70cf-4ab3-81ea-4727d2feb651)

3. The patient list provides healthcare professionals with an overview of all patients currently under care, including their basic health details and treatment status.

![image](https://github.com/user-attachments/assets/220c5667-9c20-4c53-9cf8-0ca0fff03ed2)

4. The staff table offers a detailed view of the healthcare team, enabling easy management of roles and responsibilities within the medical staff.

![image](https://github.com/user-attachments/assets/cc24c6a6-f2cf-40ac-b9bf-73257b0588c3)

## Features

* **Symptom Tracking**: Nurse Joy, Healora's virtual assistant, allows patients to log symptoms efficiently. These symptoms are recorded and analyzed to help healthcare workers make data-driven decisions.
* **Predictive Analysis**: Healora uses AI models to perform predictive analysis on the collected data, offering potential diagnoses and next steps for treatment based on the symptoms entered.
* **Agentic Backend**: Healora uses AI agents for the majority of its backend tasks, powered by Fetch.AI
* **Empathetic AI**: Healora incorporates emotional intelligence to provide compassionate, context-aware responses, ensuring that patients feel understood and supported during their care, powered by Hume AI
* **AI-Generated Feedback**: Nurse Joy provides real-time feedback after each interaction, helping healthcare professionals review patient information, track symptoms, and make informed decisions quickly.
* **Real-Time Data Analysis**: Healora processes patient data in real time, providing medical staff with up-to-date information on patient vitals and symptoms.
* **Voice Interaction**: Healora supports voice input, giving patients flexible communication options based on their preferences.

## Planning

We began by thoroughly researching the current state of hospital workflows, particularly focusing on how AI could alleviate the pressures faced by healthcare workers. From understanding existing tools to studying user needs, our research laid the foundation for building Healora.

Once we had a clear understanding of the problem space, we created user personas and developed user flows to guide our design. Using Figma, we designed prototypes to rapidly iterate and refine the user experience. Our focus was on creating a system that is intuitive for both healthcare workers and patients, with a clean, user-friendly interface that reflects the needs of medical professionals.

![image](https://github.com/user-attachments/assets/b34917e6-f38f-4ce0-8895-d0250b9bea22)

## System Architecture

![image](https://github.com/user-attachments/assets/457bb7f2-bfd7-4809-bda5-1357e1cd26c0)

### Frontend

**Healora** is a responsive, desktop- and mobile-friendly web application built using Next.js, with Tailwind CSS for streamlined and highly customizable styling. We use shadcn components to provide a cohesive design system and Framer Motion to add smooth, engaging animations for an improved user experience.

Our frontend efficiently manages real-time communication between healthcare professionals and Nurse Joy, ensuring seamless interaction. Data flows between users and the backend are handled securely, providing healthcare workers with immediate feedback from the AI and allowing for effortless tracking of symptoms, analysis, and patient interactions.

### Backend

Our project's backend is a robust and scalable architecture designed to deliver advanced AI capabilities. It employs **Fetch.AI agents** orchestrated through **FastAPI** to manage workflows and interactions efficiently. We use **Supabase with PostgreSQL** for real-time data storage and management. For language-based analysis, we have integrated **Llama** and **Mixtral** models via **Groq**, while **Hume AI** powers our conversational tasks with advanced sentiment and emotional analysis.
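As a rough sketch of how such a FastAPI backend could expose a symptom-intake endpoint (not Healora's actual code), the example below stubs the agent/LLM analysis step; the route, model, and helper names are assumptions.

```python
# A minimal sketch (not the team's actual code) of a FastAPI backend like the
# one described above: a symptom-intake endpoint that hands the text to an
# analysis step. Endpoint name, the SymptomReport model, and analyze() are all
# assumptions for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Healora-style backend sketch")

class SymptomReport(BaseModel):
    patient_id: str
    description: str

def analyze(description: str) -> dict:
    # Stand-in for the agent / LLM call (Fetch.AI agents plus Llama/Mixtral via
    # Groq in the write-up); returns a toy triage suggestion here.
    urgent = any(word in description.lower() for word in ("chest pain", "bleeding"))
    return {"triage": "urgent" if urgent else "routine"}

@app.post("/symptoms")
async def submit_symptoms(report: SymptomReport):
    result = analyze(report.description)
    # In the real system the report and result would be persisted to
    # Supabase/PostgreSQL and surfaced on the treatment page.
    return {"patient_id": report.patient_id, **result}

# Run with: uvicorn sketch:app --reload
```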
![image](https://github.com/user-attachments/assets/a570e5ab-691c-4da4-8a6d-3a511330e01d)

*The technologies that we used to power Healora.*

## To run the project

Clone the repository at [Healora](https://github.com/Zalinto/med-ai)

Create a **.env** file, place it in the root directory, and fill it in with the API keys for your configuration:

```
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_KEY=
SESSION_SECRET=
SESSION_EXPIRES_IN=
NEXT_PUBLIC_DEVELOPMENT=
NEXT_PUBLIC_BASE_LOCAL_URL=
NEXT_PUBLIC_BASE_PROD_URL=
HUME_API_KEY=
HUME_API_SECRET=
HUME_CONFIG_ID=
NEXT_PUBLIC_HUME_CONFIG_ID_SUK=
HUME_API_KEY_SUK=
HUME_SECRET_KEY_SUK=
```

Run the following commands (terminal) in the root directory:

```
npm install
npm run dev
```

## Use cases

![image](https://github.com/user-attachments/assets/e3bfdf7f-fbfb-4439-a364-b8b95eb09f8c)

## Takeaways

### What we learned

Through building Healora, we discovered the complexities of integrating AI into healthcare workflows. We learned how crucial it is to blend advanced technology with empathy to enhance patient care. We also deepened our understanding of AI agents and language models, realizing their potential to make a real difference in medical workflow management and nurse-assistant settings.

### Accomplishments

We’re incredibly proud of what we achieved during this hackathon. Not only did we integrate a wide range of features like symptom tracking, AI-powered predictive analysis, and real-time empathetic feedback, but we also managed to design a clean, user-friendly interface tailored for healthcare professionals. This project also gave us the opportunity to apply research in meaningful ways, by reading research papers and using user personas to ensure Healora meets the specific needs of both nurses and patients.

On the design side, we’re proud of the consistent and intuitive UI built using Tailwind CSS and Framer Motion for smooth interactions. Healora’s design language prioritizes clarity and accessibility, catering to both healthcare workers and patients alike.

### What's Next

Moving forward, we plan to focus on scaling Healora’s AI capabilities. While we’ve implemented empathetic AI and predictive analysis, we see room for improvement in terms of diagnosis accuracy and real-time feedback. Additionally, we aim to expand Healora’s capabilities by incorporating more comprehensive datasets to improve symptom recognition and predictive healthcare outcomes. We also want to refine Healora’s security and privacy features, ensuring patient data is handled with the utmost care and confidentiality.

Our next steps include conducting more user testing, enhancing multilingual support, and refining Healora's user experience for different healthcare roles (nurses, doctors, HR personnel). Healora’s journey is just beginning, and we believe it has the potential to revolutionize how healthcare professionals interact with patients, improving both workflow efficiency and patient care.
## Inspiration

Our friend Chris is a pretty epic guitarist. So we made C.H.R.I.S. to help him and guitar players like him practice and play better.

## What it does

C.H.R.I.S. takes in a guitar tab (as a .txt file) and a BPM (integer value) specified by the user, and presents the tab to them as they play through it. The user is then scored on their overall accuracy through the song. It is currently still a prototype.

## How we built it

We used Python to develop C.H.R.I.S.

## Challenges we ran into

* The greatest challenge that we faced was figuring out how to turn audio files into a frequency that we could then represent as a 'note' on the guitar.
* We also struggled with determining how to score and evaluate users; namely, should we record the user's notes in tab format and compare the two files at the end, or should we evaluate the user's performance as they go? We ended up selecting the latter option.

## Accomplishments that we're proud of

We're super proud of our 1 A.M. breakthrough regarding how to transduce live audio signals into frequency values.

## What we learned

We're all first-time hackers with varying levels of programming knowledge, so HackWestern was a new experience for all of us. One specific skill that we learned was working with Python libraries (matplotlib, scipy, etc.).

## What's next for C.H.R.I.S. (Cycling Human Readable Instrumental Score)

* **Personalized Feedback**: We hope to score users as they use C.H.R.I.S., so that they can receive specific feedback on the sections of the song that they performed more poorly on.
* **Further Gamification**: To gamify C.H.R.I.S., we would allow users to track their progress over time for each song, and provide rewards as users engage with C.H.R.I.S. and improve their scores.
* **Improving Note Recognition**: The current version of C.H.R.I.S. uses a relatively unsophisticated method of recognizing notes. A future update would fine-tune this process to make C.H.R.I.S. better at recognizing chords and other multi-note combinations.
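The write-up doesn't say exactly how C.H.R.I.S. maps audio to notes, but a simple FFT peak pick is one way to prototype that step; the windowing, note table, and file name below are assumptions.

```python
# A rough sketch of an "audio -> frequency -> note" step using a simple FFT
# peak pick. The write-up doesn't describe C.H.R.I.S.'s exact method, so treat
# the windowing, tiny note table, and file name here as assumptions.
import numpy as np
from scipy.io import wavfile

# Standard-tuning open strings (Hz) as a tiny example note table.
NOTE_TABLE = {"E2": 82.41, "A2": 110.00, "D3": 146.83, "G3": 196.00, "B3": 246.94, "E4": 329.63}

def dominant_frequency(samples: np.ndarray, rate: int) -> float:
    samples = samples.astype(np.float64)
    if samples.ndim > 1:                      # stereo -> mono
        samples = samples.mean(axis=1)
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / rate)
    return float(freqs[np.argmax(spectrum)])

def nearest_note(freq: float) -> str:
    return min(NOTE_TABLE, key=lambda n: abs(NOTE_TABLE[n] - freq))

if __name__ == "__main__":
    rate, samples = wavfile.read("pluck.wav")   # placeholder recording
    f = dominant_frequency(samples, rate)
    print(f"{f:.1f} Hz ~ {nearest_note(f)}")
```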
partial
## Inspiration

Every day, more than 130 people in the United States alone die from an overdose of opioids, which include prescription painkillers and other addictive drugs. The rate of opioid abuse has been rising steadily since the 1990s and has since developed into a serious public health problem. Roughly 21-29 percent of patients prescribed painkillers end up misusing them, and 4-6 percent of those transition to heroin. Not all of this misuse is intentional; some people simply forget when they took their pills or how many to take, and end up hurting themselves by accident. Additionally, due to the addictive nature of the drugs and the chronic pain they seek to treat, many people take more than necessary because of a dependency on them. The U.S. Department of Health and Human Services has taken steps to help with this crisis, but by and large, misuse of opioids is still a rapidly growing problem in the United States.

Project Bedrock seeks to help solve some of these problems. Our team was inspired by previous hackathon-built automated pill dispensers, but we wanted to take it a step further with a tamper-proof system. Our capsule is pressurized, so if anyone breaks through to access their pills at the wrong time, a change in pressure is detected and emergency services are notified. Our convenient app allows for scheduling, dosages, and data analytics from the perspective of a health care administrator.

## Components

* ABS plastic
* Acetone ABS slurry - used as internal sealant
* Silicone sealant - used for the cap and as a failsafe
* Pneumatics: ball valves, PEX pipe, 1/8 and 1/4 inch NPT pipe
* Raspberry Pi 3
* BMC180 barometer
* Standard servo motors

Overall, our hardware list is simple, and we worked to maximize functionality out of few components. Our system is modular, so parts can be replaced and repaired rather than having to replace the entire unit.

## What it does

Bedrock can be explained in two parts: the security system and the dispenser system.

The security: The chassis is made of acrylonitrile butadiene styrene (ABS) plastic. We chose ABS because of its high tensile strength and excellent resistance to physical impact and chemical corrosion. Its high durability rating makes it difficult to physically break into the system in the event that an addict wants access to their pills at the wrong time. The main compartment is kept at a pressure of 20 PSI, compared to atmospheric pressure's 14.7 PSI. Inside the compartment is a barometric (pressure) sensor that constantly reads the internal pressure in the container. If a user were to attempt to break the dispenser to gain access to their pills, the sealed compartment would be exposed to atmospheric pressure and drop in pressure. Once the barometer detects this pressure drop, the system would immediately contact emergency services to investigate the potential overdose so it can be treated as soon as possible.

The dispenser: The dispenser can be timed with the interval a doctor sets based on the medication. To maintain the internal air pressure of the compartment, there is a two-part dispenser to release a pill. There are two ball valves that can shut to be airtight. First, the innermost valve opens and releases a pill into a chamber. Then, that valve closes and the outermost valve opens. The pill is now accessible, and the compartment never loses any pressure throughout the process.

## How we built it

We used a 3D printer to make all ABS parts, including the main enclosure and part of the release mechanism.
We used an acetone ABS slurry to seal the inside of the enclosure to make it airtight and ensure there is minimal fluctuation in pressure during the lifetime of the unit. Other than that, most parts are stock.

## What's next for Bedrock

We hope to take Bedrock further on the software side and use IoT and wireless software to control dosages and timing remotely. Additionally, we would like to use data analytics, with user permission, to see what proportion of people take their proper dosages at the right times, attempt to consume medication at incorrect time intervals, forget their medication, or attempt to break into their Bedrock device. With this data we would be able to communicate with the pharmaceutical industry and optimize concentrations of medicine for different people's memory periods. Through this, we can work with people's memory timing and patience to ensure proper consumption of potentially dangerous drugs.
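A simplified sketch of the tamper-detection loop implied above might look like the following; the barometer read is stubbed (the exact driver isn't specified), and the thresholds and alert hook are assumptions.

```python
# A simplified sketch of the tamper-detection loop described above: watch the
# sealed compartment's pressure and raise an alert if it falls toward ambient.
# The sensor read is stubbed (the write-up lists a barometer on a Raspberry Pi 3
# but not the driver library), and the threshold and alert hook are assumptions.
import random
import time

SEALED_PSI = 20.0
AMBIENT_PSI = 14.7
TAMPER_THRESHOLD_PSI = 18.0   # well above ambient, well below the sealed pressure

def read_pressure_psi() -> float:
    # Stub: replace with an actual barometer read (e.g. over I2C). Here we just
    # simulate a reading near the sealed pressure.
    return SEALED_PSI - random.random() * 0.2

def notify_emergency_services(pressure: float) -> None:
    # Placeholder: the real device would page a caregiver / emergency contact.
    print(f"ALERT: compartment breached, pressure now {pressure:.1f} PSI")

def monitor(poll_seconds: float = 1.0) -> None:
    while True:
        pressure = read_pressure_psi()
        if pressure < TAMPER_THRESHOLD_PSI:
            notify_emergency_services(pressure)
            break
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()
```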
## Inspiration

Every member of our team has needed medicine in the middle of the night and realized that we’d run out. This might be okay for over-the-counter medicine like Tylenol, but for prescription medicines - especially those that offer critical, life-saving functionalities and must be taken on time - this is a large problem, one that is actually much more extensive than it seems at first glance. In most cases, a completed bottle of medication means the prescription must be renewed by the doctor even before a refill can be acquired from the pharmacy. Waiting lists for doctor’s appointments are extensive, often on the order of weeks, and many people have work schedules that conflict with the hours pharmacies are open to dispense medication - all presenting unnecessary hindrances to something as crucial as getting medicine. This is why we designed and developed DispenseRX: a Dispensing pipeline for Emergency Resources.

## What it does

DispenseRX is an end-to-end prescription medicine pipeline, from facilitating diagnostics to dispensing final prescriptions. Using either a Microsoft Azure-based chatbot interface or a phone call connected to RevSpeech’s speech-to-text service, patients can identify the symptoms they are experiencing and either request review for a potential new prescription or request a renewal of an existing prescription. This request is sent to doctors, who can securely authorize the document using DocuSign’s electronic signature toolkit.

Next, this authorization is piped into Rigetti’s quantum computing platform, in which an entangled pair of quantum bits is generated as highly secure authorization tokens. One of the qubits is sent to an authentication server, while the other qubit is sent to the patient, along with a QR code representing the prescription. The patient can drive to an automated prescription dispensing machine, for which we have developed a prototype, and scan the QR code. This identifies the prescription and verifies its authenticity by running multiple reads on the qubit to ensure entanglement with the original reference generated for that prescription. A second layer of authentication is performed by verifying that the user's geographical location corresponds with that of the dispenser. Once integrity is confirmed, the user is prompted to collect their prescription and the medicine is dispensed.

## How we built it

We divided up each of the segments of this project (of which there were many) among our team members. One developed the integration with Rigetti's quantum computing platform, alongside a Rigetti representative who marveled at this unique application of quantum computing. One member developed both the mobile app and the prototype automated dispenser. One member developed the QR-code and authentication integration frameworks. We developed the chatbot and speech-to-text front-end functionality last, since we focused on creating the hardware prototype and observing its successful functionality, especially in conjunction with cloud-based quantum authentication.

## Challenges we ran into

All stages, from hardware to software. For example, we tested four different dev boards and none of our hardware worked until seventeen hours in. Not even the Rigetti representative knew how to implement authentication using quantum computers in a framework like this - we had a unique concept, and so had to develop the entire methodology to integrate this from scratch.
We use one phone to demonstrate because during the hackathon we developed the app for Android, and only one team member had an Android phone. In reality, there will be a standalone digital platform on the dispenser, and it will read the QR codes on customers' phones. In our demonstration, the QR code was shown on the computer where it was generated.

## Accomplishments that we're proud of

Quantum computing for authentication. A viable, robust prototype dispenser mechanism. A novel prescription pipeline.

## What we learned

All of the technologies we used. Time management when it comes to teamwork, and staying up more than 24 hours straight.

## What's next for DispenseRX

Since DispenseRX serves an important and widespread issue within our country’s consumer-side pharmaceutical industry, and those of countries around the world, we want to get it there - we have developed the software to be easily scalable.
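For illustration, generating and measuring an entangled (Bell) pair on Rigetti's stack looks roughly like the pyQuil sketch below; it uses the pyQuil 3-style API against the local QVM simulator, and the exact run/readout calls vary by version, so treat it as a sketch of the idea rather than the team's implementation.

```python
# A pyQuil (v3-style) sketch of preparing and measuring a Bell pair, the kind
# of entangled token the pipeline above describes. Requires the quilc compiler
# and QVM simulator running locally; the QPU name and shot count are arbitrary.
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

program = Program()
ro = program.declare("ro", "BIT", 2)
program += H(0)              # put qubit 0 into superposition
program += CNOT(0, 1)        # entangle qubit 1 with qubit 0
program += MEASURE(0, ro[0])
program += MEASURE(1, ro[1])
program.wrap_in_numshots_loop(100)

qc = get_qc("2q-qvm")        # local simulator; a real Rigetti QPU would be named here
executable = qc.compile(program)
bitstrings = qc.run(executable).readout_data.get("ro")

# On a Bell pair the two readouts agree on (essentially) every shot.
agreements = sum(int(a == b) for a, b in bitstrings) / len(bitstrings)
print(f"Correlated readouts: {agreements:.0%}")
```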
View the presentation at the following link: <https://youtu.be/Iw4qVYG9r40>

## Inspiration

During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group took medication for health-related reasons, and as a result had certain external medications that result in negative drug interactions. More often than not, one of us is unable to take certain other medications (e.g. Advil, Tylenol) and even certain foods.

Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take.

## What it does

Our mobile application scans the barcodes of medications and tells the user what the medication is and any negative interactions that follow it, to ensure that users don't experience the negative side effects of drug mixing.

## How we built it

Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done in Java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a Python backend and a Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which then scans the barcode and sends the drug information back to the Flutter mobile application.

## Challenges we ran into

The consistent challenge that we seemed to run into was the integration between our parts. Another challenge was that one group member's laptop just imploded (and stopped working) halfway through the competition; Windows recovery did not pull through, and the member had to grab a backup laptop and set the entire thing up for smooth coding.

## Accomplishments that we're proud of

During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end. Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative.

## What we learned

Throughout the project, we gained valuable experience working with various skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All these skills were something we'd only seen or heard of elsewhere, but learning and subsequently applying them was a new experience altogether. Additionally, throughout the project we encountered various challenges, and each one taught us a new outlook on software development. Overall, it was a great learning experience for us, and we are grateful for the opportunity to work with such a diverse set of technologies.

## What's next for Medisafe

Medisafe has all 3 dimensions to expand on, being the baby app that it is.
Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd also like to have the possibility to take an image from the gallery instead of fresh from the camera. Lastly, we hope to be able to implement settings like a default drug to compare to, dosage dependency, etc.
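Tying back to the "How we built it" flow, a backend that accepts the photo, decodes the barcode, and returns interactions could be sketched as below; Flask, pyzbar, and the CSV layout are assumptions, since the project's exact backend code isn't shown.

```python
# A rough sketch of the backend flow described above (image in via POST ->
# barcode decoded -> interactions looked up). The project's exact stack isn't
# shown, so Flask, pyzbar, and the CSV layout here are assumptions.
import csv
import io

from flask import Flask, jsonify, request
from PIL import Image
from pyzbar.pyzbar import decode

app = Flask(__name__)

def load_interactions(path="interactions.csv"):
    # Assumed columns: barcode, drug_name, interactions (semicolon-separated).
    with open(path, newline="") as f:
        return {row["barcode"]: row for row in csv.DictReader(f)}

DRUGS = load_interactions()

@app.post("/scan")
def scan():
    image = Image.open(io.BytesIO(request.files["photo"].read()))
    codes = decode(image)
    if not codes:
        return jsonify({"error": "no barcode found"}), 400
    barcode = codes[0].data.decode("utf-8")
    drug = DRUGS.get(barcode)
    if drug is None:
        return jsonify({"error": "unknown barcode", "barcode": barcode}), 404
    return jsonify({
        "drug": drug["drug_name"],
        "interactions": drug["interactions"].split(";"),
    })

if __name__ == "__main__":
    app.run(debug=True)
```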
partial
## Inspiration

Old technology has always had a certain place in our hearts. It is fascinating to see such old and simple machines produce such complex results. That's why we wanted to create our own emulator of an 8-bit computer: to learn and explore this topic, and also to make it accessible to others through this learning software.

## What it does

It simulates the core features of an 8-bit computer. We can write low-level programs in assembly to be executed on the emulator. It also displays a terminal output to show the results of the program, as well as a window onto the actual memory state throughout the program.

## How we built it

Using Java, ImGui, and LWJGL.

## Challenges we ran into

The custom design of the computer was quite challenging to arrive at, as we were trying to keep the project reasonable yet engaging. Finding information on how 8-bit computers work and understanding it in less than a day also proved to be hard.

## Accomplishments that we're proud of

We are proud to present an actual working 8-bit computer emulator that can run custom code written for it.

## What we learned

We learned how to design a computer from scratch, as well as how assembly works and how it can be used to produce a wide variety of outputs.

## What's next for Axolotl

Axolotl can be improved by adding more modules to it, like audio and a more complex GPU. Ultimately, it could become a full-fledged computer in itself, capable of anything a normal computer can accomplish.
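The emulator itself is written in Java, but the heart of any 8-bit machine is a fetch-decode-execute loop like the toy Python sketch below; the opcode set is invented purely for illustration.

```python
# The emulator is written in Java; this is just a toy Python sketch of the
# fetch-decode-execute loop at the heart of any 8-bit machine. The opcode set
# (load, add, output, halt) is invented for illustration.
LDA, ADD, OUT, HLT = 0x01, 0x02, 0x03, 0xFF

def run(memory: list[int]) -> None:
    acc, pc = 0, 0                      # 8-bit accumulator and program counter
    while True:
        opcode = memory[pc]             # fetch
        if opcode == LDA:               # decode + execute
            acc = memory[memory[pc + 1]]
            pc += 2
        elif opcode == ADD:
            acc = (acc + memory[memory[pc + 1]]) & 0xFF   # wrap to 8 bits
            pc += 2
        elif opcode == OUT:
            print(acc)
            pc += 1
        elif opcode == HLT:
            break
        else:
            raise ValueError(f"unknown opcode {opcode:#04x} at {pc}")

# Program: load mem[6], add mem[7], print, halt; data lives after the code.
run([LDA, 6, ADD, 7, OUT, HLT, 40, 2])   # prints 42
```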
## Inspiration

Driven by the harrowing reality that civilians comprised a staggering 95% of all cluster munition casualties in 2022, we were impassioned to devise a solution that could confront this pressing humanitarian crisis head-on. With up to 40% of these munitions failing to explode upon impact, the threat to civilian lives persists long after conflict ceases. In regions like Laos, where between 1964 and 1973, 260 million cluster bomblets were dropped, and a chilling 80 million failed to detonate, the urgency of our mission became abundantly clear. Inspired by the imperative to safeguard innocent lives in post-war zones, we embarked on the development of Project Horus.

## What it does

Project Horus utilizes cutting-edge technology to detect and localize unexploded cluster munitions in conflict-affected areas. Using a Parrot drone equipped with a custom-trained Convolutional Neural Network (CNN), our system autonomously scans vast territories, identifying potential threats. The drone's onboard Inertial Measurement Unit (IMU) aids in precisely localizing the detected munitions, even in electronic warfare or GPS-denied environments. Results are visualized in real-time as a heatmap, providing actionable insights to demining teams and enabling targeted removal of these deadly remnants of war.

## How we built it

We started by creating a robust training dataset by printing out images of cluster munitions, placing them on the ground, and capturing a video to partition into frames. Our CNN, built using TensorFlow, achieved an impressive 98% validation accuracy after rigorous training. Integration with the Parrot drone involved deploying grid search algorithms to autonomously search for munitions, leveraging the drone's capabilities for real-time bomb detection. Overcoming challenges such as uploading custom assets into the simulator and optimizing the model to run efficiently on non-hardware accelerated systems, we created a streamlined solution ready for deployment in the field.

## Challenges we ran into

Throughout the development process, we encountered several challenges, including difficulties uploading custom assets into the simulator and optimizing the CNN to run efficiently on hardware-constrained systems. Despite these hurdles, we adapted our approach, moving the computer vision processing off the drone and onto a separate system, ensuring our solution remained viable and effective.

## Accomplishments that we're proud of

We're proud to have successfully implemented grid search and bomb-detection capabilities on the Parrot drone, as well as leveraging the IMU for precise localization of munitions. Additionally, visualizing the detection results in a heatmap provides actionable intelligence for demining teams, marking a significant achievement in our mission to save civilian lives.

## What's next for Project Horus

Moving forward, we aim to enhance the robustness of our model by incorporating synthetic and real-world data for fine-tuning. Furthermore, we envision implementing multi-agent reinforcement learning techniques to enable collaborative scanning by fleets of drones, enhancing efficiency in covering large areas. Ultimately, we aspire to deploy Project Horus alongside demining teams in regions such as Ukraine, Israel, Afghanistan, and Southeast Asia, contributing to the safe removal of cluster munitions and the protection of civilian populations.
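The exact network isn't described, so purely as an illustration, a small TensorFlow/Keras binary classifier over extracted video frames might be set up like this; the layer sizes, input shape, and directory layout are assumptions, not the team's model.

```python
# A generic sketch of a small TensorFlow/Keras classifier trained on video
# frames (munition vs. background), in the spirit of the pipeline described
# above. The layer sizes, input shape, and directory layout are assumptions.
import tensorflow as tf

IMG_SIZE = (128, 128)

def build_model() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # munition / not munition
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Frames extracted from the survey video, assumed split into train/ and val/ folders.
train_ds = tf.keras.utils.image_dataset_from_directory("frames/train", image_size=IMG_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory("frames/val", image_size=IMG_SIZE)
build_model().fit(train_ds, validation_data=val_ds, epochs=10)
```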
## Inspiration

Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit video game "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb!

## What it does

The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8-by-8 "invisible maze" where players must cooperate in order to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge".

## How we built it

**The Explosive**

The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, keypads, mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D-printed plates.

**The Code**

Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation.

Using the Grove LCD RGB Backlight library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was additionally used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol.

The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. Using the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8-by-8 matrix. This module further optimizes the maze hardware, minimizing the required wiring and improving signal communication.

## Challenges we ran into

Participating in the biggest hardware hackathon in Canada, using all the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt and to use components that better suited our needs, as well as being flexible with the hardware provided. Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Optimizing software and hardware for efficient resource usage was therefore a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system; to counteract this, we had to come up with the unique solutions mentioned below.
## Accomplishments that we're proud of
During the Makeathon we often faced the issue of buttons creating noise, and oftentimes that noise would disrupt the entire system. To counteract this, we had to find creative solutions that avoided buttons altogether. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module.
## What we learned
* Developing the individual modules gave us familiarity with new Arduino components like the Micro-OLED display, the 8 by 8 LED matrix, and the keypad.
* Efficient time management is essential for successfully completing the design; establishing a precise timeline for the workflow helps maintain organization and ensures successful development.
* Assigning individual tasks enhances overall group performance.
## What's next for Keep Hacking and Nobody Codes
* Eliminate any unwanted noise in the wiring between the main board and the game modules.
* Expand the range of modules by developing additional games such as a "Morse-Code Game," a "Memory Game," and others to offer more variety for players.
* Release the game to a wider audience, allowing more people to enjoy and play it.
partial
## Inspiration The Language Barrier is a huge factor that determines the efficacy of communication in a new language, not to mention the mental health impact and social exclusion that comes with not being able to effectively communicate in a foreign language. We hoped to create a Google Chrome extension that can leverage the power of the International Phonetic Alphabet (IPA), which can be learned quickly and easily in one’s native tongue, in order to assist in pronunciation. Moreover, we wanted to assist specifically in pronunciation with a visual guide along with tips to help one pronounce words and phrases on the fly, using the internet as a reading guide. We believe this can help individuals hoping to improve their pronunciation in a new language and those undergoing speech therapy. ## What it does This app is a Google Chrome extension that will take any English text found on any online webpage and translate it into the International Phonetic Alphabet, coupled with an animation! The goal is to make language learning easy, with built in diagrams to tell you exactly how to pronounce each syllable. Ideally, it would include text to speech examples to accompany the translation, but unfortunately this was not implemented at treehacks. ## How we built it The IPA+ plugin was built with a statically-built React frontend that controlled the logic for storing highlights and making API calls for translating English to IPA. We used Python scripts to facilitate the translation and hosted the platform on a Django server and database. Once translated, the resulting output is rendered with mouth animations for each syllable. IPA+ is designed with CLIP Studio Paint Pro for graphic design/animation. ## Challenges we ran into None of us were experienced in developing extensions for Chrome prior to this, so many of our issues revolved around interpreting the Chrome documentation. When it came to voice recognition, we struggled to implement a handful of speech-to-text and text-to-speech APIs, joining the front end with the back, and determining the best visuals to match the sounds. Moreover, this was our first time working with Soundify and SoundHound, which we attempted to learn from scratch during this project. ## Accomplishments that we're proud of We created a new and intuitive image set in order to assist people in developing proper pronunciation. ## What we learned We learned the benefits of IPA and the use of images to assist in speech and pronunciation. With regards to coding our project, we learned deploying React apps, making chrome extensions, making Django API calls efficient, node packaging and more! ## What's next for IPA+ We would like to implement text-to-speech voice clip examples! Additionally, in an ideal world, we would be able to take voice clips and tell you exactly where your pronunciation could improve compared to the accepted IPA standard. We understand not everyone is able to immediately understand IPA, and we hope to broaden its comprehension, so we would like to work to make this more accessible to all and more user-friendly. Eventually, we would also like to expand to include a handful of languages outside of English.
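As a rough illustration of the translation step, one open-source route is the `eng_to_ipa` package on PyPI; whether the team's Python scripts used this package or a custom mapping is not stated, so treat the dependency as an assumption:

```python
# Hedged sketch of English-to-IPA conversion; eng_to_ipa (pip install
# eng-to-ipa) stands in for whatever mapping the project's Python scripts use.
import eng_to_ipa as ipa

def to_ipa(text: str) -> str:
    """Convert highlighted English text to an IPA transcription."""
    return ipa.convert(text)

if __name__ == "__main__":
    # Prints the IPA transcription that the extension would render alongside
    # the mouth animations.
    print(to_ipa("language learning made easy"))
```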
## Inspiration Nearly 10% of all children experience speech-related disorders. But speech therapy can cost more than $31,000 each year. What if there was a simple, accessible, and affordable alternative to practicing speech/pronunciation? ## What it does SpeechMe provides an intuitive way to practice speech without spending $100 ~ $150 each year on speech therapists. It is an iOS app that allows the user to input a word to practice along with a recording of their pronunciation using our user-friendly interface, then gives feedback on the input audio using our backend algorithm. ## How we built it Our program architecture consists of three main components: 1. Swift Frontend 2. Python Backend API 3. Python Backend Analyzer The usual flow of a use case is as follows: The Swift frontend prompts the user for an input word that the user wants to practice, then allows them to record an audio file. The frontend takes these inputs and sends them to our backend API in a POST request as multi-part form data. Our Flask API receives this data and sends the audio file to AssemblyAI’s audio-to-text API. Once it receives the text version of the input audio, our backend analyzer takes in the two strings (user input word and text-converted version of the input audio) and runs the Jaro-Winkler algorithm. This algorithm returns a similarity score between 0 and 1, which we scale into an integer from 0 ~ 100 to be then returned to the frontend. Lastly, the frontend takes the return value from the POST request and displays it to the user as the score. ## Challenges we ran into Throughout this hackathon, we ran into numerous challenges. With all of us being first-time hackers and not having taken any web-dev courses, our frontend team had never touched iOS development before, and our backend team had no experience in APIs at all. Nevertheless, we decided to take on the challenge! One of our largest challenges was figuring out how to combine the SwiftUI frontend and the Python backend. Especially with the data transfer our program required from the frontend to the backend, then back to the frontend, we figured we needed to build our own API. Even so, we struggled even more with constructing POST requests to our API. Initially, we attempted to encode the audio file into a string and send JSON data with the input text and the input audio (as a binary string). However, the encoded audio file turned out to be too long, so we had to try to use multi-part form data instead. There was no easy way, however, to construct our data in the correct format for multi-part form data, and unfortunately, we were not able to successfully implement this POST request. Nevertheless, assuming successful integration, our program works as one would expect, and we were able to reach a point much further than we had imagined! ## Accomplishments that we're proud of We are very proud of our beautiful and clean user interface. It is both intuitive and simple to use, so it can be used both by children and adults. Especially given that this was our frontend team’s first time ever touching iOS development and SwiftUI, we are very proud of what we’ve accomplished! We are also proud of creating our very own API for the first time, integrated smoothly with an external API (by AssemblyAI) and a backend that perfectly runs our algorithm on the string returned from the AssemblyAI API. The moment 200 OK showed on our screen was definitely one of our most memorable moments of this hackathon. 
## What we learned We learned the most we could have ever learned in the span of two days. First, we learned how to use Swift, build an API, use an external API, and the integration process of frontend and backend. On the other hand, we also learned that when we aim high for a project, we must also be ready to pivot or take a slightly different approach at any point throughout the development process. We remained stuck on trying to get our ideal architecture to work, and it perhaps would have helped to think of other ways to make our product come alive. ## What's next for SpeechMe Our future plans are as follows! 1. **Successful backend/frontend integration**: We would love to see our app successfully working with our backend! It would be cool to finally figure out how to organize multi-part form data to send as a POST request. 2. **Personalization**: We would love to implement user personalization! We can have a login feature that saves starred words so that users can go back to saved words later on to practice them more. 3. **Text to Speech**: A text-to-speech feature that plays back what the word should sound like would be a great addition, too.
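A rough sketch of the scoring flow described above, with the AssemblyAI call stubbed out; the route name, field names, and the use of the `jellyfish` library for Jaro-Winkler are assumptions made purely for illustration:

```python
# Sketch of the backend flow: a Flask route that accepts multipart form data
# (the target word plus an audio file), gets a transcript, and returns a
# 0-100 Jaro-Winkler score. transcribe() is a placeholder for AssemblyAI.
from flask import Flask, request, jsonify
import jellyfish

app = Flask(__name__)

def transcribe(audio_bytes: bytes) -> str:
    # Placeholder for the AssemblyAI speech-to-text call; returns a canned
    # transcript so the route can be exercised without an API key.
    return "pronunciation"

@app.route("/score", methods=["POST"])
def score():
    word = request.form["word"].strip().lower()
    audio = request.files["audio"].read()
    heard = transcribe(audio).strip().lower()
    similarity = jellyfish.jaro_winkler_similarity(word, heard)  # 0.0-1.0
    return jsonify({"score": round(similarity * 100)})           # 0-100

if __name__ == "__main__":
    app.run(port=5000)
```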
## Inspiration
Since the beginning of the hackathon, all of us were interested in building something related to helping the community. Initially we began with the idea of a trash bot, but quickly realized the scope of the project would make it unrealistic. We eventually decided to work on a project that would help ease the burden on both teachers and students through technologies that not only make learning new things easier and more approachable, but also give teachers more opportunities to interact with and learn about their students.
## What it does
We built a Google Action that gives Google Assistant the ability to help the user learn a new language by quizzing the user on words from several languages, including Spanish and Mandarin. In addition to the Google Action, we also built a very PRETTY user interface that allows a user to add new words to the teacher's dictionary.
## How we built it
The Google Action was built using the Google DialogFlow Console. We designed a number of intents for the Action and implemented robust server code in Node.js and a Firebase database to control the behavior of Google Assistant. The PRETTY user interface for inserting new words into the dictionary was built using React.js along with the same Firebase database.
## Challenges we ran into
We initially wanted to implement this project by using both Android Things and a Google Home. The Google Home would control verbal interaction and the Android Things screen would display visual information, helping with the user's experience. However, we had difficulty with both components, and we eventually decided to focus more on improving the user's experience through the Google Assistant itself rather than through external hardware. We also wanted to use the Android Things display to show words on screen, to strengthen the ability to read and write. An interface is easy to code, but a PRETTY interface is not.
## Accomplishments that we're proud of
None of the members of our group were at all familiar with natural language parsing or building an interactive voice project. Yet, despite all the early and late bumps in the road, we were still able to create a robust, interactive, and useful piece of software. We all second-guessed our ability to accomplish this project several times through this process, but we persevered and built something we're all proud of. And did we mention again that our interface is PRETTY and approachable? Yes, we are THAT proud of our interface.
## What we learned
None of the members of our group were familiar with any aspects of this project. As a result, we all learned a substantial amount about natural language processing, serverless code, non-relational databases, JavaScript, Android Studio, and much more. This experience gave us exposure to a number of technologies we would've never seen otherwise, and we are all more capable because of it.
## What's next for Language Teacher
We have a number of ideas for improving and extending Language Teacher. We would like to make the conversational aspect of Language Teacher more natural. We would also like to have the capability to adjust the Action's behavior based on the student's level. Additionally, we would like to add the visual interface that we were unable to implement with Android Things. Most importantly, we want analytics of student performance and responses, to better help teachers learn about the level of their students and how best to help them.
losing
## Inspiration COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought -** we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse. Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea to use computer vision to help sensory impaired users detect emotion, we generalized our use-case to emotional analytics and real-time emotion identification for video conferencing in general. ## What it does In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed. After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion **Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities/emotion recognition difficulties.** **Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.** ## How we built it Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and Agora SDK, and imported the ML model using Tensorflow.JS. ## Challenges we ran into Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle - the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available on our repository, and with access to a PC, we're confident we would have been able to use it. ## Accomplishments that we're proud of We overcame our ML roadblock and managed to produce a fully-functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility. And our two last minute accessibility add-ons! ## What we learned It's okay - and, in fact, in the spirit of hacking - to innovate and leverage on pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective. Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important - that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees. ## What's next for empath.ly From short-term to long-term stretch goals: * We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience. 
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions. * We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user.
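For illustration, a stripped-down version of the per-frame detection loop described in "How we built it"; the model file and label set here are placeholders, not the pretrained model the team ended up using:

```python
# Sketch of the per-frame emotion loop, with illustrative names only:
# emotion_model.h5 and the 7-label list are assumptions.
import cv2
import numpy as np
import tensorflow as tf

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = tf.keras.models.load_model("emotion_model.h5")  # hypothetical file

def detect_emotions(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        probs = model.predict(face.reshape(1, 48, 48, 1) / 255.0, verbose=0)
        results.append(LABELS[int(np.argmax(probs))])
    return results

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # webcam feed
    history = []                       # per-frame emotions for the dashboard
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        history.append(detect_emotions(frame))
```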
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life. Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people, or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with Xcode. We use Apple's native vision and speech APIs to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with ngrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised, etc.
* To run Optical Character Recognition on text in the real world, which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world which the user can then query about.
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable with. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limit on how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Regenerating API keys was to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put-together app. Facebook does not have an official API for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app. ## What we learned Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack. Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis. Zak learned about building a native iOS app that communicates with a data-rich APIs. We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service. Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges. ## What's next for Sight If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app. Ultimately, we plan to host the back-end on Google App Engine.
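Sight's back-end is written in Go, but the same three Google Vision calls it describes can be sketched in Python for illustration (credential setup and the input file name are assumptions):

```python
# Python sketch of the three Vision calls used by Sight (face sentiment,
# OCR, labels); the real service makes these calls from Go.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS

def describe(image_bytes: bytes) -> dict:
    image = vision.Image(content=image_bytes)
    faces = client.face_detection(image=image).face_annotations
    text = client.text_detection(image=image).text_annotations
    labels = client.label_detection(image=image).label_annotations
    return {
        "joy": [f.joy_likelihood.name for f in faces],      # face sentiment
        "text": text[0].description if text else "",        # OCR result
        "labels": [l.description for l in labels[:5]],       # surroundings
    }

if __name__ == "__main__":
    with open("scene.jpg", "rb") as fh:   # hypothetical input image
        print(describe(fh.read()))
```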
## Inspiration All three members of the team have had personal experiences with people on the autism spectrum, and seen firsthand their struggles with deciphering emotions. With the recent coronavirus pandemic, the use of online conference programs such as Zoom have exacerbated the issue of reading emotional cues. This is not only a problem for the 75+ million people struggling with ASD worldwide, but also for people with alexithymia, a condition that affects individuals’ ability to understand emotion (which makes up 10% of the world population). Our team decided to solve this problem through the development of Evatone, a tool that provides assistance in emotion identification on video conferencing platforms. ## What it does Our tool is emotion detection software specifically incorporated for video conferencing. It allows users to see what major expressions they elicit as they speak and identify major emotions expressed by the other participants in the video conference. This appears as labels on video participants’ faces, allowing people with Autism and alexithymia to quickly gauge the emotions of others in the meeting room. ## How we built it We created a live webcam feed using Typescript (part of our front end) that takes in video input. At set intervals, we send a frame from the video as a JPG image to the Python backend using Flask REST APIs. In our backend, we used the Hume Expression Measurement streaming API, along with a web socket to maintain an open connection, to analyze the facial expression of the frames in real-time, and detect the emotion. We parsed the output of Hume’s list of emotions detected to include only the emotion with the highest score (as this represented the dominant emotion), and then sent this data back to the front end (Typescript) to display it on our live webcam feed. Using cv2’s face detection model paired with div elements part of our html, we then overlay a red box over the face detected in the webcam with the dominant emotion label. ## Challenges we ran into One challenge we ran into was ensuring that the live webcam feed was running the entire time our application was running. Originally, we built out both the live webcam feed and the backend with Python, using Python’s cv2 library to capture the frames. However, we found that the cv2 webcam frames conflicted with how we were sending frames to Hume’s API. Thus, we transitioned to using Typescript for the webcam and Python for the backend, which allowed us to run the webcam feed the whole time while simultaneously sending frames to our model. ## Accomplishments that we're proud of We are proud of being able to connect the front end and back end of our code, seamlessly presenting our backend output and data in a more presentable way and with a much better user interface, using front-end code. We are also proud that we were able to learn how to use Hume’s API. On Saturday, we spent hours at Hume’s table debugging our code and learning how web sockets work to allow real-time continuous detection of emotion in facial expressions. We’re really proud that, while we ran into challenges with using Hume’s streaming API, we pushed through, asked questions, and got our final output! ## What we learned We learned how to work with APIs, as we navigated the Hume API to incorporate it into our code for facial emotion detection. We learned about web sockets and how they allow for a continuous connection. 
We also learned how to code in TypeScript and how to use Flask as a framework for connecting HTML/CSS/JavaScript with our backend Python code using POST and GET. ## What's next for Evatone We envision Evatone to be incorporated into online meeting platforms like Zoom, Google Meet, and Microsoft Teams. Furthermore, we see Evatone increasing accessibility for people with autism across the online sphere. We have thought about making a Chrome extension that can scan for any face currently on screen, whether that belongs to a Zoom call or a YouTube video, and similarly detect the emotion of the face, helping people with autism navigate online social interactions with ease.
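A minimal sketch of the Flask piece of this flow; the Hume Expression Measurement call is stubbed out rather than reproduced, and the route and field names are illustrative:

```python
# Sketch of the Flask endpoint that receives a webcam frame as JPEG, finds
# the face box with cv2, and returns a dominant emotion. analyze_frame() is
# a placeholder for the Hume streaming API call.
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def analyze_frame(jpeg_bytes: bytes) -> str:
    # Placeholder: would send the frame over the Hume web socket and return
    # the emotion with the highest score.
    return "calm"

@app.route("/frame", methods=["POST"])
def frame():
    jpeg = request.files["frame"].read()
    img = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = face_cascade.detectMultiScale(gray, 1.3, 5)
    box = boxes[0].tolist() if len(boxes) else None   # [x, y, w, h] overlay
    return jsonify({"emotion": analyze_frame(jpeg), "face_box": box})

if __name__ == "__main__":
    app.run(port=5000)
```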
winning
## Inspiration We wanted to be able to connect with mentors. There are very few opportunities to do that outside of LinkedIn where many of the mentors are in a foreign field to our interests'. ## What it does A networking website that connects mentors with mentees. It uses a weighted matching algorithm based on mentors' specializations and mentees' interests to prioritize matches. ## How we built it Google Firebase is used for our NoSQL database which holds all user data. The other website elements were programmed using JavaScript and HTML. ## Challenges we ran into There was no suitable matching algorithm module on Node.js that did not have version mismatches so we abandoned Node.js and programmed our own weighted matching algorithm. Also, our functions did not work since our code completed execution before Google Firebase returned the data from its API call, so we had to make all of our functions asynchronous. ## Accomplishments that we're proud of We programmed our own weighted matching algorithm based on interest and specialization. Also, we refactored our entire code to make it suitable for asynchronous execution. ## What we learned We learned how to use Google Firebase, Node.js and JavaScript from scratch. Additionally, we learned advanced programming concepts such as asynchronous programming. ## What's next for Pyre We would like to add interactive elements such as integrated text chat between matched members. Additionally, we would like to incorporate distance between mentor and mentee into our matching algorithm.
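Pyre's matching algorithm is written in JavaScript; purely as an illustration, here is one greedy way such weighted matching could look in Python (the weights and the greedy strategy are assumptions, not Pyre's exact logic):

```python
# Toy weighted mentor-mentee matching: score shared tags, weighting the
# mentee's top interest more heavily, then greedily pair the best mentor.

def score(mentor: dict, mentee: dict) -> int:
    shared = set(mentor["specializations"]) & set(mentee["interests"])
    # A primary-interest match counts 3, any other shared tag counts 1.
    return sum(3 if tag == mentee["interests"][0] else 1 for tag in shared)

def match(mentors: list[dict], mentees: list[dict]) -> list[tuple[str, str]]:
    pairs = []
    free_mentors = list(mentors)
    for mentee in mentees:
        if not free_mentors:
            break
        best = max(free_mentors, key=lambda m: score(m, mentee))
        if score(best, mentee) > 0:
            pairs.append((best["name"], mentee["name"]))
            free_mentors.remove(best)
    return pairs

if __name__ == "__main__":
    mentors = [{"name": "Ada", "specializations": ["ml", "stats"]},
               {"name": "Grace", "specializations": ["compilers"]}]
    mentees = [{"name": "Sam", "interests": ["ml", "web"]}]
    print(match(mentors, mentees))   # [('Ada', 'Sam')]
```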
## Inspiration Civic authorities are constantly working to improve transport for their cities. Their efforts, however, often lack an important piece: access to accurate data about how current transport systems are used. A better understanding of how people commute, share rides, and take public transport today will inform how these systems can be better tomorrow. ## What it does In essence, Argonaut is a marketplace: it helps users part with any personal data that they're willing to share, in return for monetary incentives in the form of Algorand coins (Algos). Authorities can buy data packs - which are anonymised and aggregated clusters of such data submitted by multiple users, for a price, which is then distributed amongst each of the contributors equally. Here's the key: the blockchain keeps a track record of what organisation is requesting and accessing what data, even while maintaining complete anonymity on the part of the sellers. ## How I built it We used React.js to build the frontend, a Node.js/Express powered backend; Python to scrape through personal data dumps accessed via Google Takeout, and the Algorand JavaScript SDK to implement the blockchain-based transaction management system. ## Challenges I ran into The principal challenge we faced was how to implement Blockchain for managing access to personal data. Another important challenge was to do with making sure that the anonymity of users sharing their data is maintained, whilst also keeping any shared data private. ## Accomplishments that I'm proud of This was our first deep dive into Blockchain technology, and we were glad to be able to use Algorand's APIs and Dev Tools to be able to build a platform aimed at enhancing urban decision making, while simultaneously helping people like you and us take back control of their personal data. ## What I learned We learned about the power of the blockchain, and how decentralised ledgers have applications far and beyond the traditional financial markets that we've currently seen them in. ## What's next for Argonaut Argonaut can be scaled up to include the ability for users to connect Google Maps/Uber/Lyft and a lot of other apps directly to the platform so that their periodic data dumps can be seamlessly and automatically streamed into the platform, versus the current requirement of having to upload personal data dumps.
## Inspiration
During these trying times, the pandemic has impacted many people by isolating them in their homes. People are not able to socialize like they used to and find people they can relate with. For example, students who are transitioning to college or a new school where they don't know anyone. Matcher aims to improve students' mental health by matching them with people who share similar interests and allowing them to communicate. Overall, its goal is to connect people across the world.
## What it does
The user first logs in and answers a series of comprehensive, research-backed questions (AI-determined questions) to determine his/her personality type. Then, we use machine learning to match people and connect them. Users can email each other after they are matched! Our custom machine learning pipeline uses the K-Means algorithm and Random Forest to study people's personalities.
## How we built it
We used React on the front end, Firebase for authentication and storage, and Python for the server and machine learning.
## Challenges we ran into
We all faced unique challenges, but losing one member midway really dampened our spirits and limited our potential.
* Gordon: I was new to Firebase and I didn't follow the right program flow in the first half of the hackathon.
* Lucia: The challenge I ran into was trying to figure out how to properly route the web pages together in React. Also, how to integrate the Firebase database on the front end, since I had never used it before.
* Anindya: Time management.
## Accomplishments that we're proud of
We are proud that we were able to persevere after losing a member while still managing to achieve a lot. We are also proud that we showed resiliency when we realized that we had messed up our program flow midway and had to start over from the beginning. We are happy that we learned and implemented new technologies that we have never used before. Our hard work and perseverance resulted in an app that is useful and will make an impact on people's lives!
## What we learned
We believe that what doesn't kill you makes you stronger.
* Gordon: After chatting with mentors, I learned about SWE practices, the Firebase flow, and Flask. I also learned to handle setbacks and failure after wasting 10 hours.
* Lucia: I learned about Firebase and how to integrate it into a React front end. I also learned more about how to use React Hooks!
* Anindya: I learned how to study unique properties of data using unsupervised learning methods. I also learned how to integrate Firebase with Python.
## What's next for Matcher
We would like to finish our web app by completing our integration of the Firebase Realtime Database. We plan to add social networking features such as messaging and video chat, which will allow users to communicate with each other on the web app. This will let them discuss their interests with one another right on our site! We would also like to make this project accessible on multiple platforms, such as mobile.
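As a hedged sketch of the clustering half of this pipeline (the team's exact features, number of clusters, and use of Random Forest are not reproduced here), K-Means over encoded questionnaire answers might look like this:

```python
# Minimal sketch: cluster users by their numeric questionnaire answers and
# propose matches from within the same cluster. Feature count, k, and the
# "match within your cluster" rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(answers: np.ndarray, k: int = 4) -> np.ndarray:
    """answers: (n_users, n_questions) matrix of scaled responses."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(answers)

def candidates(user_idx: int, labels: np.ndarray) -> list[int]:
    """Other users in the same personality cluster are match candidates."""
    return [i for i, lbl in enumerate(labels)
            if lbl == labels[user_idx] and i != user_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    answers = rng.integers(1, 6, size=(20, 10)).astype(float)  # 1-5 scale
    labels = cluster_users(answers)
    print(candidates(0, labels))
```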
partial
## Inspiration Given the recent news developments surrounding *GameStop stocks* and the phenomenon of small shareholder-based short-squeezing, we were really interested in *the stock market* and why Wall Street was turned upside down. Unfortunately, *in school, there aren't a lot of opportunities to learn about how stock and shares work*, so we quickly felt overwhelmed with the wealth of information on the Internet. We felt like this was *a great opportunity to encourage kids to become more business-savvy from a young age*. ## What it does Monkey Market is a simple and easy-to-play game on our website that allows you to **buy and sell shares, called "bananas"**, all while keeping an eye on patterns in the prices and randomized news events. The goal is to **make as much money as you can with the 5000 Monkey Money (MM)** that you're given in the beginning. Keep track of the Monkey Monkey you've got and try to **buy and sell at the right moments to increase your overall net worth and Monkey Money** and be the richest business monkey ever! ## How we built it Monkey Market was built with **front-end languages (HTML, CSS, and JavaScript)**. We made use of the **Google Chart API** to display our data using **multidimensional objects to store our stock data, creating functions that handled the math behind buying and selling, randomizing stock fluctuations, timing price updates, and randomizing news events that impacted prices using the various JS functions**. Simple **CSS animations** were also used, as well as **vectr.com to create SVGs** for the website and the digital art software **Krita for our home page's hero image**. ## Challenges we ran into We had a lot of difficulty **implementing the chart properly**. It was definitely one of the biggest challenges of the whole project, since we'd never worked with the API before! We struggled with getting a chart up at all, and then making the chart display our information was even harder. The bulk of development time was probably spent working on making the chart continually update with the new information. Along with the chart, we had trouble **implementing the scrolling frame on the side that would show price fluctuations and inventory, as well as the updating container that showed the next update, next day, etc.** We also struggled with the randomized news events. Unfortunately, we could not get company-specific events to work and had to settle for news events that impacted all company stock prices. Another difficulty we faced was **upping our HTML/CSS game**. We wanted our website to reflect its young audience and give it a fun and colourful vibe; to do this, we had to use CSS animations for the first time and come up with a monkey-themed colour palette. This became a bit of a challenge as we tried to figure out how much animation was too much, and we had to try to create a website that was still aesthetically-pleasing while suitable for children. ## Accomplishments that we're proud of We are really proud of the entire website as a whole. As this is just our second hackathon (and the first with just the two of us), we are beyond happy with our final product. **Getting the chart API and the stock data working** seemed almost impossible to us at first, but in the end, we achieved everything we had wanted to accomplish. We're also immensely proud of how the **website's appearance** turned out. It looks fun and silly, but it definitely looks pretty put-together! 
This was also our first time **making vector graphics**, and despite their simplicity, we feel like it adds a lot to the website and really ties off the whole monkey theme. It's way better than our first hackathon project, and we're so happy that we managed to do this with just the two of us working together. We're also very happy with **our presentation video**, since it's the first time we used a video to present our project. It's just as light-hearted as our project, and we love how it turned out! ## What we learned We learned many new things about **working with JavaScript**. Though we both have experience working with Java, JS is different enough that we could only apply so much of our Java knowledge. We're a lot more familiar with **working with APIs** in JS now, and feel confident about using this new skill in future hackathon and even school projects. Furthermore, we're really excited to continue **developing beautiful websites with CSS**. This website was the first website we've developed where we've used a significant amount of simple animations to liven up the pages. We feel as though we've both learned a lot about front-end development from this project. ## What's next for Monkey Market Because we could not get the **company-specific randomized events** to work, that's definitely our next goal. We would love if we could also start implementing more complicated features in Monkey Market, like the idea of **borrowing stocks**. **Real-time stock fluctuations** would also be interesting, and we're thinking of even adding a **database to allow for accounts** that can hold onto your progress in Monkey Market! (That last one would be especially difficult, but we're willing to try!)
## Inspiration
A couple weeks ago, we went to play bingo. The more elderly participants were lightning quick, even with many bingo cards in front of them. Once, we won one of the rounds but were too slow to notice! Not to be outdone, we wanted to make a robot that could beat any person at bingo.
## What it does
The BingoBot is a bingo machine that can be activated by voice. It takes instructions from the caller and marks the corresponding number on the bingo sheet. It has an x-axis and a y-axis which direct the bingo dabber to the right number, and a mechanism on the z-axis makes the dabber stamp the designated spot, erasing the chance of missing numbers and losing out on your winnings.
## How I built it
A CAD model was first constructed for visualization purposes. Most of the physical components were purchased from Home Depot before the hack. The machining of the parts was done during the hack in the university machine shop. The movement of the two sliders was made possible by belts mounted on a motor and a gear on each side. The software uses React for the front end with a Flask back end supporting it. The program uses Tesseract and the Google Speech API to correctly recognize the numbers being called.
## Challenges I ran into
Mechanical and software challenges:
1. The sliders for the x and y coordinates were sensitive to the position of the support, and if the framework was not precisely positioned, the slider would not move smoothly.
2. We needed to design a mechanism for the bingo dabber to press down, but we encountered problems such as not getting enough force from the z-axis motor.
3. The Arduino had some problems, but we later realized there was a small mistake in our code.
4. Tesseract was challenging to use correctly, requiring the right filters to cancel out noise in a photo. The gridlines of the bingo square were especially tricky to remove.
## Accomplishments that I'm proud of
1. We got the sliders to work!
2. We built all the components from scratch.
3. Each person on the team learned something new.
## What I learned
1. Trust your teammates.
2. Take videos of your project in case it suddenly stops working.
## What's next for BingoBot
Faster motors will allow for better utility in real-life scenarios. Currently, the motors are not strong enough to move the belts easily. With new motors, the BingoBot will be better than ever! Lastly, we believe we can make bingo more fun for all ages.
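A rough sketch of the card-reading step, assuming pytesseract and OpenCV stand in for the team's exact Tesseract filters; the file name and digit-whitelist config are illustrative:

```python
# Sketch: locate each printed number on a photo of the bingo card so the
# gantry knows where to send the dabber. File names and the tesseract
# config are assumptions, not the team's exact pipeline.
import cv2
import pytesseract
from pytesseract import Output

def locate_numbers(card_image_path: str) -> dict[str, tuple[int, int]]:
    img = cv2.imread(card_image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    data = pytesseract.image_to_data(
        thresh, config="--psm 6 -c tessedit_char_whitelist=0123456789",
        output_type=Output.DICT)
    centers = {}
    for text, x, y, w, h in zip(data["text"], data["left"], data["top"],
                                data["width"], data["height"]):
        if text.strip().isdigit():
            centers[text.strip()] = (x + w // 2, y + h // 2)  # pixel center
    return centers

if __name__ == "__main__":
    spots = locate_numbers("bingo_card.jpg")   # hypothetical card photo
    print(spots.get("42"))                     # where to send the dabber
```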
## Inspiration While searching for ideas, our team came across an ECG dataset that was classified and labeled into categories for normal and abnormal patterns. While examining the abnormal patterns, it was observed that most seizure patterns had a small window that transitioned from a typical pattern to a seizure pattern around a 15-second window. Most of the accidents and damage in seizures are caused by falling, lack of help, or getting caught in disadvantaged situations -driving, cooking, etc.- detecting this short period in real-time and predicting a seizure with machine learning to warn the user seemed as somewhat of a viable solution. After this initial ideation, sharing patient data such as allergies, medicinal history, emergency contacts, previous seizures, and essential information at the moment of attack for the emergency workers was thought out in the case of unresponsiveness from the user to the app's notification. ## What does it do? The system contains the following: Three main agents. A smartwatch with an accurate ECG sensor. A machine learning algorithm on the cloud. An integrated notification app on mobile phones to retrieve patient information during attacks. The workflow includes a constant data transfer between the smartwatch and the machine learning algorithm to detect anomalies. If an attack is predicted, a notification prompts the user to check if this is a false positive prediction. Nothing is triggered if the user confirms nothing is wrong and dismisses the warning. The seizure protocol starts if the notification stays unattended or is answered as positive. Seizure Protocol Includes: -The user is warned by the prediction and should have found a safe space/position/situation to handle the seizure -Alarms from both synced devices, mobile, and smartwatch -Display of the FHIR patient history on the synced device, allergies, medicinal data, and fundamental id info for emergency healthcare workers -Contacting emergency numbers recorded for the individual With the help of the app, we prevent further damage by accidents by predicting the seizure. After the episode, we help the emergency workers have a smoother experience assisting the patient. ## Building the System We attempted to use Zepp's smartwatch development environment to create a smartwatch app to track and communicate with the cloud (Though there have been problems with the ECG sensors, which will be mentioned in the challenges section.) For the machine learning algorithm, we used an LSTM model (Long-Short Term Memory Networks) to slice up the continuously fed data and classify between "normal" and "abnormal" states after training it on both Mit-Bih Epileptic Seizure Recognition datasets we have found. If the "abnormal" form has been observed for more than the threshold, we have classified it as a "seizure predictor." When the state changed to the seizure protocol, we had two ways of information transfer, one is to the smartwatch as an alarm/notification, and the other one to the synced mobile app to display information. For the mobile app, we have created a React Native app for the users to create profiles and transfer/display health information via the InterSystem's FHIR.js package. While in the "listening" state, the app waits for the seizure notification. When it receives it, it fetches and displays health information/history/emergency contacts and anything that can be useful to the emergency healthcare worker on the lock screen without unlocking the phone. 
Thus, providing a safer and smoother experience for the patient and the healthcare workers. ## Challenges There have been several challenges and problems that we have encountered in this project. Some of them stayed unresolved, and some of them received quick fixes. The first problem was using the ECG function of the Zepp watches. Because the watch ECG function was disabled in the U.S. due to a legal issue with the FDA, we could not pull up live data in the Hackathon. We resolved this issue by finding a premade ECG dataset and doing the train-test-validation on this premade dataset for the sake of providing a somewhat performing model for the Hackathon. The second problem we encountered was that we could only measure our accuracy with our relatively mid-sized dataset. In the future, testing it with various datasets, trying sequential algorithms, and optimizing layers and performance would be advised. In the current state, without a live information feed and a comprehensive dataset, it is hard to be sure about the issues of overfitting/underfitting the dataset. ## Accomplishments We could create a viable machine learning algorithm to predict seizures in a concise time frame, which took a lot of effort, research, and trials, especially in the beginning since we switched from plain RNN to LSTM due to the short-time frame problem. However, our algorithm works with a plausible accuracy (Keeping in mind that we cannot check for overfitting/underfitting without a diverse dataset). Another achievement we are proud of is that we attempted to build a project with many branches, like ReactApp, Zepp Integration, and Machine Learning in Python, which forced us to experience a product-development process in a super-dense mode. But most importantly, attending the Hackathon and meeting with amazing people that both organized, supported, and competed in it was an achievement to appreciate! ## Points to Take Home The most discussed point we learned was that integrating many APIs is a rather daunting process in terms of developing something within 24 hours. It was much harder to adapt and link these different technologies together, even though we had anticipated it before attempting it. The second point we learned was that we needed to be careful about our resources during the challenges. Especially our assumption about the live-data feed from the watch made us stumble in the development process a bit. However, these problems make Hackathons a learning experience, so it's all good! ## Future for PulseBud The plans might include sharpening the ML with a variety of dense and large-scale datasets and optimizing the prediction methodology to reach the lowest latency with the highest accuracy. We might also try to run it on the watch itself if it can get a robust state like that. Also, setting personalized thresholds for each user would be much more efficient in terms of notification frequency if the person is an outlier. Also, handling the live data feed to the algorithm should be the priority. If these can be done to the full extent, this application can be a very comfortable quality of life change for many people who experience or might experience seizures.
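A minimal sketch of an LSTM window classifier in the spirit of the model described above; the 178-sample window length, layer sizes, and the consecutive-window alarm rule are assumptions, not PulseBud's exact configuration:

```python
# Illustrative LSTM over fixed-length ECG windows, labeled normal/abnormal,
# plus a simple rule that flags a likely seizure after several abnormal
# windows in a row. All sizes and thresholds are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(window_len: int = 178) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(window_len, 1)),     # one reading per timestep
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # normal vs. abnormal window
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def seizure_suspected(window_preds, threshold: float = 0.5, run: int = 5):
    """Flag a likely seizure once `run` consecutive windows look abnormal."""
    streak = 0
    for p in window_preds:
        streak = streak + 1 if p >= threshold else 0
        if streak >= run:
            return True
    return False
```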
losing
## Inspiration: As per the Stats provided by Annual Disability Statistics Compendium, 19,344,883 civilian veterans ages 18 years and over live in the community in 2013, of which 5,522,589 were individuals with disabilities . DAV - Disabled American Veterans organization has spent about $ 61.8 million to buy and operate vehicles to act as a transit service for veterans but the reach of this program is limited. Following these stats we wanted to support Veterans with something more feasible and efficient. ## What it does: It is a web application that will serve as a common platform between DAV and Uber. Instead of spending a huge amount on buying cars the DAV instead pay Uber and Uber will then provide free rides to veterans. Any veteran can register with his Veteran ID and SSN. During the application process our Portal matches the details with DAV to prevent non-veterans from using this service. After registration, Veterans can request rides on our website, that uses Uber API and can commute free. ## How we built it: We used the following technologies: Uber API ,Google Maps, Directions, and Geocoding APIs, WAMP as local server. Boot-Strap to create website, php-MyAdmin to maintain SQL database and webpages are designed using HTML, CSS, Javascript, Python script etc. ## Challenges we ran into: Using Uber API effectively, by parsing through data and code to make javascript files that use the API endpoints. Also, Uber API has problematic network/server permission issues. Another challenge was to figure out the misuse of this service by non-veterans. To save that, we created a dummy Database, where each Veteran-ID is associated with corresponding 4 digits SSN. The pair is matched when user registers for free Uber rides. For real-time application, the same data can be provided by DAV and that can be used to authenticate a Veteran. ## Accomplishments that we're proud of: Finishing the project well in time, almost 4 hours before. From a team of strangers, brainstorming ideas for hours and then have a finished product in less than 24 hours. ## What we learned: We learnt to use third party APIs and gained more experience in web-development. ## What's next for VeTransit: We plan to launch a smartphone app that will be developed for the same service. It will also include Speech recognition. We will display location services for nearby hospitals and medical facilities based on veteran’s needs. Using APIs of online job providers, veterans will receive data on jobs. To access the website, Please register as user first. During that process, It will ask Veteran-ID and four digits of SSN. The pair should match for successful registration. Please use one of the following key pairs from our Dummy Data, to do that: VET00104 0659 VET00105 0705 VET00106 0931 VET00107 0978 VET00108 0307 VET00109 0674
# Don't Dis My Ability ## 💡 Inspiration Post pandemic has seen a tremendous surge in people's inclination towards solo traveling and "workstation" and the era of solo traveling has also experienced a rise in specially-abled solo travelers stepping out and exploring the world. In researching the solo travel experiences and issues faced by specially-abled travelers, we identified two major problems faced by the people which are: i) Getting Money and Using The ATM- Even after carrying sufficient local currency, many specially-abled travelers are bound to use the ATM at least once to retrieve cash and in situations like these, many travelers have to either reach out to a fellow traveler or a staff member of accommodation and trusting them with sensitive banking information. ii) Difficulty in ordering food- Travelling to countries with diverse cultures is often reflected in the diverse variety of dishes available and this can often cause a problem of not knowing what to eat especially for a specially-abled person having vision and hearing impairments. ## 💻 What it does To make solo travel convenient for specially-abled people we bring to you "Don't Dis My Ability" a web app working on ensuring a smooth and easy-going solo travel experience for the differently-abled people. The website uses Google Vision API for OCR implementation to retrieve the necessary information which is then converted to speech for ease in understanding. The information retrieved is also converted to required language. This also ensures the screen reading technology is not being fooled and only focuses on necessary information making the process of ordering food at a restaurant simplified for the specially-abled traveler. The web app also allows micropayment to bypass the hassle of making visits to the ATMs and sharing sensitive banking information with strangers for help by allowing direct payment from native currency via payment gateways. ## ⚙️ How we built it * Figma: For design * DeSo: For user authentication * React.js: For frontend * Python: For backend * Google Vision API: OCR * Payment Gateway: Razorpay API (It provides lots of features like UPI, direct payment from native currency etc.) * Text to Speech: react-speech-kit * For multilingual: i18n ## ✈ Travel Track Our team embarked on a journey to create an adaptive user interface for specially-abled people that made it easier for them to use technology whenever they travel abroad. We spearheaded the project by creating a platform that allows users to: * Upload the menu and extract the information from it * Convert the extracted text from the menu into speech * Allow micropayments to be made using Razorpay * Classify the food as vegetarian or non-vegetarian * Get nutrition information about the food, such as its ingredients and its calories * Change the language of the website to the user's preferred language ## 📚 Research Research is paramount to gaining a full understanding of the user and their needs. Beyond our own experiences, we needed to dig deeper into the web, outside of our network, and find both organizations that could shed light on how better to help our targeted users as well as to conduct research into other similar applications or products. This was crucial to avoid re-inventing the wheel and wasting valuable time and resources. 
Here are a few of the resources that were helpful to us: * <https://www.narayanseva.org/blog/10-problems-faced-by-people-with-disabilities> * <https://www.sagetraveling.com/25-things-that-can-go-wrong-traveling-with-a-disability> * <https://www.digitalartsonline.co.uk/features/interactive-design/how-design-websites-for-disabled-people-in-2017/p> ## 🤝 Most Creative Use of GitHub We are using GitHub for the following reasons: * **Collaboration**: GitHub makes it easy to share code with others and helps a lot in collaboration. * **GitHub Project**: We also used GitHub for planning and keeping track of our project and its progress with the help of the GitHub project management tool. * **Implementing the CI/CD workflow**: GitHub makes it easy to implement the CI/CD workflow and makes the deployment process easy. * **Deploying the project**: Deploying the project on GitHub helped us to get the project deployed on the network to be accessed by other people. * **Using PRs and Issues**: We are doing multiple PRs and building multiple issues to keep on track of the project. ## 🔐 Best Use of DeSo We are using **DeSo** to make a secure user authentication. DeSo is the first Layer 1 blockchain custom-built for decentralized social media applications. ## 🌐 Best Domain Name from Domain.com * dontdismyability.tech ## 🧠 Challenges we ran into Due to the difference in the time zone it was a bit difficult to collaborate with other developers in the team but we managed to get the project done in time. Complete the project in the given time frame. ## Accomplishments that we're proud of Our team embarked on a journey to create an adaptive user interface for specially-abled people that made it easier for them to use technology whenever they travel abroad. We spearheaded the project by creating a platform that allows users to: * Upload the menu and extract the information from it * Convert the extracted text from the menu into speech * Allow micropayments to be made using Razorpay * Change the language of the website to the user's preferred language ## 📖 What we learned * Collaboration with other developers. * Implementing the payment gateway and google cloud ## 🚀 What's next for Don't Dis My Ability * Building a mobile app for the project. * Simplifying the process of accommodation by providing a proper description of rooms, advancing eliminating the graphic capture and overcrowding of webpages, following international standards on color contrasts, etc. * Using NLP and ML to simplify the whole travel experience on the data being retrieved using vision AI API
## Inspiration
One of our teammate's grandfathers suffers from diabetic retinopathy, which causes severe vision loss. Looking at the broader scale, over 2.2 billion people suffer from near or distance vision impairment worldwide. After examining the issue more closely, it can be confirmed that it disproportionately affects people over the age of 50. We wanted to create a solution that would help them navigate the complex world independently.
## What it does
### Object Identification
Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback.
### Facial Recognition
Employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment.
### Interactive Question Answering
Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics.
### Voice Commands
Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra.
## How we built it
* Python
* OpenCV
* GCP & Firebase
* Google Maps API, Google Pyttsx3, Google's VERTEX AI Toolkit (removed later due to inefficiency)
## Challenges we ran into
* Slow response times with Google products, resulting in some replacements of services (e.g. Pyttsx3 was replaced by a faster, offline NLP model from Vosk).
* Due to the hardware capabilities of our low-end laptops, there is some amount of lag and slowness in the software, with average response times of 7-8 seconds.
* Due to strict security measures and product design, we faced a lack of flexibility in working with the Maps API. After working together and viewing some tutorials, we learned how to integrate Google Maps into the dashboard.
## Accomplishments that we're proud of
We are proud that by the end of the hacking period, we had a working prototype and software, and that both were able to integrate properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off.
## What we learned
* How to work with a variety of Google Cloud-based tools and how to overcome the potential challenges they pose to beginner users.
* How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
* How to create Docker containers to deploy Google Cloud-based Flask applications to host our dashboard.
* How to develop Firebase Cloud Functions to implement cron jobs; we tried to develop a cron job that would send alerts to the user.
## What's next for Saight
### Optimizing the Response Time
Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments.
### Testing Various Materials for the Mount
The physical prototype of the mount was mainly a proof of concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost, and durability will all play a role in this decision.
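As an illustrative sketch of the offline voice-command path (the model directory, file name, and command vocabulary here are assumptions), a Vosk model can transcribe a recorded command like this:

```python
# Hedged sketch: transcribe a recorded voice command with an offline Vosk
# model, then route it to the assistant. Paths and keywords are placeholders.
import json
import wave
from vosk import Model, KaldiRecognizer

def transcribe_command(wav_path: str,
                       model_dir: str = "vosk-model-small-en-us") -> str:
    wf = wave.open(wav_path, "rb")                 # 16 kHz mono PCM expected
    rec = KaldiRecognizer(Model(model_dir), wf.getframerate())
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        rec.AcceptWaveform(data)
    return json.loads(rec.FinalResult()).get("text", "")

if __name__ == "__main__":
    command = transcribe_command("command.wav")    # hypothetical recording
    if "describe" in command:
        print("User asked Sierra to describe the scene")
```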
partial
## Inspiration The idea for 🐌 Snail's Pace came halfway through the Hackathon, in response to a 🐌 bug that we had been struggling with during our initial project. Initially, we aimed to create an AR navigation app that centered around deploying AR beacons at the user's destination. However, we were unable to get our virtual beacon to stay in place; rather, it persistently followed the user's movements. Combining this with a shelved earlier idea we had for gamifying exercise, 🐌 Snail's Pace was born -- a game where you're constantly being followed around by a vaguely malicious 🐌 snail. ## What it does 🐌 Snail's Pace is an AR game that randomly generates a 3D 🐌 snail onto the users surroundings using their built-in camera. The 🐌 snail will slowly get closer, unabated by blocks and obstacles, and your goal is to avoid it at all costs! You gain points for avoiding it for longer periods of time, so make sure you stay as far away as possible! 🐌 Snail's Pace is more than just a fun game; it also encourages physical activity and promotes active lifestyles, in a similar way to other apps like Run, Zombies! Although the 🐌 snail is too slow to catch up to you while you're moving, watch out! It'll easily tag you if you're just lazing about! 🐌 Snail's Pace encourages you to keep moving and stay healthy. ## How it was built We created 🐌 Snail's Pace using the Unity development environment in conjunction with Unity's AR Foundations toolkit. We chose Unity Game Engine for its 3D capabilities and its aforementioned builtin AR development toolkit. Using the Unity editor, we created GameObjects and AR components, using C# scripting to add programmable functionality and logic to our game. For the bulk of the spawning and tracking aspects, we used the Unity AR+GPS Location, a third-party plugin designed for GPS integration. GitHub served as our platform for collaborative coding efforts and version control. ## Challenges 🐌 Snail's Pace's journey was fraught with challenges. In the beginning, becoming familiar with C# and Unity posed an early challenge, as most of the team was not familiar with either technology. Our first choice of framework, Google's ARCore Geospatial API, was curtailed due to the reliance of Google's deprecated Unity APIs. This prompted us to shift to AR+GPS and revert to an older version of AR Foundation, a decision that notably improved our project's viability. A surprisingly difficult issue was the implementation of snail pursuit. Using AR+GPS to track the snail in real time proved difficult due to the inaccuracies in GPS tracking. We ended up completely rewriting the pursuit system with Unity's in-game world coordinates instead, which finally achieved the effect we needed. ## Achievements We're quite proud of the fully integrated systems we ended up putting up in place for 🐌 Snail's Pace, especially considering how little time we had after pivoting quite late into the second day of the hackathon. 🐌 Snail's Pace also ended up being really fun to design, make, and play! We were really happy with how the gameplay turned out, with the 🐌 snail being a legitimately thrilling threat despite the silly basis. ## What we learned We learned a lot working on 🐌 Snail's Pace! We're now much stronger at C# and Unity, for sure. We also learned a good deal about Google APIs and how to utilize them (with a focus on ARCore). We also learned how to utilize GitHub with Unity, something that's been rather difficult in the past. 
Of course, we also learned a ton about AR in general, from AR Cameras to Lighting to Geospatial events and more! ## What's next? Like the land 🐌 snail Cepaea nemoralis, 🐌 Snail's Pace's future is diverse and bright! We're hoping to expand the number and type of "pursuers" we support, and add an in-game rewards system to facilitate the earning of those followers. We also hope to add leaderboard features, allowing those with impossibly high 🐌 snail evasion scores to brag like the champions they are! An exciting future prospect involves making the 🐌 snail tracking work without the app open, keeping those even out of game on their toes! Keep an eye(stalk) out for more from us in the world of moving molluscs!
## Inspiration We wanted to build something to support people in their fitness goals while making the experience fun and adventurous! The idea behind this app was to create a real-life side-scrolling game that would emulate the user’s every step which is reflected in the game. This would contribute to their overall well-being, and their progress can be shared amongst friends. ## What it does The user will be able to collect coins and complete quests by defeating monsters. This is achieved by prompting the user to take photos of real life objects that allow them to expose the monster’s weakness. For example, the fire dragon is defenseless against water based objects. Correct photo submissions of entities range from a variety of water based objects including water bottles, fire hydrants, and ice cubes. The Clarifai API will recognize these submissions and would then post a message on whether or not the user has successfully defeated the monster. ## How we built it We utilized Microsoft Azure to set up the database for our leaderboard and host the page that displays the game leaderboard. The Android app was built using Cordova and various plugins. We also used the Clarifai API to recognize objects from images taken to fight monsters. The entire application was written mainly in Javascript and a little bit of Java. ## Challenges we ran into We had a lot of difficulty trying to accurately detect the user’s movement via the accelerometer functionality on our phones. There wasn’t a reliable way to interpret the type of movement based solely on the accelerometer data due to noise generated by gravity that affected the readings. ## Accomplishments that we're proud of We've implemented an algorithm that detects user movements through the use of the accelerometer, as well as Google location services to reliably predict when the user is walking, running, or involved in some sort of physical movement. We are now also able to successfully detect any image that the user takes using their phone’s camera and match it to the keywords associated with each monster the user encounters within the game. ## What we learned We learnt how to utilize the Clarifai API to detect images and match them to specific keywords that we wanted. We also learnt how to use Microsoft Azure to host our leaderboard server. ## What's next for FitFrog We hope to integrate more active movements within the game such as jumps and leaps (pun intended). We also want to create more interesting quests that would involve different monsters to defeat. In addition to these, we hope to make the leaderboard more personalized to the user by connecting it to their Facebook page.
## Inspiration Have you ever been on a Ferris wheel and wondered what each building in your sight is? Have you ever run into a super cool event while traveling, but you just couldn't figure out what it was about because every piece of information was written in French? Have you ever seen a beautiful building on your Uber ride in a foreign country and just couldn't locate it on Google Maps? Have you ever been to the engineering building at UPenn and hoped from the bottom of your heart that someone from last year's PennApps had left some notes on the wall telling you where a good place to take a nap is? We have, so we designed ARound to solve all of the above. ## What it does ARound provides a brand new experience for exploring the world by taking advantage of AR technology. Its fundamental function is telling you the name of each building in your sight and your real-time distance from it. If you're interested, with a simple tap you'll be shown basic information about the building contributed by others who have been there. Furthermore, if you want to see recent events in the building, just swipe left. ## How we built it We developed ARound with ARKit, CoreLocation, and MapKit so that we can link objects in the AR world to their GPS coordinates. Because CoreLocation's GPS accuracy can be off by as much as 100 meters, we used another library named ARCL to increase accuracy and better link the AR and GPS data. ## Challenges we ran into This was the first time any of our team members had developed an application using Swift and ARKit, and it took us quite a while to get the gist of them. ## Accomplishments that we're proud of ## What we learned We learned Swift and ARKit. ## What's next for ARound With ARound's features and support for multiple major languages (soon to be developed), we expect ARound to be a strong aid for tourists. Additionally, we're open to accepting official content from business operators and event organizers to provide more accurate information about buildings/events in the app, with the potential to run advertisements.
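The real-time distance readout described above reduces to a great-circle distance between the user's GPS fix and a building's stored coordinate. Since ARound itself is written in Swift, the sketch below only illustrates the math, in Python, with made-up coordinates.

```python
# Haversine great-circle distance between the user's GPS fix and a building's coordinate.
# Illustrative only: the app is written in Swift, and the coordinates below are made up.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def distance_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Example: user's position vs. a nearby landmark (placeholder values).
print(f"{distance_m(39.9522, -75.1932, 39.9515, -75.1910):.0f} m away")
```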
losing
## Inspiration Have you ever wondered how cool it would be to have your own A.I. assistant? Imagine how much easier it would be to send emails without typing a single word, do Wikipedia searches without opening a web browser, and perform many other daily tasks, like playing music, with a single voice command. In this project, I show how you can make your own personal A.I. assistant using Python. Many people will argue that the virtual assistant we created is not really A.I., since it is just the output of a collection of scripted statements. But at the fundamental level, the sole purpose of A.I. is to develop machines that can perform human tasks as effectively as humans, or even more effectively. ## What it does What can this A.I. assistant do for you? It can send emails on your behalf. It can play music for you. It can do Wikipedia searches for you. It can open websites like Google, YouTube, etc., in a web browser. It can open your code editor or IDE with a single voice command. ## How I built it I built it using Python and Python modules, and I mainly coded the program in PyCharm. ## Challenges I ran into Many bugs came up along the way, but I was finally able to fix them. ## Accomplishments that I'm proud of I'm proud that this is my first AI program, and through it I learned a little about AI. ## What I learned I learned about virtual assistants and how Alexa and Siri work through AI. ## What's next for Virtual (Jarvis) AI Assistant Next, I am planning various AI-related projects, such as a health management system, a restaurant management system, an automatic bill generator, etc.
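A minimal sketch of the voice-command loop is below. The write-up only says "Python modules," so the speech_recognition, pyttsx3, and wikipedia packages used here are stand-ins rather than the author's confirmed choices.

```python
# Minimal voice-command loop: listen, match a keyword, act, and speak the result.
# speech_recognition, pyttsx3, and wikipedia are stand-ins; the original write-up
# does not name its modules beyond "Python modules".
import webbrowser
import speech_recognition as sr
import pyttsx3
import wikipedia

engine = pyttsx3.init()

def say(text):
    engine.say(text)
    engine.runAndWait()

def listen():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()  # online recognizer; error handling omitted

while True:
    command = listen()
    if "wikipedia" in command:
        topic = command.replace("wikipedia", "").strip()
        say(wikipedia.summary(topic, sentences=2))
    elif "open youtube" in command:
        webbrowser.open("https://youtube.com")
    elif "stop" in command:
        say("Goodbye")
        break
```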
## Inspiration We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. With emerging and developing technology, we implemented neural network models to recognize objects and images, and converting them to an auditory output. ## What it does XTS takes an **X** and turns it **T**o **S**peech. ## How we built it We used PyTorch, Torchvision, and OpenCV using Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program. ## Challenges we ran into While attempting to run the Python code, the video rendering and text-to-speech were out of sync and the frame-by-frame object recognition was limited in speed by our system's graphics processing and machine-learning model implementing capabilities. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration trying to solve this issue due to backwards incompatibilities between module versions. ## Accomplishments that we're proud of We are so proud that we were able to implement neural networks as well as implement object detection using Python. We were also happy to be able to test our program with various images and video recordings, and get an accurate output. Lastly we were able to create a sleek user-interface that would be able to integrate our program. ## What we learned We learned how neural networks function and how to augment the machine learning model including dataset creation. We also learned object detection using Python.
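To make the frame-to-speech idea concrete, here is a rough sketch using a pretrained torchvision detector and pyttsx3. It is not the team's exact pipeline: the confidence threshold, single-image flow, and omission of the COCO label lookup are simplifications.

```python
# Sketch of the X-to-Speech idea: run a pretrained detector over one frame and speak
# a summary of the detections. A simplification, not the team's exact pipeline.
import cv2
import torch
import torchvision
import pyttsx3

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()
engine = pyttsx3.init()

frame = cv2.imread("scene.jpg")  # placeholder image path
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    output = model([tensor])[0]

# Mapping label ids to names needs the COCO category list, omitted here for brevity.
confident = int((output["scores"] > 0.8).sum())
engine.say(f"I detected {confident} objects in the scene")
engine.runAndWait()
```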
## Inspiration The inspiration behind it was my own experiences. Coming from a first-generation student family, I was always obligated to find any discounts or free opportunities to capitalize on. However, these were not always easy to find on the internet. A tool like Froque could help many individuals who are hesitant to spend on something slightly over budget, making that purchase far more manageable. ## What it does Essentially, any activities or discounts are listed in the app and linked to the relevant website in order to register. ## How we built it Using SwiftUI in Xcode for the GUI, along with Firebase for authentication and database storage, so that the app updates effectively in real time. ## Challenges we ran into The biggest challenge was connecting Firebase to the app. This required quite a bit of scrolling through the documentation, with some outdated versions providing incorrect code. ## Accomplishments that we're proud of Working solo, this was my very first SwiftUI app, and I am extremely excited to share it with you. ## What we learned Linking Firebase, force unwrapping vs. using guard statements, safe coding practices, and designing. ## What's next for Froque I have so many ideas planned for Froque. Hopefully, using an AI model, I could update the discounts/extracurricular activities within the area much more easily. Not only that, but tailoring specific activities based on location would be an amazing idea.
partial
## Inspiration While there are several applications that use OCR to read receipts, few take the leap towards informing consumers on their purchase decisions. We decided to capitalize on this gap: we currently provide information to customers about the healthiness of the food they purchase at grocery stores by analyzing receipts. In order to encourage healthy eating, we are also donating a portion of the total value of healthy food to a food-related non-profit charity in the United States or abroad. ## What it does Our application uses Optical Character Recognition (OCR) to capture items and their respective prices on scanned receipts. We then parse through these words and numbers using an advanced Natural Language Processing (NLP) algorithm to match grocery items with its nutritional values from a database. By analyzing the amount of calories, fats, saturates, sugars, and sodium in each of these grocery items, we determine if the food is relatively healthy or unhealthy. Then, we calculate the amount of money spent on healthy and unhealthy foods, and donate a portion of the total healthy values to a food-related charity. In the future, we plan to run analytics on receipts from other industries, including retail, clothing, wellness, and education to provide additional information on personal spending habits. ## How We Built It We use AWS Textract and Instabase API for OCR to analyze the words and prices in receipts. After parsing out the purchases and prices in Python, we used Levenshtein distance optimization for text classification to associate grocery purchases with nutritional information from an online database. Our algorithm utilizes Pandas to sort nutritional facts of food and determine if grocery items are healthy or unhealthy by calculating a “healthiness” factor based on calories, fats, saturates, sugars, and sodium. Ultimately, we output the amount of money spent in a given month on healthy and unhealthy food. ## Challenges We Ran Into Our product relies heavily on utilizing the capabilities of OCR APIs such as Instabase and AWS Textract to parse the receipts that we use as our dataset. While both of these APIs have been developed on finely-tuned algorithms, the accuracy of parsing from OCR was lower than desired due to abbreviations for items on receipts, brand names, and low resolution images. As a result, we were forced to dedicate a significant amount of time to augment abbreviations of words, and then match them to a large nutritional dataset. ## Accomplishments That We're Proud Of Project Horus has the capability to utilize powerful APIs from both Instabase or AWS to solve the complex OCR problem of receipt parsing. By diversifying our software, we were able to glean useful information and higher accuracy from both services to further strengthen the project itself, which leaves us with a unique dual capability. We are exceptionally satisfied with our solution’s food health classification. While our algorithm does not always identify the exact same food item on the receipt due to truncation and OCR inaccuracy, it still matches items to substitutes with similar nutritional information. ## What We Learned Through this project, the team gained experience with developing on APIS from Amazon Web Services. We found Amazon Textract extremely powerful and integral to our work of reading receipts. We were also exposed to the power of natural language processing, and its applications in bringing ML solutions to everyday life. 
Finally, we learned about combining multiple algorithms in a sequential order to solve complex problems. This placed an emphasis on modularity, communication, and documentation. ## The Future Of Project Horus We plan on using our application and algorithm to provide analytics on receipts from outside of the grocery industry, including the clothing, technology, wellness, education industries to improve spending decisions among the average consumers. Additionally, this technology can be applied to manage the finances of startups and analyze the spending of small businesses in their early stages. Finally, we can improve the individual components of our model to increase accuracy, particularly text classification.
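The receipt-item matching step described under "How We Built It" can be sketched with a fuzzy string match. In the snippet below, difflib stands in for a Levenshtein scorer and the nutrition rows are made-up placeholders rather than the team's dataset.

```python
# Fuzzy-match an OCR'd receipt item to the closest entry in a small nutrition table.
# difflib stands in for a Levenshtein scorer; the rows are placeholders.
import difflib

NUTRITION = {
    "whole milk": {"calories": 150, "sugars_g": 12},
    "white bread": {"calories": 80, "sugars_g": 1.5},
    "orange juice": {"calories": 110, "sugars_g": 21},
}

def match_item(receipt_text: str):
    receipt_text = receipt_text.lower()
    best = max(NUTRITION, key=lambda name: difflib.SequenceMatcher(None, receipt_text, name).ratio())
    score = difflib.SequenceMatcher(None, receipt_text, best).ratio()
    return best, NUTRITION[best], round(score, 2)

# Abbreviated receipt text, as it often comes out of OCR.
print(match_item("WHL MILK 2L"))
```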
Rapid Response aims to replace the inefficient methods used today to locate callers during 911 calls. Additionally, we aim to streamline the exchange of information from the user to the dispatcher, allowing first responders to arrive in a more timely manner. Rapid Response takes the phone's latitude, longitude, and altitude and converts them to a street address, which is then sent to the nearest dispatcher in the area along with the nature of the emergency. Furthermore, the user's physical features are also sent to the dispatcher to help identify the victim of the incident, and the victim's emergency contacts are notified as well. With Rapid Response, victims of an incident are able to get the help they need when they need it.
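A minimal sketch of the coordinates-to-street-address step is below, using geopy's free Nominatim geocoder. The write-up does not say which geocoding service Rapid Response actually uses, so treat the library choice and the sample coordinates as assumptions.

```python
# Convert a phone's latitude/longitude into a street address (reverse geocoding).
# geopy's Nominatim geocoder is used for illustration; the project's actual geocoding
# service is not stated, and the coordinates below are placeholders.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="rapid-response-demo")

def to_street_address(lat: float, lon: float) -> str:
    location = geolocator.reverse((lat, lon), exactly_one=True)
    return location.address if location else "unknown location"

print(to_street_address(49.2827, -123.1207))  # downtown Vancouver, as an example
```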
## Inspiration Every 2 in 5 Americans are classified as medically obese. Obesity is the leading cause of suffering in many developed countries, not because of the lack of healthy food available but more so due to the lack of information that many brands provide. In our quest to combat this ever-growing issue globally, we decided to make a nutrition app that not only helped people track their food but also proactively help people in making better choices. ## What it does Fiber uses your camera to scan barcodes that are located on the back of most products. Using the OpenFoodFacts API we access the biggest collection of nutrition information in current human history allowing us to intelligently identify the ingredients and allergens. We parse this information into the OpenAI API using industry-leading generative AI to inform the user of the pros and cons of the product. ## How we built it We created a mobile application front end using React-Native allowing for great cross-platform functionality. Accessing our backend written in Python with Flask, which provides an endpoint for searching the OpenFoodFacts database for product information. We further used OpenAI's GPT3.5 turbo model to summarize and list the benefits and disadvantages as a summary of the ingredients. ## Challenges we ran into Due to the millions of variations of mobile devices that are available globally, we ran into a few issues ensuring that Fiber ran well on a wide variety of devices. In addition, permissions for the camera and the barcode scanning functionality were a small hurdle. The biggest challenge we ran into was learning how to work together as a team, but a few hours in we got the hang of it, using tools on GitHub to optimize our ability to work together. ## Accomplishments that we're proud of We are proud to learn more about the various technologies involved and working with AI to push our purpose through an application. Through this, we were able to make learning more about various products quicker, easier, and more efficient. ## What we learned During the creation of the project, we learned a lot more about the pros and cons of different everyday products. It was surprising to see the various ingredients and potential hazards as well. After the creation of the project, our team was able to gain a deeper understanding of implementing AI into an application and revealed potential issues with everyday products. ## What's next for Fiber: Your AI Nutrition Companion In the future, we hope to add more features that will specify additional information about the product, including the nutrition facts and some potential ways that the product could be used. For example, some food products could include recipes on various nutritional dishes.
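Here is a minimal sketch of the barcode lookup step. The OpenFoodFacts URL is the commonly documented public product endpoint but should be treated as an assumption, and the GPT-3.5 pros/cons summarization is left as a stub rather than a real OpenAI call.

```python
# Look up a scanned barcode in OpenFoodFacts and pull out ingredients and allergens.
# The endpoint URL is assumed from public documentation; the GPT-3.5 summarization
# step is a stub, not an actual OpenAI call.
import requests

def lookup_product(barcode: str) -> dict:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    product = requests.get(url, timeout=10).json().get("product", {})
    return {
        "name": product.get("product_name", "unknown"),
        "ingredients": product.get("ingredients_text", ""),
        "allergens": product.get("allergens", ""),
    }

def summarize_pros_cons(info: dict) -> str:
    # Placeholder for the generative-AI call that turns ingredients into pros and cons.
    return f"A pros/cons summary for {info['name']} would be generated here."

info = lookup_product("737628064502")  # example barcode
print(summarize_pros_cons(info))
```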
winning
## Inspiration In August, one of our team members was hit by a drunk driver. She survived with a few cuts and bruises, but unfortunately, there are many victims who are not as lucky. The emotional and physical trauma she and other drunk-driving victims experienced motivated us to try and create a solution in the problem space. Our team initially started brainstorming ideas to help victims of car accidents contact first response teams faster, but then we thought, what if we could find an innovative way to reduce the amount of victims? How could we help victims by preventing them from being victims in the first place, and ensuring the safety of drivers themselves? Despite current preventative methods, alcohol-related accidents still persist. According to the National Highway Traffic Safety Administration, in the United States, there is a death caused by motor vehicle crashes involving an alcohol-impaired driver every 50 minutes. The most common causes are rooted in failing to arrange for a designated driver, and drivers overestimating their sobriety. In order to combat these issues, we developed a hardware and software tool that can be integrated into motor vehicles. We took inspiration from the theme “Hack for a Night out”. While we know this theme usually means making the night out a better time in terms of fun, we thought that another aspect of nights out that could be improved is getting everyone home safe. Its no fun at all if people end up getting tickets, injured, or worse after a fun night out, and we’re hoping that our app will make getting home a safer more secure journey. ## What it does This tool saves lives. It passively senses the alcohol levels in a vehicle using a gas sensor that can be embedded into a car’s wheel or seat. Using this data, it discerns whether or not the driver is fit to drive and notifies them. If they should not be driving, the app immediately connects the driver to alternative options of getting home such as Lyft, emergency contacts, and professional driving services, and sends out the driver’s location. There are two thresholds from the sensor that are taken into account: no alcohol present and alcohol present. If there is no alcohol present, then the car functions normally. If there is alcohol present, the car immediately notifies the driver and provides the options listed above. Within the range between these two thresholds, our application uses car metrics and user data to determine whether the driver should pull over or not. In terms of user data, if the driver is under 21 based on configurations in the car such as teen mode, the app indicates that the driver should pull over. If the user is over 21, the app will notify if there is reckless driving detected, which is based on car speed, the presence of a seatbelt, and the brake pedal position. ## How we built it Hardware Materials: * Arduino uno * Wires * Grove alcohol sensor * HC-05 bluetooth module * USB 2.0 b-a * Hand sanitizer (ethyl alcohol) Software Materials: * Android studio * Arduino IDE * General Motors Info3 API * Lyft API * FireBase ## Challenges we ran into Some of the biggest challenges we ran into involved Android Studio. Fundamentally, testing the app on an emulator limited our ability to test things, with emulator incompatibilities causing a lot of issues. Fundamental problems such as lack of bluetooth also hindered our ability to work and prevented testing of some of the core functionality. 
In order to test erratic driving behavior on a road, we wanted to track a driver’s ‘Yaw Rate’ and ‘Wheel Angle’, however, these parameters were not available to emulate on the Mock Vehicle simulator app. We also had issues picking up Android Studio for members of the team new to Android, as the software, while powerful, is not the easiest for beginners to learn. This led to a lot of time being used to spin up and just get familiar with the platform. Finally, we had several issues dealing with the hardware aspect of things, with the arduino platform being very finicky and often crashing due to various incompatible sensors, and sometimes just on its own regard. ## Accomplishments that we're proud of We managed to get the core technical functionality of our project working, including a working alcohol air sensor, and the ability to pull low level information about the movement of the car to make an algorithmic decision as to how the driver was driving. We were also able to wirelessly link the data from the arduino platform onto the android application. ## What we learned * Learn to adapt quickly and don’t get stuck for too long * Always have a backup plan ## What's next for Drink+Dryve * Minimize hardware to create a compact design for the alcohol sensor, built to be placed inconspicuously on the steering wheel * Testing on actual car to simulate real driving circumstances (under controlled conditions), to get parameter data like ‘Yaw Rate’ and ‘Wheel Angle’, test screen prompts on car display (emulator did not have this feature so we mimicked it on our phones), and connecting directly to the Bluetooth feature of the car (a separate apk would need to be side-loaded onto the car or some wi-fi connection would need to be created because the car functionality does not allow non-phone Bluetooth devices to be detected) * Other features: Add direct payment using service such as Plaid, facial authentication; use Docusign to share incidents with a driver’s insurance company to review any incidents of erratic/drunk-driving * Our key priority is making sure the driver is no longer in a compromising position to hurt other drivers and is no longer a danger to themselves. We want to integrate more mixed mobility options, such as designated driver services such as Dryver that would allow users to have more options to get home outside of just ride share services, and we would want to include a service such as Plaid to allow for driver payment information to be transmitted securely. We would also like to examine a driver’s behavior over a longer period of time, and collect relevant data to develop a machine learning model that would be able to indicate if the driver is drunk driving more accurately. Prior studies have shown that logistic regression, SVM, decision trees can be utilized to report drunk driving with 80% accuracy.
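The two-threshold decision logic described under "What it does" is language-agnostic, so here is a small Python sketch of it. The threshold values and metric names are invented for illustration; the real system runs on an Arduino gas sensor feeding an Android app.

```python
# Sketch of the decision logic: below the low threshold the car behaves normally, above
# the high threshold alternatives are offered, and in between the driver's age and car
# metrics decide. Threshold values and metric names are invented for illustration.
LOW_THRESHOLD = 200    # raw sensor reading below which no alcohol is assumed
HIGH_THRESHOLD = 600   # reading above which alcohol is clearly present

def decide(sensor_reading, driver_under_21, speed_kmh, seatbelt_on, hard_braking):
    if sensor_reading < LOW_THRESHOLD:
        return "normal driving"
    if sensor_reading >= HIGH_THRESHOLD:
        return "alcohol detected: suggest Lyft, emergency contacts, or a driving service"
    if driver_under_21:  # ambiguous range: check user data first
        return "pull over"
    reckless = speed_kmh > 120 or not seatbelt_on or hard_braking
    return "pull over" if reckless else "continue with caution"

print(decide(450, driver_under_21=False, speed_kmh=80, seatbelt_on=True, hard_braking=False))
```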
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm. ## About Our platform provides a simple yet efficient user experience with a straightforward, easy-to-use one-page interface. We made it one page so that all the tools are accessible on one screen and transitions between them are easier. We treat this page as a study room that users can join and collaborate in with a simple URL. Everything is synced between users in real time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express on the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions; we realized communication was key for us to succeed in building our project under a time constraint. We also ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit; to add more relevant tools and widgets and expand to other fields of work to increase our user demographic; and to include interface customization options so users can personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows for users to search a “hub” using a Google Map API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using Github and Android Studio, and incorporated both a Google Maps API as well integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
winning
## Inspiration Internet addiction, while not yet codified within a psychological framework, is growing in prevalence as a potentially problematic condition with many parallels to existing recognized disorders. How much time do you spend on your phone a day? On your laptop? Using the internet? What fraction of that time is spent doing things that are actually productive? Over the past years, an increasingly strong link has been observed between a growing online presence and deteriorating mental health. We spend more time online than we do taking care of ourselves and our mental/emotional wellbeing. People are becoming more aware of their own mental health and more open to sharing their struggles and dealing with them, yet the distractions of social media, games, and scrolling constantly undermine those efforts. Even with an understanding of the harmful nature of these technologies, we still find it difficult to pull our attention away from them. Much of the media we consume online is built to be addicting, to hook us in. How can we pull ourselves away from these distractions in a way that doesn't feel punishing? ## What it does Presents audio-visual stimulation that allows the user to become more aware of their "mind space": lo-fi audio and slow-moving figures; a timer that resets every time the keyboard or mouse is moved; and a program that awards the player a new plant every time the timer runs to completion. ## How we built it UI/UX design using Figma; frontend using HTML, JS, and CSS; AWS Amplify for deploying our web app; GitHub for version control. ## Challenges we ran into We initially wanted to use React and develop our server backend ourselves, but since we are inexperienced, we had to scale back our goals. The team consisted mostly of people with backend experience, so it was difficult to switch to front-end work. Furthermore, most of our members were participating in their first hackathon. ## Accomplishments that we're proud of We're very proud of learning how to work together as a team and manage projects in GitHub for the first time. We're proud of having an end product, even though it didn't fully meet our expectations. We're happy to have had this experience, and can't wait to participate in more hackathons in the future. ## What we learned We developed a lot of web development skills, specifically with JavaScript, as most of our members had never used it in the past. We also learned a lot about AWS. We're all very excited about how we can leverage AWS to develop more serverless web applications in the future. ## What's next for Mind-Space We want to develop Mind-Space to play more like an idle game, where the user can choose their preferred relaxing music or guided meditation. As the player spends more time in their space, different plants will grow, some animals will be introduced, and eventually what started as just one sprout will become a whole living ecosystem. We want to add social features, where players can add friends and visit each other's Mind-Space, leveraging AWS Lambda and MongoDB to achieve this.
## Inspiration Our inspiration comes from many of our own experiences with dealing with mental health and self-care, as well as from those around us. We know what it's like to lose track of self-care, especially in our current environment, and wanted to create a digital companion that could help us in our journey of understanding our thoughts and feelings. We were inspired to create an easily accessible space where users could feel safe in confiding in their mood and check-in to see how they're feeling, but also receive encouraging messages throughout the day. ## What it does Carepanion allows users an easily accessible space to check-in on their own wellbeing and gently brings awareness to self-care activities using encouraging push notifications. With Carepanion, users are able to check-in with their personal companion and log their wellbeing and self-care for the day, such as their mood, water and medication consumption, amount of exercise and amount of sleep. Users are also able to view their activity for each day and visualize the different states of their wellbeing during different periods of time. Because it is especially easy for people to neglect their own basic needs when going through a difficult time, Carepanion sends periodic notifications to the user with messages of encouragement and assurance as well as gentle reminders for the user to take care of themselves and to check-in. ## How we built it We built our project through the collective use of Figma, React Native, Expo and Git. We first used Figma to prototype and wireframe our application. We then developed our project in Javascript using React Native and the Expo platform. For version control we used Git and Github. ## Challenges we ran into Some challenges we ran into included transferring our React knowledge into React Native knowledge, as well as handling package managers with Node.js. With most of our team having working knowledge of React.js but being completely new to React Native, we found that while some of the features of React were easily interchangeable with React Native, some features were not, and we had a tricky time figuring out which ones did and didn't. One example of this is passing props; we spent a lot of time researching ways to pass props in React Native. We also had difficult time in resolving the package files in our application using Node.js, as our team members all used different versions of Node. This meant that some packages were not compatible with certain versions of Node, and some members had difficulty installing specific packages in the application. Luckily, we figured out that if we all upgraded our versions, we were able to successfully install everything. Ultimately, we were able to overcome our challenges and learn a lot from the experience. ## Accomplishments that we're proud of Our team is proud of the fact that we were able to produce an application from ground up, from the design process to a working prototype. We are excited that we got to learn a new style of development, as most of us were new to mobile development. We are also proud that we were able to pick up a new framework, React Native & Expo, and create an application from it, despite not having previous experience. ## What we learned Most of our team was new to React Native, mobile development, as well as UI/UX design. We wanted to challenge ourselves by creating a functioning mobile app from beginning to end, starting with the UI/UX design and finishing with a full-fledged application. 
During this process, we learned a lot about the design and development process, as well as our capabilities in creating an application within a short time frame. We began by learning how to use Figma to develop design prototypes that would later help us in determining the overall look and feel of our app, as well as the different screens the user would experience and the components that they would have to interact with. We learned about UX and how to design a flow that would give the user the smoothest experience. Then, we learned the basics of React Native and integrated our knowledge of React into the learning process. We were able to pick it up quickly, and use the framework in conjunction with Expo (a platform for creating mobile apps) to create a working prototype of our idea. ## What's next for Carepanion While we were nearing the end of work on this project during the allotted hackathon time, we thought of several ways we could expand and add to Carepanion that we did not have enough time to get to. In the future, we plan on continuing to develop the UI and functionality; ideas include customizable check-in and calendar options, expanding the bank of messages and notifications, personalizing the messages further, and allowing customization of the colours of the app for a more visually pleasing and calming experience for users.
## Inspiration We wanted to create a convenient, modernized journaling application with methods and components that are backed by science. Our spin on the readily available journaling application is our take on the idea of awareness itself. What does it mean to be aware? What form or shape can mental health awareness come in? These were the key questions that we were curious about exploring, and we wanted to integrate this idea of awareness into our application. The "awareness" approach of the journal functions by providing users with the tools to track and analyze their moods and thoughts, as well as allowing them to engage with visualizations of their journal entries to foster meaningful reflection. ## What it does Our product provides a user-friendly platform for logging and recording journal entries and incorporates natural language processing (NLP) to conduct sentiment analysis. Users will be able to see generated insights from their journal entries, such as how their sentiments have changed over time. ## How we built it Our front-end is powered by the ReactJS library, while our backend is powered by ExpressJS. Our sentiment analyzer was integrated with our NodeJS backend, which is also connected to a MySQL database. ## Challenges we ran into Building this app in such a short period of time proved to be more of a challenge than we anticipated. Our product was meant to comprise more features supporting both the journaling and the mood-tracking aspects of the app. We had planned on showcasing an aggregation of the user's mood over different time periods, for instance, daily, weekly, monthly, etc. On top of that, we had initially planned on deploying our web app to a remote hosting server, but due to the time constraint we decided to reduce our proof-of-concept to the most essential core features of our idea. ## Accomplishments that we're proud of Designing and building such an amazing web app has been a wonderful experience. To think that we created a web app that could potentially be used by individuals all over the world and could help them keep track of their mental health has been such a proud moment. It really embraces the essence of a hackathon in its entirety. This accomplishment is a moment that our team can be proud of. The animation video is an added bonus; visual presentations have a way of captivating an audience. ## What we learned By going through the whole cycle of app development, we learned how one single part does not comprise the whole. What we mean is that designing an app is more than just coding it; the real work starts in showcasing the idea to others. In addition to that, we learned the importance of a clear roadmap for approaching issues (for example, coming up with an idea) and that complicated problems do not require complicated solutions; for instance, our app, in its simplicity, allows users to journal and to keep track of their moods over time. And most importantly, we learned how the simplest of ideas can be the most useful if they are thought through properly. ## What's next for Mood for Thought Making a mobile app could have been better, given that it would align with our goals of making journaling as easy as possible. Users could also retain a degree of functionality offline. This could have also enabled a notification feature that would encourage healthy habits. More sophisticated machine learning would have the potential to greatly improve the functionality of our app.
Right now, simply determining either positive/negative sentiment could be a bit vague. Adding recommendations on good journaling practices could have been an excellent addition to the project. These recommendations could be based on further sentiment analysis via NLP.
partial
### Simple QR Code Bill Payment #### nwHacks 2020 Hackathon Project #### Main repository for the rapidserve application ### Useful Links * [Github](https://github.com/rossmojgani/rapidserve) * [DevPost](https://devpost.com/software/rapidserve-g1skzh) ### Team Members * Ross Mojgani * Dryden Wiebe * Victor Parangue * Aric Wolstenholme ### Description RapidServe is a mobile application which allows restaurants to charge their customers through a mobile application interface. Powered with a React Native frontend and Python Flask API server with a mongoDB database, RapidServe uses QR codes linked to tables to allow the customer to scan the QR code at their table and pay for any item at their table. Once all the items at the customers table are paid for, the customer is free to go and the waiter/waitress does not need to be bothered and wait for each customer at a table to pay individually. ### Technical Details * Frontend Mobile Application **(React Native)** + The frontend was implemented using React Native, there is a landing page where the user can register or log in, using a facebook integration to link their facebook account. + While creating an account, if the user is a waiter/waitress, they are prompted to enter their restaurant ID, along with entering their username/password combination. If the user is a customer, they will just be prompted for a username/password combination. + The page which comes up next is a page to scan a QR code which corresponds to the table which the waiter/waitress is serving or the customer is sitting at, the customer will be able to see which items have been charged to their table and pay for whichever items they need to. The waiter/waitress will be allowed to add items to the table they are serving. + The user can pay for their items and the waiter/waitress can see if the table has been paid for and know the customers are good to go. * API Details **(Flask/Python API)** + The API for this application was implemented using the flask framework along with Python, there was documentation which the frontend used to make their HTTP requests, [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md), this API document was the contract between the frontend and the backend in terms of what arguments were sent into what type of HTTP requests. The API was hosted on a virtual machine in the cloud. + The API queried our mongoDB database based on which requests were being processed, which was also hosted on a virtual machine on the cloud, more below. * Database Details **(MongoDB)** + The database used was mongoDB, which was queried from the Flask/Python server using PyMongo and Flask\_PyMongo, we used two collections mainly, **users, and orders** which stored objects based on what a user needed to have stored (see [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/API.md) for a user object example) and for what a tables order would be (again, see [API DOCUMENTATION](https://github.com/rossmojgani/rapidserve/blob/master/backend/API.md) for a table object example)
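To make the API-plus-MongoDB layer concrete, here is a minimal sketch of a Flask/Flask-PyMongo endpoint that returns the order for a table identified by its QR code. The route, field names, and connection string are illustrative assumptions, not the documented RapidServe API.

```python
# Minimal Flask + Flask-PyMongo endpoint returning the order for a QR-coded table.
# The route, field names, and connection string are assumptions for illustration.
from flask import Flask, jsonify
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://localhost:27017/rapidserve"  # placeholder URI
mongo = PyMongo(app)

@app.route("/order/<table_id>", methods=["GET"])
def get_order(table_id):
    order = mongo.db.orders.find_one({"table_id": table_id}, {"_id": 0})
    if order is None:
        return jsonify({"error": "no order for this table"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(debug=True)
```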
## Inspiration We were inspired by the Interac API because of how simple it made money requests. We all realized that one thing we sometimes struggle with is splitting the bill, as restaurants sometimes don't accommodate larger parties. ## What it does Our simple web app allows you to upload your receipt and digitally invoice your friends for their meals. ## How we built it For processing the receipts, we used Google Cloud's Vision API, a machine learning service that recognizes characters in images and converts them into digital text. We used HTML, CSS, JavaScript, and jQuery to create an easy-to-use and intuitive interface that makes splitting the bill as easy as ever. Behind the scenes, we used Flask and developed Python scripts to process the data entered by the users and to facilitate their movement through our interface. We used the Interac e-Transfer API to send payment requests to the user's contacts. These requests can be fulfilled and the payments will be automatically deposited into the user's bank account. ## Challenges we ran into The Optical Character Recognition (OCR) API does not handle the receipt format very well. The item names and costs are read in different orders, do not always come out in pairs, and have no characters that separate the items. Therefore we needed to develop an algorithm that could separate the words and recognize which characters were actually useful. The Interac e-Transfer API example was given to us as a React app. Most of us have had no experience with React before. We needed to find a way to still be able to call the API and integrate the caller with the rest of the web app, which was built with HTML, CSS, and JavaScript. There were also a few difficulties passing data between the front-end interface and the back-end service routines. ## Accomplishments that we're proud of This was the first hackathon for two of our team members, and it was a fresh experience for us to work on a project in 24 hours. We had little to no experience with full-stack development and Google Cloud Platform tools. However, we figured out our way step by step, with help from the mentors and online resources. We managed to integrate a few APIs into this project and tied together the front-end and back-end designs into a functional web app. ## What we learned * How to call Google Cloud APIs * How to host a website on Google Cloud Platform * How to set up an HTTP request in various languages * How to make dynamically interactive web pages * How to handle front-end and back-end requests ## What's next for shareceipt We hope to take shareceipt to the next level by filling in all the areas we did not have enough time to fully explore due to the nature of a hackathon. In the future, we could add mobile support and Facebook and other social media integration to expand our user base and allow many more users to enjoy a simple way to dine out with friends.
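The parsing challenge above, pairing item names with prices in messy OCR output, can be sketched with a single regex pass. The sample text below is invented; real receipts are noisier and need more robust handling.

```python
# Pair item names with prices from OCR'd receipt text, skipping summary lines.
# The sample text is invented; real OCR output is noisier than this.
import re

ocr_text = """
LATTE 4.50
BLT SNDWCH 8.25
SUBTOTAL 12.75
TAX 0.64
TOTAL 13.39
"""

SKIP = {"SUBTOTAL", "TAX", "TOTAL"}
line_pattern = re.compile(r"^(?P<item>[A-Za-z][A-Za-z\s]*?)\s+(?P<price>\d+\.\d{2})$")

items = []
for line in ocr_text.strip().splitlines():
    match = line_pattern.match(line.strip())
    if match and match.group("item").strip().upper() not in SKIP:
        items.append((match.group("item").strip(), float(match.group("price"))))

print(items)  # [('LATTE', 4.5), ('BLT SNDWCH', 8.25)]
```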
## Inspiration When you finish eating with a group of friends, especially when the group is large and the restaurant can't split the bill, it can be hard to figure out how much each person should pay for the meal. We wanted to create an app that would streamline the bill-splitting process and ensure everyone paid the proper amount. ## What it does and how we built it The app takes a picture of the receipt, uploads it to an AWS S3 bucket, and uses AWS Textract to get the text off the receipt. An AWS Lambda function then uses this text to identify which items were ordered and their associated prices. These items and prices are stored in a database which also stores the names of the people associated with each item and an id value for each entry. This database is then accessed by our app through an API call, which displays the item name, price, and associated people in a table. The database runs on an AWS EC2 instance that was created with Elastic Beanstalk. From this page, the user can change which people are associated with which items. These updates are also pushed to the database. Once the user confirms the selections, the app displays a final screen with the items and total charge for each person. Although currently not possible due to Venmo having a closed API, we originally intended for these final charges to connect to Venmo to charge each user. ## Challenges we ran into We ran into a lot of issues getting the AWS Lambda code to access the AWS S3 bucket we set up. None of our team members had used AWS in any capacity before, so all of the necessary steps with IAM users, permissions, and other aspects along that vein were hard to figure out on a time crunch. Additionally, the front end of the app ended up being much harder to develop than we initially expected. Much like with AWS, none of us had coded an app before, and we decided to use React with Expo. However, this created a lot of weird syntax errors we had never seen before and left us unable to create relatively simple page structures (namely a table) that were necessary for our app. Though we were able to successfully take a picture with the camera, we were unable to fix the table-based front-end issues in time to connect the front end and the back end, so although there is a working back end and a usable front end, the two are not yet connected. ## Accomplishments that we're proud of We're very proud of the progress made with the various AWS features we used and the databases. We went from not knowing anything about AWS to successfully using Python code to upload multiple images to the bucket, analyze and parse them with the AWS Lambda function, and upload the data to our database. ## What we learned We learned a lot about how AWS works, especially Textract, Lambda functions, and S3 buckets. Additionally, we learned how to create a Flask database and REST API. We also learned that app development was a lot harder than we expected, as we thought our previous coding knowledge would transfer over better. ## What's next for Moven We hope to properly implement the front end of our app so it can be hooked into the back end. The camera function, though it works, isn't at the level of polish or as aesthetically nice as we hoped it would be. In addition, we intend to finish the partially implemented pages showing the receipt items and the final costs for each person after this hackathon.
After those pages are done, we hope to figure out a way to get access to Venmo APIs or otherwise implement Venmo within the app to allow the final charges for everyone to be directly ported to Venmo.
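A minimal sketch of the Textract step described earlier is below: read a receipt image that was uploaded to S3 and return its text lines for the Lambda parsing stage. The bucket and object names are placeholders.

```python
# Read a receipt image from S3 with AWS Textract and return its text lines.
# Bucket and object names are placeholders; credentials come from the environment.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

def receipt_lines(bucket: str, key: str) -> list:
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return [block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"]

print(receipt_lines("moven-receipts", "uploads/receipt-001.jpg"))
```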
partial
## Inspiration Product managers and startup founders rely heavily on qualitative, not quantitative, data to make product decisions. Their primary touchpoint with customers is user interviews. However, user interviews are useless if each lives in a siloed document. How can we assist product managers and startup founders in creating a coherent body of knowledge from their user interactions? ## What it does From a note, users can choose a specific paragraph and query for semantically similar information across the database with one shortcut. The query sidebar returns not only the relevant notes but also points out exactly which line/paragraph of those notes is relevant to the query. ## How we built it * User interface: Next.js with TypeScript * Backend API: Python Flask * Database: Pinecone (vector database) and Firebase We store the original documents in Firebase Firestore, but we also dissect our notes into paragraphs, embed them into vectors, and store the vectors in Pinecone for semantic search. ## Challenges we ran into Solidifying the idea of the project was the most difficult part, because there were various ways we could go about solving the problem. This led to miscommunication during development that hindered our progress. ## Accomplishments that we're proud of Despite not having much experience in full-stack development or AI, we still completed the project and learned many valuable skills along the way. ## What's next for NoteFusion Given more time, we would like to enhance the product to give a detailed analysis of the highlighted content based on past notes, suggesting personalized approaches for notetakers. This app uses generative AI and personalization to enhance your note-taking experience. It improves the quality of your notes by clarifying any gibberish and incorporates easy commands without relying on other sources like MLA, LaTeX, and more. This transforms your disorganized notes into easy-to-understand and personalized content, tailored to your learning needs.
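The embed-and-query idea above can be sketched without the Pinecone service by holding the vectors in memory. The model name below is a commonly used sentence-transformers checkpoint and is an assumption rather than part of the team's stack; the sample paragraphs are invented.

```python
# In-memory sketch of paragraph-level semantic search: embed note paragraphs, embed a
# query, and return the closest paragraph. Pinecone would hold these vectors in the
# real system; the model name and sample paragraphs are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

paragraphs = [
    "Interviewee struggled to export reports from the dashboard.",
    "Users want keyboard shortcuts for common actions.",
    "Pricing page copy confused two of the five interviewees.",
]
corpus_emb = model.encode(paragraphs, convert_to_tensor=True)

query = "Which interviews mention confusing copy?"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(paragraphs[best], float(scores[best]))
```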
## Inspiration Beautiful notes that are simple to read can make studying easier and better. We want to write notes that don't require us flipping through hundreds of pages for definitions or spending hours trying to format them to look nice. So we decided to create a note-taking application that does that for us. ## What it does * If you define a definition (ex: <Velocity | Velocity is the vector of speed >), the application will remember your definition and show a tool tip with your definition if you hover over any instance of the word you defined. (ex: if you defined *Velocity* and used the word *Velocity* later in your notes, you can see the definition for velocity by hovering over that word). You can also add examples to go along with your definitions. * These examples can be written in LaTeX and will be formatted accordingly. * Automatically formats your notes and definitions to make them nice to read. * Prepares them for print form (like LaTeX!) And ALL you need to do is type normally like you would in Microsoft Word or Google Docs. ## How we built it We used React to build our application. ## Challenges we ran into Parsing the text and formatting them properly in the output window was particularly challenging. ## Accomplishments that we're proud of * This entire project * Getting the text parser to work properly ## What we learned * How to delegate roles and allocate time for tasks for a hackathon. * Most of our team learned how to properly use GitHub. ## What's next for Context * Additional text formatting features like Microsoft Word (options to change font, etc.)
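A small sketch of parsing the <Term | Definition> syntax described above follows. Context itself is a React app, so this Python version only illustrates the parsing idea behind the tooltip feature.

```python
# Parse <Term | Definition> blocks out of a note, building the definition map that a
# tooltip layer would later consult. Context is a React app; this only shows the idea.
import re

note = "Kinematics intro. <Velocity | Velocity is the vector of speed> An object's Velocity changes under acceleration."

definition_pattern = re.compile(r"<\s*(?P<term>[^|<>]+?)\s*\|\s*(?P<definition>[^<>]+?)\s*>")

definitions = {m.group("term"): m.group("definition") for m in definition_pattern.finditer(note)}
clean_text = definition_pattern.sub("", note)

print(definitions)         # {'Velocity': 'Velocity is the vector of speed'}
print(clean_text.strip())  # the note with definition blocks stripped out
```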
## Inspiration

We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle to write clearly by hand. Whether you're a second-language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.

## What it does

We use an LLM to generate sample phrases, sentences, or character strings that target the letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!

## How we built it

We used a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and simple JavaScript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with Python-based sequence-alignment algorithms.

## Challenges we ran into

We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept.

..also sleep 🥲

## Accomplishments that we're proud of

We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly though, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.

## What we learned

3 of us have never been to a hackathon before! 3 of us never used Flask before! All of us have never worked together before! From working with an entirely new team to utilizing specific frameworks, we learned A TON.... and also, just how much caffeine is too much (hint: NEVER).

## What's Next for Handwriting Teacher

Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online.) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress.
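The parse-and-score step could look roughly like the sketch below, which compares the OCR transcript against the target phrase with Python's difflib as a stand-in for the team's own sequence-alignment logic, then tallies which letters were missed so new practice phrases can target them.

```python
from collections import Counter
from difflib import SequenceMatcher

def score_attempt(target: str, ocr_text: str):
    target, ocr_text = target.lower(), ocr_text.lower()
    matcher = SequenceMatcher(None, target, ocr_text)
    score = matcher.ratio()  # 0.0 (nothing matched) .. 1.0 (perfect)

    # Tally which target letters fell inside non-matching regions,
    # so the LLM can generate new phrases targeting those letters.
    missed = Counter()
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            missed.update(ch for ch in target[i1:i2] if ch.isalpha())
    return score, missed

score, missed = score_attempt("the quick brown fox", "the qvick brwn fox")
print(f"similarity: {score:.2f}, letters to practice: {missed.most_common(3)}")
```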
losing
## Inspiration

Our conviction that technology has the capacity to enhance our lives serves as our inspiration. In the modern world, where health is mostly ignored, we feel obligated to adapt traditional technologies and smartwatches into personal health coaches. We aim to use technology to provide individuals with the skills and knowledge they need to live longer, healthier lives. 'Health is Wealth' serves as the project's major driving principle. The goal of this effort is to save lives and enhance human health. It goes beyond data technology. It demonstrates how technology can help us live healthier and happier lives.

## What it does

Our health-focused system pairs familiar technology with a deep commitment to improving individuals' well-being. The system is built upon a solid foundation, with MongoDB handling backend data storage. All user health and fitness data are collected through the Terra API and securely stored and managed, ensuring that the information remains easily accessible and reliably protected. Users can interact with the system seamlessly, making it accessible to individuals of all technical backgrounds.

The heart of our system lies in the smartwatch, which serves as a constant companion for users. This device is equipped with sensors and features that monitor various health metrics, including heart rate, physical activity, sleep patterns, and more. Users receive valuable recommendations for improving their fitness and overall well-being. Whether it's suggesting a short walk or a breathing exercise, or reminding them to stay hydrated, the system acts as a proactive guide on the path to better health.

## How we built it

Creating our project was a team effort that involved dividing our tasks to get things done effectively. Some of us focused on the front end, which is the part you see and interact with, while others tackled the back end, handling data storage and processing. We also spent time understanding how the Terra API worked; it was like unlocking a puzzle. By experimenting with dummy data and webhooks, we pieced together the API's workings. At the same time, we gathered in brainstorming sessions, throwing around new ideas to make our project even better. This collaborative process made sure we could build a web app that connects to the Terra API, fetches your health data from various apps, and stores it safely in MongoDB. It's all part of our mission to provide an accessible and user-friendly solution for health and fitness tracking.

## Challenges we ran into

Our journey was a bit like solving a tricky puzzle. We had some difficulties understanding how the Terra API worked, and this was a big hurdle. Even though the Terra API is quite useful, we faced some problems on their side, and we even found a few bugs in the system. Just like any project, we ran into some hiccups and errors while running the program. But here's the good part: despite these challenges, our team didn't give up. We worked together to figure things out and build a strong system that helps people with their health and fitness.

## What we learned

Our project journey was a valuable learning experience. We discovered the importance of persistence and teamwork when dealing with complex systems like the Terra API. Figuring out how to make technology work for the benefit of everyone was a lesson in patience and problem-solving. We also gained insights into the importance of testing and quality control, as uncovering bugs and errors taught us the value of attention to detail.
Moreover, this project reinforced the idea that innovation thrives when individuals with diverse skills come together to tackle real-world challenges. The journey was not just about creating a web app but about fostering a deeper understanding of how technology can improve lives, and the potential for future endeavors is brighter than ever.

## What's next for Fitlife

Looking ahead, FitVibes has a promising future, rooted in the lessons from our journey. We see FitVibes growing into a comprehensive health and fitness ecosystem, delivering personalized health recommendations and offering a holistic view of users' well-being. The platform's potential to integrate with a wide range of health and fitness apps, promote community and telehealth, contribute to health science, and expand internationally makes it a catalyst for positive change. FitVibes aims to become a one-stop destination for health and fitness, where users can not only access data-driven insights but also interact with a supportive community, seek professional guidance, and engage in global health research efforts. The future of FitVibes is characterized by innovation, inclusivity, and a strong commitment to empowering individuals on their journey toward better health and well-being.
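A hedged sketch of the ingest path described under "How we built it": a Flask webhook receives pushed health data and stores it in MongoDB. The route, payload fields, and database names are assumptions for illustration, since the actual Terra payload schema is not spelled out in the writeup.

```python
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
collection = MongoClient("mongodb://localhost:27017")["fitapp"]["health_events"]

@app.route("/terra/webhook", methods=["POST"])  # assumed route, registered with the provider
def terra_webhook():
    payload = request.get_json(force=True) or {}
    # Store the raw event; field names depend on the provider's payload schema.
    doc = {
        "user_id": payload.get("user", {}).get("user_id"),
        "type": payload.get("type"),          # e.g. activity, sleep, heart rate
        "data": payload.get("data", []),
        "raw": payload,
    }
    collection.insert_one(doc)
    return jsonify({"status": "stored"}), 200

if __name__ == "__main__":
    app.run(port=5000)
```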
## Inspiration

Do you ever notice that when you promise someone else something, you do your absolute best to make sure you keep it? But when it comes to making promises to yourself, you break them left and right. We were inspired by this idea from our own lack of motivation to get up and go to the gym in the mornings as we promise ourselves. We incentivize you to keep the promises you make to yourself, starting with the gym. You tell us where and when you plan on going to the gym and put money on the line for charity if you fail to do so. If you keep your promise, we seamlessly send the money back into your account. If you fail to do so, the money gets donated to a charity of your choice.

## How we built it

We created this app on iOS using SwiftUI and CockroachDB as our database. Additionally, we used Solana to implement our smart contracts. We use iOS's location feature to verify that the user is at the location they selected during the time they promised to be there.

## Challenges we ran into

Foolishly, we decided to build an iOS app without having had any iOS experience, which prompted us to run into a lot of challenges and a lot of online tutorials.

## Accomplishments that we're proud of

We are proud that we were able to build a minimum viable product within 24 hours with only 3 laptops due to technical issues. We are also proud of the fact that we were able to build an iOS app without any previous experience using SwiftUI.

## What we learned

We learned that an idea may seem simple, but once you start to implement the features, a lot more things pop up that you may not have anticipated. This set us back a little, but by sacrificing sleep, we were still able to get our product running at the last minute.

## What's next for PromiseJar

What's next? Well, currently PromiseJar is limited to keeping gym promises. We plan to expand on this to allow any type of commitment. Additionally, we plan to bring a social feature to the app by allowing people to create groups and compete against each other while making money for charity.
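The gym check-in could be verified with a simple distance test between the phone's reported location and the gym the user promised to visit. This is a hedged Python sketch of that logic (the actual app does this in SwiftUI); the coordinates and radius are made up.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in metres.
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def kept_promise(user_lat, user_lon, gym_lat, gym_lon, radius_m=100):
    # Promise is kept if the phone was within radius_m of the gym during the window.
    return haversine_m(user_lat, user_lon, gym_lat, gym_lon) <= radius_m

# Example: user standing roughly 40 m from the gym entrance.
print(kept_promise(43.4723, -80.5449, 43.4726, -80.5452))
```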
## Inspiration

Kevin, one of our team members, is an enthusiastic basketball player and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happened away from the doctor's office: he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance.

Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they need to do at-home exercises individually, without supervision. For these patients, any repeated error can actually cause a deterioration in health.

Therefore, we decided to leverage computer vision technology to provide real-time feedback that helps patients improve their rehab exercise form. At the same time, reports are generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.

## What it does

Through a mobile app, patients can film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, where the doctor receives a more thorough medical analysis of how the patient's body is working together, along with a timeline of progress. The algorithm also provides suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise.

## How we built it

At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the time-series body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the 2 types of clients we support: an iOS app and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the app engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data sync. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.

## Challenges we ran into

One of the major challenges we ran into was interfacing each technology with the others. Overall, the data pipeline involves many steps that, while each is critical in itself, also span too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
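One concrete piece of the "movements of each body segment are measured" step is computing joint angles from pose keypoints. The sketch below is a hedged illustration: given hip, knee, and ankle coordinates from whatever pose model the pipeline uses, it returns the knee angle that a rep could be scored against; the example coordinates are invented.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c.

    a, b, c are (x, y) keypoints, e.g. hip, knee, ankle from a pose model.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))

# Example squat frame: hip, knee, ankle pixel coordinates (made up).
hip, knee, ankle = (310, 220), (330, 330), (325, 440)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg")  # compare against the exercise's target range
```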
losing
## Inspiration

Traveling is exciting - planning, not so much. We thought about different ways to improve the vacation search process and found that visuals were key in selecting the perfect location. Because of this, we created TravelAR, an augmented reality app that allows you to physically step into scenes of different cities, then find flight information once you have found your ideal travel destination.

## What it does

TravelAR is an iOS travel application built using Apple's ARKit. On opening the app, there is a camera view in the room. Upon tapping, an augmented reality "portal" to another city appears, where you physically walk inside another "room" and view a gallery of AR scenes from the city. If you are interested in travel information for the city, there is a pull-up information section where you can find relevant flight details and prices.

## How we built it

We built TravelAR with Apple's iOS ARKit, a Flask server hosted with Microsoft Azure, and many APIs, including the Amadeus Travel APIs, the Microsoft Bing Image Search API, and the WolframAlpha Simple API. The iOS application submits a "GET" request to our Flask server hosted in the cloud with Microsoft Azure. This Flask server takes in a city/location name and processes that string with several APIs to extract information, starting with the Amadeus Travel APIs. We hit the Amadeus endpoints with our location to gather information on popular attractions nearby, flight statistics, and other general travel information. We then feed the "popular attractions" into the Microsoft Bing Image Search API to get a list of image URLs that will be displayed in the iOS application. Furthermore, we use the WolframAlpha API to get information on the population. We combine all of this information with the AR to create a comprehensive visual display with helpful information.

## What's next for TravelAR

The future vision for TravelAR is creating 3D scenes that are almost indistinguishable from reality. Imagine stepping into the "Paris" portal and being able to view in 3D detail everything around you, the interactions of the community, and experience all of the tourist attractions - right from your home. We would also want to expand this experience by making it social. It would be great to see which of your friends have traveled to a particular location in the past and also to take inspiration from other people's travel experiences.
## Inspiration

People struggle to work effectively in a home environment, so we were looking for ways to make it more engaging. Our team came up with the idea for InspireAR because we wanted to design a web app that could motivate remote workers to be more organized in a fun and interesting way. Augmented reality seemed very fascinating to us, so we came up with the idea of InspireAR.

## What it does

InspireAR consists of the website, as well as a companion app. The website allows users to set daily goals at the start of the day. Upon completing all of their goals, the user is rewarded with a 3-D object that they can view immediately using their smartphone camera. The user can additionally combine their earned models within the companion app. The app allows the user to manipulate the objects they have earned within their home using AR technology. This means that as the user completes goals, they can build their dream office within their home using our app and AR functionality.

## How we built it

Our website is implemented using the Django web framework. The companion app is implemented using Unity and Xcode. The AR models come from echoAR. Languages used throughout the whole project consist of Python, HTML, CSS, C#, Swift and JavaScript.

## Challenges we ran into

Our team faced multiple challenges, as this was our first time ever building a website. Our team also lacked experience in creating back-end relational databases and in Unity. In particular, we struggled with orienting the AR models within our app. Additionally, we spent a lot of time brainstorming different possibilities for user authentication.

## Accomplishments that we're proud of

We are proud of our finished product, and the website is its strongest component. We were able to create an aesthetically pleasing, bug-free interface in a short period of time and without prior experience. We are also satisfied with our ability to integrate echoAR models into our project.

## What we learned

As a team, we learned a lot during this project. Not only did we learn the basics of Django, Unity, and databases, we also learned how to divide tasks efficiently and work together.

## What's next for InspireAR

The first step would be increasing the number and variety of models to give the user more freedom with the type of space they construct. We have also thought about expanding into the VR world using products such as Google Cardboard and other accessories. This would give the user more freedom to explore more interesting locations other than just their living room.
## Inspiration

A photo album. A book bound by our memories, each page a chapter in the story of our lives. Welcome to TimeFly, where we've reinvented the photo album for the digital age. With our cross-platform app, capture daily videos, photos, or audio recordings, each tagged with its location using the Google Maps API. Your memories come to life in a dynamic digital photo album that evolves with you. Immerse yourself in the sights, sounds, and places that shape your story, all easily navigated on a map. TimeFly: preserving life's precious moments.

## What it does

TimeFly captures your daily memories in a vintage photo album.

1. **Capture the Essence of Every Day:** With TimeFly, you can effortlessly record daily snippets of your life. Whether it's a video capturing a special moment, a photo freezing time, or an audio recording preserving the ambient sounds, TimeFly empowers you to encapsulate the essence of each day!
2. **Dynamic Digital Photo Album:** Say goodbye to static photo albums. TimeFly transforms your memories into a dynamic digital experience. The cross-platform app, built with Flutter, ensures a consistent and visually pleasing interface across Android and iOS devices. Your memories come to life with fluidity and grace, making the journey through your digital album a delightful experience.
3. **Location Tagging and Mapping:** Each captured moment is tagged with its location, allowing you to trace your life's journey on a map. The integration of the Google Maps API provides a comprehensive view of where each memory was made.
4. **Friends Feature:** Using the Cohere API, we seamlessly match you to friends based on keywords from your location activity. Using LLMs, we recommend friends for you to add who went to similar locations as you.

## How we built it

We created an interactive mock-up for prototyping using Figma. We utilized the Flutter framework and Dart to build a cross-platform mobile app. For the server side, we utilized Auth0, the Cohere API, and the Google Maps API. For our login and sign-in pages, we used the platform Auth0 for user authentication (such as social login). Using the Cohere API, we used LLMs to identify similarities between different users, creating a robust friend recommendation system. We intended to use the Google Maps API for location tagging based on the user's captures. To handle queries and requests from the client side, we used the Flask framework.

## Challenges we ran into

As beginner hackers competing in our first in-person hackathon, we faced major hurdles. However, we used these challenges as a learning experience!

Firstly, we did not have prior experience in mobile development. We faced a steep learning curve while learning Flutter. From understanding widget-based UI development to navigating the Dart programming language, each member of our team had to adapt to a fresh set of concepts and tools.

Secondly, we faced issues using Git to track changes in code from different teammates. With limited prior experience, we had trouble ensuring smooth branching and merging. However, we overcame these issues by adopting Git best practices in our team.

## Accomplishments that we're proud of

To begin, we are proud of building a fully interactive and responsive Figma mockup. Next, we learned how to use Flutter and Dart to build a basic version of our app.

## What we learned

During this hackathon, we learned how to create responsive prototypes and gained experience in Flutter and in using various APIs and frameworks.
Reflecting on our team, we also learned how to work together and collaborate effectively, as we did not know each other before the hackathon.

## What's next for TimeFly

In the future, we would like to fully implement the Google Maps API in our back end and develop the front end to resemble our Figma mock-up. TimeFly has endless possibilities - in the future of AR/VR, we hope to allow users to physically experience life's little moments through Digital Twins. Digital Twins act as immersive time capsules that recreate the sights, sounds, and emotions of specific moments, allowing users to relive and physically experience them in a virtual space. We hope to continue developing this project and soon expand its horizons in the field of AR/VR.
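A hedged sketch of the friends feature: the real system asks Cohere's models to judge similarity between users' location keywords, but the same idea can be shown with a plain keyword-overlap score. The names and threshold below are illustrative assumptions, and an embedding or LLM call would replace the overlap metric.

```python
def location_similarity(a_keywords: set, b_keywords: set) -> float:
    # Jaccard overlap of location keywords; an LLM or embedding model would
    # replace this with semantic similarity (e.g. "cafe" ~ "coffee shop").
    if not a_keywords or not b_keywords:
        return 0.0
    return len(a_keywords & b_keywords) / len(a_keywords | b_keywords)

def recommend_friends(me: str, users: dict, threshold: float = 0.3):
    scores = [
        (other, location_similarity(users[me], keywords))
        for other, keywords in users.items()
        if other != me
    ]
    return sorted([s for s in scores if s[1] >= threshold], key=lambda s: s[1], reverse=True)

users = {
    "ava":  {"kelvin grove park", "union station", "cn tower"},
    "liam": {"cn tower", "union station", "harbourfront"},
    "noah": {"banff", "lake louise"},
}
print(recommend_friends("ava", users))
```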
partial
## Inspiration

Why not? We wanted to try something new. Lip reading and the Symphonic Labs API were something we hadn't seen before. **We wanted to see how far we could push it!**

## What it does

Like singing? LLR is for you! Don't like singing? LLR IS STILL FOR YOU! Fight for the top position in a Hack the North lip-syncing challenge. How does it work? Very simple, very demure:

1. Choose a song. Harder songs -> more points.
2. Lip sync to a 10-15 second clip of the song. (Don't mumble!)
3. LLR reads your lips to determine how skilled you are at lip-syncing to popular songs!
4. Scan your HTN QR code to submit your score and watch as you rise in the ranking.

## How we built it

LLR is a web app built with Next.js, enabling the rapid development of reactive apps with backend capability. The Symphonic API is at the core of LLR, powering the translation from lip movement to text. We use OpenAI's embedding models to determine the lip sync's accuracy/similarity and MongoDB as the database for score data.

## Challenges we ran into

While the Symphonic API is super cool, we found it slow at times. A 10-second video took around 5 seconds to upload and 30 seconds to translate. This just wasn't fast enough for users to get immediate feedback. We looked at Symphonic Labs' demo of Mamo, and it was much faster. Digging deeper into Mamo's network traffic, we found that it used a much faster WebSocket API. By figuring out the specifications of this newfound API, we lowered our latency from 30 seconds to 7 seconds on the same 10-second clip.

## Accomplishments that we're proud of

The friends we made along the way.

## What we learned

Over the course of this hackathon, we learned from workshops and our fellow hackers. We learned how to quickly create an adaptive front end from the RWD workshop and were taught how to use network inspection to reverse engineer API processes.

## What's next for LipsLips Revolution

We hope to integrate with the Spotify API or other music services to offer a larger variety of songs. We also wish to add a penalty system based on the amount of noise made. It is, after all, lip-syncing and not just singing. We do hope to turn this into a mobile app! It'll be the next TikTok, trust…
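A sketch of the scoring step: embed the lip-read transcript and the true lyrics, compare them with cosine similarity, and weight the result by song difficulty. The embeddings are assumed to come from OpenAI's embedding models as described above; the difficulty multipliers and scale factor are invented for illustration.

```python
import math

def cosine_similarity(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

DIFFICULTY = {"easy": 1.0, "medium": 1.5, "hard": 2.0}  # assumed multipliers

def lip_sync_score(transcript_vec, lyrics_vec, difficulty: str) -> int:
    # transcript_vec / lyrics_vec are embeddings of the lip-read text and the
    # real lyrics (the app would get these from an embedding model).
    similarity = max(0.0, cosine_similarity(transcript_vec, lyrics_vec))
    return round(1000 * similarity * DIFFICULTY[difficulty])

# Toy vectors standing in for real embeddings.
print(lip_sync_score([0.9, 0.1, 0.3], [0.8, 0.2, 0.4], "hard"))
```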
## Inspiration

Some members of our team have solo travelled to cities like Toronto, Montreal, Paris, Munich, and Vienna... Even after fully preparing for unsafe situations, they still encountered many moments of uncertainty that led to anxiety in their travels. Other members walk home on a daily basis and, like many of our peers, carry a certain degree of anxiety about their own safety.

Whether you're a traveller, commuter, student, or city person, navigating has become increasingly filled with anxiety as cities become less safe. Toronto's crime rate has shown a 19.4% increase, Vancouver's 36.1%, Montreal's 27.1%, and the United States shows similar growth rates (Numbeo). As such, our project is inspired by the increasing demand for ensuring individual safety in cities. After hearing about multiple stabbing events that occurred at Waterloo, we decided to take action and bring the project to life through Hack the North. As RedFlagers, we believe that there are people out there who face the same problems we do, and we are committed to protecting everyone through our own effort.

## Our Mission

At RedFlags, our priority is to protect you, your family, and your community from dangerous events. Whether you're going to work in the morning, traveling with friends, or simply walking from place to place, RedFlags will alert you of nearby incidents and allow you to take timely action! As RedFlagers, our mission is to be the invisible bodyguard behind everyone and, ultimately, make the world a safer place.

## Let's See What You Can Do With This App!

1): Real-time Location Safety Rating, Fast And Secure

Have you ever walked into a strange neighborhood and felt concerned about your safety? Well, say goodbye to these concerns. With RedFlags, you will get real-time location safety ratings sent straight to your phone!

Showcase Demo: <https://imgur.com/a/EaIiPZx>

2): Share and Receive Safety Alerts Near You. A Powerful Network

Partnering with Twilio, HyperTrack, and CockroachDB, RedFlags can track and plot your location every 5 seconds. Therefore, if you ever encounter or see an incident, simply click the help button on the page and RedFlags will report it immediately. With your consent, your location data will be sent to the cloud servers and shared with other users nearby, as well as the local authorities and law enforcement. You will help alert all stakeholders at once so they can react promptly! Therefore, by using RedFlags, you are not only protecting yourself but EVERYONE around you as well!

3): Help You Generate the Safest Route to Your Destination

By monitoring nearby incidents using our powerful machine learning caution-detection algorithm, RedFlags can analyze and create the safest route to your destination by taking into consideration over 15+ scenarios and 32+ cases. Moreover, RedFlags is also capable of updating and optimizing your route hundreds of times per hour by utilizing HyperTrack.

Route Before Using RedFlags: <https://imgur.com/a/ojZIDqV>

Route After Using RedFlags: <https://imgur.com/a/pvHizcr>

## How We Built It

Our application utilizes a number of key technologies which enable it to provide the ideal user experience and functionality. To learn more, please visit our GitHub source code (linked below) for more information!

## Risks And Challenges

As a team, making a quality app is nothing new for us. But behind every great app are greater challenges.
Getting a machine learning algorithm with the functionality we want isn't easy. We're dealing with a lot of moving parts, from research, design, and development to usability and more. There are always setbacks when developing technology; however, we are confident that one day we will bring this app to reality. As we move forward, we will keep all users up to date with our progress and let you know if we encounter any setbacks.

## What's next for RedFlags

Our first priority after the completion of the iOS app is to get an Android version of RedFlags out as well. After that… well... we can't give anything away, but we have some rather ambitious goals for the future of this project. Stay tuned!
## Inspiration

Communication is so important in today's world. Therefore, it is unfair that it may not be accessible to some parts of the population. We wanted to provide an easy solution in order to empower these individuals and build an inclusive environment.

## What it does and how we built it

We used Symphonic Labs' voiceless API to interpret lip movements into text/transcripts for people with speech impairments, which can be visualized on an application like Google Meet through closed captions. Once transcribed, we used Google Translate's text-to-speech function to convert that text into speech, so that others can hear the intended words.

## Challenges we ran into

We ran into a couple of challenges when developing the project. Firstly, there were bugs in the Symphonic API which slowed down our progress. However, we were able to overcome this challenge, with the help of our wonderful mentors, and create a working prototype.

## Accomplishments that we're proud of

Despite multiple technical errors, we persevered through our project and successfully came up with an MVP. We collaborated effectively under time constraints and integrated feedback from mentors to constantly improve the code.

## What we learned

We took so much away from this experience. Learning the tech was definitely one aspect of it, but in the process we developed other real-world skills such as critical thinking, problem-solving, building user-centric design, collaborating, and so much more!

## What's next for VoiScribe

In the future, we plan to make it capable of processing a live feed. We also plan to incorporate a sign language predictor that can detect sign language when lip-to-speech conversion fails. Lastly, we plan to make it a Chrome extension so that it is easily accessible to the public!
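The text-to-speech half of the pipeline can be sketched with the gTTS package, which wraps Google Translate's TTS voice. Treating gTTS as the exact library is an assumption, since the writeup only says it used Google Translate's text-to-speech function.

```python
from gtts import gTTS  # pip install gTTS

def speak(transcript: str, out_path: str = "voiscribe_output.mp3") -> str:
    # transcript is the text produced by the lip-reading API for one utterance.
    tts = gTTS(text=transcript, lang="en")
    tts.save(out_path)  # the meeting client then plays this audio file
    return out_path

print(speak("Hello everyone, can you hear me now?"))
```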
partial
## Inspiration

We often encountered challenges navigating automated call systems, which left us spending excessive time on hold and feeling frustrated. These experiences made us realize how much valuable time was being wasted when we could have been focusing on more productive tasks, and we were left angry and wondering if there was a better way to optimize telecommunication systems. This frustration inspired us to develop a solution that streamlines the process, minimizing wait times and improving the overall customer experience.

## What it does

The system includes a form where clients can enter their name, phone number, and a brief description of their issue, such as requesting a refund or returning an item. Once submitted, the VAPI system automatically places a call to the provided number. A virtual assistant then guides the client through a series of questions to better understand their problem, with the VAPI system answering the questions based on the description given. The VAPI setup even handles the wait time on the client's behalf, ensuring they're connected directly to the appropriate support agent without unnecessary delays.

## How we built it

We implemented the solution using React.js for the front-end interface and VAPI for handling the automated calls. The form submission triggers the VAPI system, which initiates and manages the call flow. For version control and collaboration, we hosted the project in a GitHub repository, utilizing GitHub Actions for continuous integration and automated testing to ensure a smooth deployment process. We used Llama within Groq as the LLM, as we saw a significant difference in response time when using Groq versus OpenAI. This setup allowed us to efficiently manage code updates and track changes while leveraging VAPI's capabilities to handle real-time interactions with clients.

## Challenges we ran into

We encountered challenges managing different branches, as the primary branch frequently stalled during the process.

## Accomplishments that we're proud of

We were able to integrate the front end with the VAPI connector after clicking the submit button, which took time, but we were persistent in solving the problem.

## What we learned

We explored various functionalities within the React ecosystem, gaining a deeper understanding of tools and techniques available to enhance our applications. For instance, we learned about the @media query, which allows us to create responsive designs by applying different styles based on screen size and device characteristics. Additionally, we became proficient in utilizing VAPI to manage automated calls, including how to implement its features for efficient interaction with clients. This knowledge has equipped us to build more dynamic and user-friendly applications.

## What's next for Letmetalktohuman AI

We aim to implement a feature that recognizes and performs specific dial tones, as these are a common part of phone interactions. This feature will enhance the user experience by allowing the system to respond to different inputs appropriately.
## Inspiration

I got this idea because of hurricane Milton, currently causing devastation across Florida. The inspiration behind *Autonomous AI Society* stems from the need for faster, more efficient, and autonomous systems that can make critical decisions during disaster situations. With multiple sponsors like Fetch.ai, Groq, Deepgram, Hyperbolic, and Vapi providing powerful tools, I envisioned an intelligent system of AI agents capable of handling a disaster response chain—from analyzing distress calls to dispatching drones and contacting rescue teams. The goal was to build an AI-driven solution that can streamline emergency responses, save lives, and minimize risks.

## What it does

*Autonomous AI Society* is a fully autonomous multi-agent system that performs disaster response tasks in the following workflow:

1. **Distress Call Analysis**: The system first analyzes distress calls using Deepgram for speech-to-text and Hume AI to score distress levels. Based on the analysis, the agent identifies the most urgent calls and the city.
2. **Drone Dispatch**: The distress analyzer agent communicates with the drone agent (built using Fetch.ai) to dispatch drones to specific locations, assisting with flood and rescue operations.
3. **Human Detection**: Drones capture aerial images, which are analyzed by the human detection agent using Hyperbolic's LLaMA Vision model to detect humans in distress. The agent provides a description and coordinates.
4. **Priority-Based Action**: The drone results are displayed on a dashboard, ranked by priority using Groq. Higher-priority areas receive faster dispatches, and this is determined dynamically.
5. **Rescue Call**: The final agent, built using Vapi, places an emergency call to the rescue team. It uses instructions generated by Hyperbolic's text model to give precise directions based on the detected individuals and their location.

## How I built it

The system consists of five agents, all built using **Fetch.ai**'s framework, allowing them to interact autonomously and make real-time decisions:

* **Request-sender agent** sends the initial requests.
* **Distress analyzer agent** uses **Hume AI** to analyze calls and **Groq** to generate dramatic messages.
* **Drone agent** dispatches drones to designated areas based on the distress score.
* **Human detection agent** uses **Hyperbolic's LLaMA Vision** to process images and detect humans in danger.
* **Call rescue agent** sends audio instructions using **Deepgram**'s TTS and **Vapi** for automated phone calls.

## Challenges I ran into

* **Simulating drone movement on a Florida map**: The `lat_lon_to_pixel` function converts latitude and longitude coordinates to pixel positions on the screen. The drone starts at the center of Florida, and its movement is calculated using trigonometry: the angle to the target city is computed with `math.atan2`, and the drone moves towards the target using `sin` and `cos`. This allows placing cities and the drone accurately on the map.
* **Calibrating the map to the right coordinates**: I had to manually experiment with increasing and decreasing the coordinates to place them at the right spots on the Florida map.
* **Coordinating AI agents**: Getting agents to communicate effectively while working autonomously was a challenge.
* **Handling dynamic priorities**: Ensuring real-time analysis and updating the priority of drone dispatch based on Groq's risk assessment was tricky.
* **Integration of multiple APIs**: Each sponsor's tools had specific nuances, and integrating all of them smoothly, especially with Fetch.ai, required careful handling.

## Accomplishments that I am proud of

* Successfully built an end-to-end autonomous system where AI agents can make intelligent decisions during a disaster, from distress call analysis to rescue actions.
* Integrated cutting-edge technologies like **Fetch.ai**, **Groq**, **Hyperbolic**, **Deepgram**, and **Vapi** in a single project to create a highly functional, real-time response system.

## What I learned

* **AI for disaster response**: Building systems that leverage multimodal AI agents can significantly improve response times and decision-making in life-critical scenarios.
* **Cross-platform integration**: I learned how to seamlessly integrate various tools, from vision AI to TTS to drone dispatch, using **Fetch.ai** and sponsor technologies.
* **Working with real-time data**: Developing an autonomous system that processes data in real time provided insights into handling complex workflows.

## What's next for Autonomous AI Society

* **Scaling to more disasters**: Expanding the system to handle other types of natural disasters like wildfires or earthquakes.
* **Edge deployment**: Enabling drones and agents to run on the edge to reduce response times further.
* **Improved human detection**: Enhancing human detection with more precise models to handle low-light or difficult visual conditions.
* **Expanded rescue communication**: Integrating real-time communication with the victims themselves using Deepgram's speech technology.
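The map-simulation challenge described above reduces to two small functions: projecting latitude/longitude onto screen pixels and stepping the drone toward its target with `atan2`, `sin`, and `cos`. Here is a hedged Python sketch of that idea; the screen size, bounding box, and speed are assumed calibration values, not the project's actual numbers.

```python
import math

# Assumed screen size and a rough lat/lon bounding box around Florida;
# the real project calibrated these values by hand, as described above.
WIDTH, HEIGHT = 800, 600
LAT_MIN, LAT_MAX = 24.5, 31.0
LON_MIN, LON_MAX = -87.6, -80.0

def lat_lon_to_pixel(lat: float, lon: float):
    # Linear interpolation of coordinates into screen space (y axis flipped).
    x = (lon - LON_MIN) / (LON_MAX - LON_MIN) * WIDTH
    y = (LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * HEIGHT
    return int(x), int(y)

def step_toward(drone_xy, target_xy, speed: float = 5.0):
    # Move the drone sprite a few pixels toward the target each frame.
    dx, dy = target_xy[0] - drone_xy[0], target_xy[1] - drone_xy[1]
    if math.hypot(dx, dy) <= speed:
        return target_xy
    angle = math.atan2(dy, dx)
    return (drone_xy[0] + speed * math.cos(angle),
            drone_xy[1] + speed * math.sin(angle))

drone = lat_lon_to_pixel(27.7, -83.8)      # start near the centre of Florida
target = lat_lon_to_pixel(27.95, -82.46)   # Tampa (approximate)
for _ in range(3):
    drone = step_toward(drone, target)
    print(drone)
```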
## 💡 Inspiration

Whenever I was going through educational platforms, I always wanted to use one website to store everything. The notes, lectures, quizzes, and even the courses had to be accessed from different apps. I was inspired to create a centralized platform that acknowledges learning diversity, and to build a platform where many people can **collaborate, learn and grow.**

## 🔎 What it does

By using **AssemblyAI** and incorporating a model focused on enhancing the user experience with **speech-to-text** functionality, my application lets the user decide when to study and then choose from ML transcription with summarization and labels, studying techniques to optimize time and comprehension, and an ISR (Incremental Static Regeneration) platform which continuously provides support. **The tools used can be scaled, as the connections to APIs and CMSs are easy to scale *vertically*.**

## 🚧 How we built it

* **Frontend**: built in React but optimized with **NextJS**, with extensive use of TailwindCSS and Chakra UI.
* **Backend**: Authentication with Sanity CMS; TypeScript and GraphQL/GROQ used to power a serverless async webhook engine for an API interface.
* **Infrastructure**: All connected through **NodeJS** and implemented with *vertical* scaling technology.
* **Machine learning**: Summarization/transcription/labels from the **AssemblyAI** API, followed by an optimized study strategy built on top of them.
* **Branding, design and UI**: Elements created in Procreate and some docs in Chakra UI.
* **Test video**: Used CapCut to add and remove videos.

## 🛑 Challenges we ran into

* Implementing ISR technology in an app such as this required a lot of attention and troubleshooting. However, I made sure to complete it.
* Connecting to such capable models was hard through TypeScript and axios. However, after learning the full stack, we were ready to combat it and succeed. I actually optimized one of the algorithm's attributes with asynchronous recursion.
* Learning a query language such as **GROQ** (really similar to GraphQL) was difficult, but we were able to use it with the Sanity plugin and the **codebase** it generated automatically.

## ✔️ Accomplishments that we're proud of

Literally, the front end and the backend required technologies and frameworks that were way beyond what I knew 3 months ago. **However, I learned a lot in the space between to fuel my passion to learn.** Over the past few weeks, I planned and read the docs of **AssemblyAI**, learned **GROQ**, implemented **ISR**, and put that through a **Content Management System (CMS)**.

## 📚 What we learned

Throughout Hack the North 2022 and prior, I learned a variety of different frameworks, techniques, and APIs to build such an idea. When I started coding, I felt like I was going ablaze as the technologies were coming together like **bread and butter**.

## 🔭 What's next for SlashNotes?

While I was able to complete a considerable amount of the project in the given timeframe, there are still places where I can improve:

* Implementation in the real world! I aim to push this out to Google Cloud.
* Integration with school course systems, and improving the backend by adding more scaling and tips for user retention.
partial
## Inspiration

Bill - "Blindness is a major problem today and we hope to have a solution that takes a step toward solving it."

George - "I like engineering."

We hope our tool gives a nonzero contribution to society.

## What it does

Generates a description of a scene and reads the description aloud for visually impaired people. Leverages CLIP, recent research advances, and our own contributions to take a stab at the unsolved **generalized object detection** problem, i.e. object detection without training labels.

## How we built it

SenseSight consists of three modules: recorder, CLIP engine, and text2speech.

### Pipeline Overview

Once the user presses the button, the recorder beams the recording to the compute cluster server. The server runs a temporally representative video frame through the CLIP engine. The CLIP engine is our novel pipeline that emulates human sight to generate a scene description. Finally, the generated description is sent back to the user side, where the text is converted to audio to be read.

[Figures](https://docs.google.com/presentation/d/1bDhOHPD1013WLyUOAYK3WWlwhIR8Fm29_X44S9OTjrA/edit?usp=sharing)

### CLIP

CLIP is a model proposed by OpenAI that maps images to embeddings via an image encoder and text to embeddings via a text encoder. Similar (image, text) pairs have a higher dot product.

### Image captioning with CLIP

We can map the image embeddings to text embeddings via a simple MLP (since image -> text can be thought of as lossy compression). The mapped embedding is fed into a transformer decoder (GPT-2) that is fine-tuned to produce text. This process is called the CLIP text decoder.

### Recognition of Key Image Areas

The issue with captioning the full input image is that an image is composed of smaller images. The CLIP text decoder is trained only on images containing a single subject (e.g. ImageNet/MS COCO images). We need to extract crops of the objects in the image and then apply the CLIP text decoder. This process is called **generalized object detection**.

**Generalized object detection** is unsolved. Most object detection involves training with labels. We propose a viable approach: we sample crops in the scene, just like how human eyes dart around their view. We evaluate the fidelity of these crops, i.e. how much information/objects a crop contains, by embedding the crop using CLIP and then searching a database of text embeddings. The database is composed of noun phrases that we extracted. The database can be huge, so we rely on SCANN (Google Research), a pipeline that uses machine-learning-based vector similarity search. We then filter out all subpar crops. The remaining crops are selected using an algorithm that tries to maximize the spatial coverage of k crops. To do so, we sample many sets of k crops and select the set with the highest all-pairs distance.

## Challenges we ran into

The hackathon went smoothly, except for the minor inconvenience of getting the server and user side to run in sync.

## Accomplishments that we're proud of

The platform replicates the human visual process with decent results. A key subproblem is generalized object detection, for which we proposed an approach involving CLIP embeddings and fast vector similarity search. We also got hardware + local + server (machine learning models on an MIT cluster) + remote APIs to work in sync.

## What's next for SenseSight

A better CLIP text decoder. Crops tend to generate redundant sentences, so additional pruning is needed. We could use GPT-3 to remove the redundancy and make the speech flow more naturally.
Real-time operation can be accomplished by using real networking protocols instead of scp + time.sleep hacks. To accelerate inference on crops, we can use multiple GPUs.

## Fun Fact

The logo is generated by DALL-E :p
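A hedged sketch of the crop-selection idea described above: sample random crops, keep only those whose fidelity score passes a threshold, then pick the set of k crops with the largest all-pairs distance between crop centres. The `crop_fidelity` function here is a placeholder for the real CLIP-embedding plus nearest-noun-phrase lookup.

```python
import math
import random

def crop_fidelity(crop_box) -> float:
    # Placeholder: the real system embeds the crop with CLIP and searches a
    # noun-phrase embedding database (via SCANN) to score how much it contains.
    return (hash(crop_box) % 1000) / 1000.0

def sample_crops(img_w, img_h, n=50, size=224):
    return [(random.randint(0, img_w - size), random.randint(0, img_h - size), size, size)
            for _ in range(n)]

def centre(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def all_pairs_distance(boxes):
    centres = [centre(b) for b in boxes]
    return sum(math.dist(a, b) for i, a in enumerate(centres) for b in centres[i + 1:])

def select_crops(img_w, img_h, k=4, threshold=0.5, trials=200):
    good = [b for b in sample_crops(img_w, img_h) if crop_fidelity(b) >= threshold]
    if len(good) <= k:
        return good
    # Sample many candidate sets of k crops and keep the most spread-out one.
    return max((random.sample(good, k) for _ in range(trials)), key=all_pairs_distance)

random.seed(0)
print(select_crops(1280, 720))
```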
## 🌟**Inspiration**

There are over **7.2 million** people in the U.S. who are legally blind, many of whom rely on others to help them navigate and understand their environment. While technology holds the promise of increased independence, current solutions for the visually impaired often fall short—either lacking accessibility features like text-to-speech or offering overly complex interfaces. Optica was born out of a desire to bridge **this gap**. Our app empowers visually impaired individuals by giving them a simple, intuitive tool to perceive the world independently. Through clear, human-like descriptions of their surroundings, Optica provides not just information, but confidence, autonomy, and a deeper connection to their environment.

## 🛠️ **What it does**

Optica transforms a smartphone into a tool of empowerment for the visually impaired, enabling users to independently understand their surroundings. With the press of a button, users receive clear, succinct, vivid audio descriptions of what the phone’s camera captures. Optica doesn’t just list objects; it paints a picture—communicating the relationships between objects and creating a true sense of place. Optica enables its users to engage with their environment without outside assistance.

## 🧱 **How we built it**

We developed Optica using the ML Kit Object Detection API, which enabled us to identify and classify objects in real-time. These object classifications were then fed into a custom Large Language Model (LLM) powered by TuneStudio and Cerebras, which we trained to generate coherent, natural-language descriptions. The output from this LLM was integrated with Google Cloud’s text-to-speech API to provide users with real-time audio feedback. Throughout development, we maintained a user-first mindset, ensuring that the interface was intuitive and fully accessible.

## ⚔️ **Challenges we ran into**

Developing Optica presented numerous technical and logistical challenges, particularly when it came to integrating various cutting-edge technologies. Deploying our object detection model in Android Studio took longer than anticipated, which limited the time we had to refine other components. Communicating between our computer vision model and TuneStudio’s LLM proved to be complex, requiring us to overcome issues with API integration and SDK compatibility. Additionally, managing the project across GitHub repositories introduced git-related challenges, particularly when merging contributions from different team members. However, these difficulties only strengthened our resolve and pushed us to learn new skills—especially in debugging, collaboration, and working across frameworks. Mentors played a crucial role in helping us push through these roadblocks, and the experience has made us better engineers and problem solvers!

## 🎖️ **Our Accomplishments**

We are incredibly proud of our **integration of computer vision and natural language processing**, a combination that allows Optica to go beyond standard object recognition! Starting from a basic CV-based idea, we pushed the boundaries by incorporating an LLM to enhance the descriptions and truly serve the visually impaired community. None of us had experience with these APIs and learned so much on this journey! Our ability to bring together these powerful technologies to create a tool that can have a tangible, positive impact on people’s lives is an accomplishment we hold in high regard. Successfully deploying this onto a user-friendly platform was a milestone we are excited about.
## 📖 **What we learned**

Although we might have learned new languages, APIs, and git commands on a technical level, the lessons we've learned **go beyond the pages**:

* Setbacks are an inevitable part of the creative process, and staying adaptable allows you to turn challenges into opportunities!
* Starting without all the answers taught us that taking the first step is crucial for personal and project development. We learned to not get ahead of ourselves and take it slow!
* Reaching out for help from our mentors showed us the power of collaboration and shared knowledge. We would like to specifically mention Nifaseth and Harsh Deep for their help!

## ⏭️ **What's next for Optica**

We plan to continually enhance the app by improving the accuracy and breadth of the image classification model, training it on more diverse datasets that include non-conventional settings and real-world complexity. Additionally, we aim to incorporate advanced depth sensing with Google AR’s depth API to provide even more nuanced scene descriptions. On the accessibility front, we will refine the voice activation and gesture-based navigation to make the app even more intuitive. We also look forward to partnering with organizations and sponsors, like Cerebras and TuneStudio, to ensure that **Optica continues to push the boundaries of AI for social good**, helping us realize our vision of full independence for the visually impaired.
## Inspiration

Do you wish something could really help you get up in the morning?

## What it does

Sleepful uses machine vision to assist your sleep. It tracks whether you have actually gotten out of bed.

## How we built it

OpenCV, MEAN, Microsoft Azure

## Accomplishments that we're proud of

An application that can track your sleep live.

## What's next for Sleepful

Sleepful has huge potential to be an assistant for your sleep and mornings. We hope to implement features to track the quality of your sleep and guide you through your morning routine.
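A hedged sketch of how "did you actually get out of bed" could be checked with OpenCV: compare each frame against a reference capture of the empty bed region and treat the bed as occupied while the difference stays large. The camera index, region, and thresholds are assumptions, not Sleepful's actual values.

```python
import time
import cv2

BED_ROI = (100, 150, 400, 300)   # x, y, w, h of the bed in the frame (assumed)
PIXEL_DIFF_THRESHOLD = 25
OCCUPIED_FRACTION = 0.15          # >15% changed pixels means someone is in bed

def bed_region(frame):
    x, y, w, h = BED_ROI
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray[y:y + h, x:x + w], (21, 21), 0)

cap = cv2.VideoCapture(0)
ok, first = cap.read()            # captured while the bed is empty
empty_bed = bed_region(first)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(empty_bed, bed_region(frame))
    _, mask = cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask) / mask.size
    print("still in bed" if changed > OCCUPIED_FRACTION else "out of bed")
    time.sleep(0.5)

cap.release()
```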
winning
## Inspiration

Thought it would be neat.

## What it does

DashDot allows you to type a message on your Android phone which gets transmitted as Morse code to a Pebble smartwatch, where it can be read by the user by feeling the watch's vibrations. The Pebble app also allows you to respond to these messages using Morse code (and the 3 side buttons), which is translated back into words by the other user's Android phone.

The Pebble app has 2 modes: one displays the messages as text on the screen, and the other appears as a clock, giving the appearance that you aren't receiving messages at all. Both modes vibrate the messages in Morse code, meaning reading them off the screen is optional.

The Android app allows the user to type a message to be sent to the connected Pebble, and also shows a transcript of the conversation so far. It also allows the user to activate the Pebble app remotely to begin sending messages immediately.

## Controls

The top button places a dot (or 'dit'), the bottom button places a dash (or 'dah'), and the center button sets the current set of dashes and dots as the next letter (it also does this automatically after 4 button presses, since Morse code letters have at most 4 symbols). Double clicking the top button finishes the current letter and sends the previously completed letters to the paired cellphone. Triple clicking the middle button switches between reading mode and clock mode.

## How I built it

Android Studio and PebbleCloud, testing on a real Pebble.

## Challenges I ran into

I was originally going to provide the ability to tap the Pebble for Morse code input, but wasn't satisfied with the accuracy. I also ran into a few memory leaks, which I eventually fixed.
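The encoding half of DashDot boils down to a lookup table plus vibration timings. Below is a hedged Python sketch of that translation (the real apps are written for Android and the Pebble SDK); the millisecond durations are illustrative assumptions.

```python
MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",    "f": "..-.",
    "g": "--.",  "h": "....", "i": "..",   "j": ".---", "k": "-.-",  "l": ".-..",
    "m": "--",   "n": "-.",   "o": "---",  "p": ".--.", "q": "--.-", "r": ".-.",
    "s": "...",  "t": "-",    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-",
    "y": "-.--", "z": "--..",
}
REVERSE = {code: letter for letter, code in MORSE.items()}

DIT_MS, DAH_MS, GAP_MS = 100, 300, 100   # assumed vibration timings

def to_vibration_pattern(message: str) -> list:
    """Alternating [vibrate, pause, vibrate, ...] durations for the watch."""
    pattern = []
    for ch in message.lower():
        code = MORSE.get(ch)
        if not code:
            continue
        for symbol in code:
            pattern += [DIT_MS if symbol == "." else DAH_MS, GAP_MS]
        pattern[-1] += 2 * GAP_MS        # longer pause between letters
    return pattern

def decode(codes: list) -> str:
    """Turn button-entered codes like ['....', '..'] back into text."""
    return "".join(REVERSE.get(code, "?") for code in codes)

print(to_vibration_pattern("hi")[:8])
print(decode(["....", ".."]))
```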
## Inspiration

We, as all students do, frequently and unwillingly fall to the powers of procrastination. This invention is for when the cute little Pomodoro and Screen Time reminders are a tad too easy to ignore.

## What it does

The device sits in a predetermined area that you would want to avoid in order to focus, for example beside your bed, on the couch, or in front of your gaming PC/console. If it detects a person there, it will aim at you and fire projectiles.

## How we built it

We built it by integrating a variety of technologies. In terms of the front end, it works with an Android app developed using the Qualcomm HDK 8450, which has autonomous controls such as connecting to the projectile gun and turning it on and off. The app also takes care of the ML computer vision needed to detect people and locate where they are, via Google's ML Kit. It then sends this information wirelessly via Bluetooth to an Arduino, which is hooked up to two motors that control the aiming and firing of the projectile. The angle at which the projectile launcher turns is approximated with the user sitting 50-100 cm away.

## Challenges we ran into

We ran into multiple challenges during the project. Firstly, none of us had any experience developing an Android app or using an HDK 8450, so we had a lot of ground to make up in order to start developing the app. Secondly, we found the Bluetooth module connection quite difficult to get working, as the official documentation seemed to be quite limited, especially for beginners to Android development.

## Accomplishments that we're proud of

One thing we are extremely proud of is the number of different systems and devices we got working together smoothly. From computer vision to Bluetooth protocols, to Arduino programming and mechanical design, this project brought together a whole variety of fields, and we are proud to have been able to cover all of those bases as smoothly as we did.

## What we learned

As beginners to Android development, we gained a plethora of knowledge on how to build, develop, and deploy a working Android application. We also gained experience working with Arduinos, especially the communication aspects, including sending and receiving information via Bluetooth. Finally, we learned about deploying a working ML model in a solution of our own.

## What's next for Failure Management 101

We would like to add movement by putting the whole mechanism on wheels to give it a greater degree of freedom. We also had plans for voice control, as well as plans for the robot to have access to your laptop in order to determine whether the user is on non-productive websites. Finally, from a more realistic and practical standpoint, we could envision robots like these helping with patrolling/guard duty as an aid to police, although perhaps not firing paper projectiles anymore.
## Inspiration

Our inspiration for the project stems from our experience with elderly and visually impaired people, and from understanding that there is a pressing need for a solution that integrates AI to bring a new level of convenience and safety to modern-day navigation tools.

## What it does

IntelliCane first employs an ultrasonic sensor to identify any object, person, or thing within a 2-meter range; when that happens, a piezo buzzer alarm alerts the user. Simultaneously, a camera identifies the object in front of the user and provides them with voice feedback describing what is in front of them.

## How we built it

The project first employs an ultrasonic sensor to identify an object, person, or thing close by. The piezo buzzer is then turned on and alerts the user. Then the PiCamera on the Raspberry Pi 5 identifies the object. We trained a CNN on the data to improve the accuracy of identifying objects. From there, this data is passed to a text-to-speech function which provides voice feedback describing the object in front of the user. The project was built on the YOLOv8 platform.

## Challenges we ran into

We ran into multiple problems during our project. For instance, we initially tried to use TensorFlow, but due to the incompatibility of our version of Python with the Raspberry Pi 5, we switched to the YOLOv8 platform.

## Accomplishments that we're proud of

There are many accomplishments we are proud of, such as successfully creating the ultrasonic piezo-buzzer system on the Arduino and successfully mounting everything onto the PVC pipe. However, we are most proud of developing a CNN that accurately identifies objects and provides voice feedback describing the object in front of the user.

## What we learned

We learned more about developing ML algorithms and became more proficient with the Raspberry Pi IDE.

## What's next for IntelliCane

Next steps for IntelliCane include integrating GPS modules and Bluetooth modules to add another level of convenience to navigation tools.
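A hedged sketch of the proximity-alert loop: measure distance with an HC-SR04-style ultrasonic sensor and drive the piezo buzzer when something is within 2 metres. The team runs this part on an Arduino; the Python/RPi.GPIO version below is an assumed equivalent with made-up pin numbers.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18          # assumed BCM pin numbers
ALERT_DISTANCE_CM = 200                  # the 2-metre range from the writeup

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def measure_distance_cm() -> float:
    # 10 microsecond trigger pulse, then time the echo and convert to centimetres.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2    # speed of sound: 343 m/s

try:
    while True:
        distance = measure_distance_cm()
        GPIO.output(BUZZER, distance < ALERT_DISTANCE_CM)
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```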
losing
## Inspiration As most of our team became students here at the University of Waterloo, many of us had our first experience living in a shared space with roommates. Without the constant nagging by parents to clean up after ourselves that we had at home, and with some slightly unorganized roommates, many shared spaces in our residences and apartments, like kitchen counters, became cluttered and unusable. ## What it does CleanCue is a hardware product that tracks clutter in shared spaces using computer vision. By tracking unused items taking up valuable counter space and sending speech and notification reminders, CleanCue encourages roommates to clean up after themselves. This product promotes individual accountability and respect, repairs relationships between roommates, and fills the need some of us have for nagging and reminders from parents. ## How we built it The current iteration of CleanCue is powered by a Raspberry Pi with a Camera Module sending a video stream to an Nvidia CUDA enabled laptop/desktop. The laptop is responsible for running our OpenCV object detection algorithms, which enable us to log how long items are left unattended and send appropriate reminders to a speaker or notification services. We used Cohere to create unique messages with personality to make it feel more like a maternal figure. Additionally, we used some TTS APIs to emulate the voice of a mother. ## Challenges we ran into Our original idea was to create a more granular product which would customize decluttering reminders based on the items detected. For example, this version of the product could detect perishable food items and make reminders to return items to the fridge to prevent food spoilage. However, the pre-trained OpenCV models that we used did not have enough variety in trained items and precision to support this goal, so we settled for this simpler version for this limited hackathon period. ## Accomplishments that we're proud of We are proud of our planning throughout the event, which allowed us to both complete our project while also enjoying the event. Additionally, we are proud of how we broke down our tasks at the beginning and identified what our MVP was, so that when there were problems, we knew what our core priorities were. Lastly, we are glad we submitted a working project to Hack the North!!!! ## What we learned The core frameworks that our project is built out of were all new to the team. We had never used OpenCV or Taipy before, but had a lot of fun learning these tools. We also learned how to create improvised networking infrastructure to enable hardware prototyping in a public hackathon environment. Though not on the technical side, we also learned the importance of re-assessing throughout the project whether our solution was actually solving the problem we intended to solve, and of making necessary adjustments based on our priorities. Also, this was our first hardware hack! ## What's next for CleanCue We definitely want to improve our prototype to be able to more accurately describe a wide array of kitchen objects, enabling us to tackle more important issues like food waste prevention. Further, we also realized that the technology in this project can also aid individuals with dementia. We would also love to explore more in the mobile app development space. We would also love to use this to flag any dangers within the kitchen, for example, a young child getting too close to the stove, or an open fire left on for a long time.
Additionally, we had constraints based on hardware availability, and ideally, we would love to use an Nvidia Jetson based platform for hardware compactness and flexibility.
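As a rough illustration of the "how long has this been sitting here" logic described above, here is a small Python sketch. The detector call is abstracted away (our actual pipeline uses pre-trained OpenCV models), and the 15-minute threshold and reminder wording are illustrative assumptions.

```python
import time

UNATTENDED_SECONDS = 15 * 60   # assumed threshold before the nagging starts
first_seen: dict[str, float] = {}

def remind(item: str) -> None:
    """Stand-in for the Cohere-generated, mom-style voice reminder."""
    print(f"Sweetie, your {item} has been on the counter long enough...")

def update(detected_labels: set[str]) -> None:
    now = time.time()
    # Start a timer the first time an item shows up on the counter.
    for label in detected_labels:
        first_seen.setdefault(label, now)
    # Forget items that were put away.
    for label in list(first_seen):
        if label not in detected_labels:
            del first_seen[label]
    # Nag about anything that has overstayed its welcome.
    for label, since in first_seen.items():
        if now - since > UNATTENDED_SECONDS:
            remind(label)

# Example: whatever the object detector returns for the current frame.
update({"cup", "plate"})
```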
## Inspiration One day, one of our teammates was throwing out garbage in his apartment complex and the building manager made him aware that certain plastics he was recycling were soft plastics that can't be recycled. According to a survey commissioned by Covanta, “2,000 Americans revealed that 62 percent of respondents worry that a lack of knowledge is causing them to recycle incorrectly (Waste360, 2019).” We also found that “Because the reward [and] the repercussions for recycling... aren't necessarily immediate, it can be hard for people to make the association between their daily habits and those habits' consequences (HuffingtonPost, 2016)”. From this research, we found that a lack of knowledge or awareness can be detrimental not only to personal life, but also to meeting government societal, environmental, and sustainability goals. ## What it does When an individual is unsure of how to dispose of an item, "Bin it" allows them to quickly scan the item and find out not only how to sort it (recycling, compost, etc.) but also additional information regarding potential re-use and long-term impact. ## How I built it After brainstorming before the event, we built it by splitting roles into backend, frontend, and UX design/research. We concepted and prioritized features as we went based on secondary research, experimenting with code, and interviewing a few hackers at the event about their recycling habits. We used the Google Vision API for the object recognition / scanning process. We then used Vue and Flask for our development framework. ## Challenges I ran into We ran into challenges with deployment of the application. Getting set up was a challenge that was slowly overcome by our backend developers getting the team set up and troubleshooting. ## Accomplishments that I'm proud of We were able to work as a team towards a goal, learn, and have fun! We were also able to work with multiple Google APIs. We completed the core feature of our project. ## What I learned Learning to work with people in different roles was interesting. So was designing and developing from a technical standpoint, such as designing for a mobile web UI, deploying an app with Flask, and working with Google APIs. ## What's next for Bin it We hope to review feedback and save this as a great hackathon project to potentially build on, and apply our learnings to future projects.
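Here is a minimal sketch of the scanning step, assuming the google-cloud-vision client library with credentials already configured. The label-to-bin mapping at the end is a toy illustration, not our actual sorting rules.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def classify_item(image_path: str) -> str:
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    labels = [label.description.lower() for label in response.label_annotations]
    # Toy mapping from Vision labels to disposal bins.
    if any("plastic" in label for label in labels):
        return "recycling (check for soft plastics first)"
    if any(label in ("food", "fruit", "vegetable") for label in labels):
        return "compost"
    return "garbage"

print(classify_item("item.jpg"))
```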
## Inspiration It's pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that, you'll throw away food that used to be perfectly good. But that's only how much food you and I are wasting. What about everything that Walmart or Costco trashes on a day-to-day basis? Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year. About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year. But if there were a solution that could ensure that no food would be needlessly wasted, that would change the world. ## What it does PantryPuzzle scans in images of food items, extracts their expiration dates, and adds them to an inventory of items that users can manage. When food nears expiration, it notifies users to incentivize action to be taken. The app suggests actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preference. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers. ## How we built it We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of the Google Vision API's object detection and optical character recognition (OCR), respectively. For the recipe recommendation feature, we used OpenAI's GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used the Nominatim OpenStreetMap service. ## Challenges we ran into Getting React to display properly; storing multiple values in the database at once (food item, expiration date); displaying all Firebase elements (doing a proof of concept with console.log); donated food being displayed before even clicking the button (fixed by using a function for onClick); getting the location of the user to be accessed and stored, not just longitude/latitude; needing to log the day a food item was added; deleting an item when it expires; syncing my stash with donations (we don't want an item listed if the user no longer wants to donate it); deleting the food from Firebase (tricky because of the document ID); and predicting when non-labeled foods expire (using OpenAI). ## Accomplishments that we're proud of * We were able to get a good computer vision algorithm that is able to detect the type of food and a very accurate expiry date. * Integrating the API that helps us figure out our location from the latitudes and longitudes. * Used a scalable database like Firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision, and efficient CRUD operations. ## What we learned We learned how big of a problem food waste disposal is, and were surprised to find out that so much food is being thrown away. ## What's next for PantryPuzzle We want to add user authentication, so every user in every home and grocery store has access to their personal pantry, and also maintains access to the global donations list to search for food items others don't want.
We also want to integrate this app with the Internet of Things (IoT), so refrigerators can come built in with this product to detect food items and their expiry dates. We also want to add a feature where, if the expiry date is not visible, the app can predict what the likely expiration date could be using computer vision (texture and color of food) and generative AI.
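Here is a rough sketch of the expiration-date step: OCR the label with the Vision API and pull out anything that looks like a date. The regex and the date formats it accepts are simplified assumptions, not our full tuned pipeline.

```python
import re
from google.cloud import vision

client = vision.ImageAnnotatorClient()
DATE_PATTERN = re.compile(r"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4})\b")

def extract_expiration(image_path: str):
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)   # OCR over the whole label
    if not response.text_annotations:
        return None
    full_text = response.text_annotations[0].description
    match = DATE_PATTERN.search(full_text)
    return match.group(1) if match else None

print(extract_expiration("milk_carton.jpg"))
```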
partial
## Inspiration The main inspiration for the idea was our own personal experience of being in the middle of a live lecture or watching a pre-recorded Zoom lecture and having a question about the topic. The process of googling a result on the side while the lecture is going on is usually a herculean task that involves managing multiple tabs and almost always falling behind on the notes. Our website solves this problem. ## What it does Zazu is a video call platform designed for teachers and students. It allows students to have access to an AI TA right in their video call, so that they can ask questions to the TA without disturbing the teacher and the flow of the lecture. The teacher can also use a slew of AI-powered features, such as generating quiz questions on demand to ask the students and check engagement. The teacher can also generate polls in lecture simply by asking Zazu to "generate poll", and a poll will be sent to all students instantly. At the end of the lecture, once a student leaves the call, a summary of the lecture is created with bullet-point notes for the student, essentially eliminating the need for students to manually take notes. The video call platform also supports all the features of Zoom and has AI features built on top of it. ## How we built it The front end of the website was built using the Dolby.io framework and React, which allowed us to quickly implement most of the video call features. The backend is powered by Flask, where we make multiple OpenAI requests to their Davinci model to get our results. We use Firebase Cloud Firestore to store our data and socket.io to communicate data between our server and client. We hosted our server on [server]. Most of the development revolved around building out features in React for the front end and building out API endpoints in Python on the backend. We heavily use OpenAI's Davinci LLM, and a lot of time was spent prompt engineering to get the data that we require for each of our different features. ## Challenges we ran into The main challenge we ran into was getting the data from the Davinci LLM in a format that was consistent with every API call. We had to spend a large part of our time designing specific prompts for the LLM so that we get the response in the same format every time, regardless of the topic of the lecture. We also had to spend a decent chunk of time on parsing the transcript of the lecture and filtering out meaningless remarks and noise in the data. To solve this, we had to use NLP filters in the backend and models to periodically summarize the transcript, which makes it easier to send to the LLM to get results. ## Accomplishments that we're proud of We are extremely proud of the verbal speech-to-text commands that we built to help support teachers during their lectures. By simply invoking the command "generate quiz", a multiple choice question about the topic of the lecture is generated by the Davinci LLM and instantly sent to all the students on the call. This is powered by a slew of APIs and frameworks, which is extremely impressive since our team is made up of mostly beginner hackers and this is the first hackathon for most of us. ## What we learned The main concept we learned was how to build an end-to-end application with a complete feature set. We also got a significant amount of experience in using OpenAI's APIs and the Dolby.io framework to create a front-end video call app.
A large portion of the time was dedicated to bug fixing, and we learned a lot about version control and how to build a large project in parallel, with different members contributing to different features. ## What's next for Zazu Zazu has the potential to become a great tool for educators around the world. The immediate next steps are to expand the feature set and make it complete with all the features of Zoom. We would also like to add support for using our feature set not only on live video calls but also on pre-recorded lectures. Then we would ideally publish the app and market it to teachers, specifically college professors, where this would largely come into play.
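To illustrate the prompt-engineering trick described above, here is a condensed Python sketch that asks the Davinci model for a quiz question in a fixed JSON shape. It assumes the pre-1.0 openai Python library and the text-davinci-003 completion model; the exact prompt wording and field names are illustrative, not our production prompts.

```python
import json
import openai

openai.api_key = "YOUR_KEY"   # placeholder

def generate_quiz(transcript_summary: str) -> dict:
    prompt = (
        "You are a teaching assistant. Based on the lecture summary below, "
        "write one multiple-choice question.\n"
        "Respond with ONLY valid JSON in exactly this shape:\n"
        '{"question": "...", "choices": ["A", "B", "C", "D"], "answer": 0}\n\n'
        f"Lecture summary:\n{transcript_summary}\n"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=256,
        temperature=0.3,      # low temperature helps keep the format stable
    )
    return json.loads(response.choices[0].text.strip())

print(generate_quiz("Today we covered Newton's three laws of motion."))
```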
## 💡 Inspiration You have another 3-hour online lecture, but you're feeling sick and your teacher doesn't post any notes. You don't have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind: "Will I ever pass this course?" If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, miss class, have a rough home environment, or just want to get ahead in their studies. We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit. ## 🔍 What it does First, our AI-powered summarization engine creates a set of live notes based on the current lecture. Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos! Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind. ## ⭐ Feature List * Dashboard with all your notes * Summarizes your lectures automatically * Select/Highlight text from your online lectures * Organize your notes with an intuitive UI * Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime * Text simplification, definitions, and synonyms anywhere on the web * DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which, through parallel and distributed computation, ran 5 to 10 times faster. ## ⚙️ Our Tech Stack * Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API * Web Application: Chakra UI + React.js, Next.js, Vercel * Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js * Infrastructure: Firebase/Firestore ## 🚧 Challenges we ran into * Completing our project within the time constraint * There were many APIs to integrate, which forced us to spend a lot of time debugging * Working with the Google Chrome Extension API, which we had never worked with before ## ✔️ Accomplishments that we're proud of * Learning how to work with Google Chrome Extensions, which was an entirely new concept for us. * Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use. ## 📚 What we learned * The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out! * Working on a project you can relate to helps a lot with motivation * Chakra UI is legendary and a lifesaver * The Chrome Extension API is very difficult, did we mention that already? ## 🔭 What's next for AcadeME? * Implementing a language translation toggle to help international students * Note Encryption * Note Sharing Links * A Distributive Quiz mode, for online users!
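As a minimal sketch of the summarization core, here is what a single BART call looks like with the Hugging Face transformers library and the public facebook/bart-large-cnn checkpoint. In AcadeME these calls are fanned out over DCP workers; the length parameters below are illustrative assumptions.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_chunk(lecture_text: str) -> str:
    result = summarizer(lecture_text, max_length=60, min_length=20, do_sample=False)
    return result[0]["summary_text"]

transcript_chunk = (
    "In this lecture we introduced binary search trees, discussed how insertion "
    "and lookup both take O(log n) time on balanced trees, and worked through "
    "an example of why an unbalanced tree degrades to O(n)."
)
print(summarize_chunk(transcript_chunk))
```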
## Inspiration We're students, and that means one of our biggest inspirations (and some of our most frustrating problems) come from a daily ritual - lectures. Some professors are fantastic. But let's face it, many professors could use some constructive criticism when it comes to their presentation skills. Whether it's talking too fast, speaking too *quietly* or simply not paying attention to the real-time concerns of the class, we've all been there. **Enter LectureBuddy.** ## What it does Inspired by lackluster lectures and little to no interfacing time with professors, LectureBuddy allows students to signal their instructors with teaching concerns on the spot while also providing feedback to the instructor about the mood and sentiment of the class. By creating a web-based platform, instructors can create sessions from the familiarity of their smartphone or laptop. Students can then provide live feedback to their instructor by logging in with an appropriate session ID. At the same time, a camera intermittently analyzes the faces of students and provides the instructor with a live average mood for the class. Students are also given a chat room for the session to discuss material and ask each other questions. At the end of the session, the Lexalytics API is used to parse the chat room text and provide the instructor with the average tone of the conversations that took place. Another important use for LectureBuddy is as an alternative to tedious USATs or other instructor evaluation forms. Currently, teacher evaluations are completed at the end of terms, and students are frankly no longer interested in providing critiques, as any change will not benefit them. LectureBuddy's live feedback and student interactivity provide the instructor with consistent information. This can allow them to adapt their teaching styles and change topics to better suit the needs of the current class. ## How I built it LectureBuddy is a web-based application; most of the developing was done in JavaScript, Node.js, HTML/CSS, etc. The Lexalytics Semantria API was used for parsing the chat room data, and Microsoft's Cognitive Services API for emotions was used to gauge the mood of a class. Other smaller JavaScript libraries were also utilised. ## Challenges I ran into The Lexalytics Semantria API proved to be a challenge to set up. The out-of-the-box JavaScript files came with some errors, and after spending a few hours with mentors troubleshooting, the team finally managed to get the node.js version to work. ## Accomplishments that I'm proud of Two first-time hackers contributed some awesome work to the project! ## What I learned "I learned that json is a javascript object notation... I think" - Hazik "I learned how to work with node.js - I mean I've worked with it before, but I didn't really know what I was doing. Now I sort of know what I'm doing!" - Victoria "I should probably use bootstrap for things" - Haoda "I learned how to install mongoDB in a way that almost works" - Haoda "I learned some stuff about Microsoft" - Edwin ## What's next for Lecture Buddy * Multiple Sessions * Further in-depth analytics from an entire semester's worth of lectures * Pebble / Wearable integration! @Deloitte See our video pitch!
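For illustration, here is a hedged Python sketch of the class-mood call, assuming the legacy Azure Face "detect" endpoint with the emotion attribute (that endpoint and its response shape have since been restricted, so treat the URL, key handling, and field names as assumptions rather than the exact calls in our build).

```python
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/face/v1.0/detect"  # placeholder
KEY = "YOUR_AZURE_KEY"   # placeholder

def class_mood(image_bytes: bytes) -> dict:
    response = requests.post(
        ENDPOINT,
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    faces = response.json()
    totals: dict[str, float] = {}
    for face in faces:
        for emotion, score in face["faceAttributes"]["emotion"].items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    # Average each emotion over every face the camera picked up.
    return {k: v / max(len(faces), 1) for k, v in totals.items()}
```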
partial
## 💡 Inspiration Generation Z is all about renting - buying land is simply out of our budgets. But the tides are changing: with Pocket Plots, an entirely new generation can unlock the power of land ownership without a budget. Traditional land ownership goes like this: you find a property, spend weeks negotiating a price, and secure a loan. Then, you have to pay out agents, contractors, utilities, and more. Next, you have to go through legal documents, processing, and more. All while you are shelling out tens to hundreds of thousands of dollars. Yuck. Pocket Plots handles all of that for you. We, as a future LLC, buy up large parcels of land, stacking over 10 acres per purchase. Under the company name, we automatically generate internal contracts that outline a customer's rights to a certain portion of the land, defined by 4 coordinate points on a map. Each parcel is then divided into individual plots ranging from 1,000 to 10,000 sq ft, and only one person can own the contract to each plot. This is what makes us fundamentally novel: we simulate land ownership without needing to physically create deeds for every person. This skips all the costs and legal details of creating deeds and gives everyone the opportunity of land ownership. These contracts run for 99 years and are infinitely renewable, so when it's time to sell, you'll have buyers flocking to buy from you first. You can try out our app here: <https://warm-cendol-1db56b.netlify.app/> (AI features are available locally. Please check our Github repo for more.) ## ⚙️What it does ### Buy land like it's ebay: ![](https://i.imgur.com/PP5BjxF.png) We aren't just a business: we're a platform. Our technology allows for fast transactions, instant legal document generation, and resale of properties - it's the world's first ebay-style land marketplace. We're not just a business. We've got what it takes to launch your next biggest investment. ### Pocket as a new financial asset class... In fintech, the last boom was in blockchain. But after FTX and the bitcoin crash, cryptocurrency has been shaken up: blockchain is no longer the future of finance. Instead, the market is shifting into tangible assets, and at the forefront of this is land. However, land investments have been gatekept by the wealthy, leaving little opportunity for an entire generation. That's where Pocket comes in. By following our novel perpetual-lease model, we sell contracts to tangible, buildable plots of land on our properties for pennies on the dollar. We buy the land, and you buy the contract. It's that simple. We take care of everything legal: the deeds, easements, taxes, logistics, and costs. No more expensive real estate agents, commissions, and hefty fees. With the power of Pocket, we give you land for just $99, no strings attached. With our resell marketplace, you can sell your land the exact same way we sell ours: on our very own website. We handle all logistics, from the legal forms to the system data - and give you 100% of the sale value, with no seller fees at all. We will even run ads for you, giving your investment free attention. So how much return does a Pocket Plot bring? Well, once a parcel sells out its plots, it's gone - whoever wants to buy land from that parcel has to buy from you. We've seen plots sell for 3x the original investment value in under one week. Now how insane is that? The tides are shifting, and Pocket is leading the way.
### ...powered by artificial intelligence **Caption generation** *Pocket Plots* scrapes data from sites like Landwatch to find plots of land available for purchase. Most land postings lack insightful descriptions of their plots, making it hard for users to find the exact type of land they want. With *Pocket Plots*, we transformed links into images, into helpful captions. ![](https://i.imgur.com/drgwbft.jpg) **Captions → Personalized recommendations** These captions also inform the user's recommended plots and what parcels they might buy. Along with inputting preferences like desired price range or size of land, the user can submit a text description of what kind of land they want. For example, do they want a flat terrain or a lot of mountains? Do they want to be near a body of water? This description is compared with the generated captions to help pick the user's best match! ![](https://i.imgur.com/poTXYnD.jpg) ### **Chatbot** Minute Land can be confusing. All the legal confusion, the way we work, and how we make land so affordable makes our operations a mystery to many. That is why we developed a supplemental AI chatbot that has learned our system and can answer questions about how we operate. *Pocket Plots* offers a built-in chatbot service to automate question-answering for clients with questions about how the application works. Powered by openAI, our chat bot reads our community forums and uses previous questions to best help you. ![](https://i.imgur.com/dVAJqOC.png) ## 🛠️ How we built it Our AI focused products (chatbot, caption generation, and recommendation system) run on Python, OpenAI products, and Huggingface transformers. We also used a conglomerate of other related libraries as needed. Our front-end was primarily built with Tailwind, MaterialUI, and React. For AI focused tasks, we also used Streamlit to speed up deployment. ### We run on Convex We spent a long time mastering Convex, and it was worth it. With Convex's powerful backend services, we did not need to spend infinite amounts of time developing it out, and instead, we could focus on making the most aesthetically pleasing UI possible. ### Checkbook makes payments easy and fast We are an e-commerce site for land and rely heavily on payments. While stripe and other platforms offer that capability, nothing compares to what Checkbook has allowed us to do: send invoices with just an email. Utilizing Checkbook's powerful API, we were able to integrate Checkbook into our system for safe and fast transactions, and down the line, we will use it to pay out our sellers without needing them to jump through stripe's 10 different hoops. ## 🤔 Challenges we ran into Our biggest challenge was synthesizing all of our individual features together into one cohesive project, with compatible front and back-end. Building a project that relied on so many different technologies was also pretty difficult, especially with regards to AI-based features. For example, we built a downstream task, where we had to both generate captions from images, and use those outputs to create a recommendation algorithm. ## 😎 Accomplishments that we're proud of We are proud of building several completely functional features for *Pocket Plots*. We're especially excited about our applications of AI, and how they make users' *Pocket Plots* experience more customizable and unique. ## 🧠 What we learned We learned a lot about combining different technologies and fusing our diverse skillsets with each other. 
We also learned a lot about using some of the hackathon's sponsor products, like Convex and OpenAI. ## 🔎 What's next for Pocket Plots We hope to expand *Pocket Plots* to have a real user base. We think our idea has real potential commercially. Supplemental AI features also provide a strong technological advantage.
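To make the caption-to-preference matching concrete, here is a simplified Python sketch: embed the user's free-text land preferences and every generated parcel caption, then rank parcels by cosine similarity. It assumes the pre-1.0 openai Python client and the text-embedding-ada-002 model; the parcel IDs and captions are made-up examples.

```python
import numpy as np
import openai

openai.api_key = "YOUR_KEY"   # placeholder

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def rank_parcels(preference: str, captions: dict[str, str]) -> list[str]:
    pref_vec = embed(preference)
    scores = {}
    for parcel_id, caption in captions.items():
        cap_vec = embed(caption)
        scores[parcel_id] = float(
            pref_vec @ cap_vec /
            (np.linalg.norm(pref_vec) * np.linalg.norm(cap_vec))
        )
    # Highest cosine similarity first.
    return sorted(scores, key=scores.get, reverse=True)

captions = {
    "parcel-17": "Flat, grassy plot near a small creek with mountain views.",
    "parcel-42": "Steep wooded hillside, no water access.",
}
print(rank_parcels("I want flat land near water", captions))
```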
## Inspiration 💡 **Due to rising real estate prices, many students are failing to find proper housing, and many landlords are failing to find good tenants**. Students looking for houses often have to hire an agent to get a nice place with a decent landlord. The same goes for house owners, who need to hire agents to get good tenants. *The irony is that the agent is motivated purely by commission and not by the wellbeing of either of the two.* Lack of communication is another issue, as most things are conveyed by a middle person. It often leads to miscommunication between the house owner and the tenant, as they interpret the same rent agreement differently. Expensive and time-consuming background checks of potential tenants are also prevalent, as landowners try to use every tool at their disposal to know if the person is really capable of paying rent on time, etc. Considering that current rent laws give tenants considerable power, it's very reasonable for landlords to perform background checks! Existing online platforms can help us know which apartments are vacant in a locality, but they don't help either party know if the other person is really good! Their ranking algorithms aren't trusted by tenants. Landlords are also reluctant to use these services, as they need to manually review applications from thousands of unverified individuals or even bots! We observed that we are still using these old, non-scalable methods to match home seekers and homeowners willing to rent their place in this digital world! And we wish to change that with **RentEasy!** ![Tech-Stack](https://ipfs.infura.io/ipfs/QmRco7zU8Vd9YFv5r9PYKmuvsxxL497AeHSnLiu8acAgCk) ## What it does 🤔 In this hackathon, we built a cross-platform mobile app that both potential tenants and house owners can trust. The app implements a *rating system* where students/tenants can give ratings for a house/landlord (ex: did not pay the security deposit back for no reason), and landlords can provide ratings for tenants (ex: the house was not kept clean). In this way, clean tenants and honest landlords can find each other. This platform also helps the two stakeholders build an easily understandable contract that will establish better trust and mutual harmony. The contract is stored on the InterPlanetary File System (IPFS) and cannot be tampered with by anyone. ![Tech-Stack](https://ipfs.infura.io/ipfs/QmezGvDFVXWHP413JFke1eWoxBnpTk9bK82Dbu7enQHLsc) Our application also has an end-to-end encrypted chatting module powered by the @ Company. Landlords can filter through all the requests and send requests to tenants. This chatting module powers our contract generator module, where the two parties can discuss a particular agreement clause and decide whether or not to include it in the final contract. ## How we built it ️⚙️ Our beautiful and elegant mobile application was built using the cross-platform framework Flutter. We integrated the Google Maps SDK to build a map where users can explore all the listings, and used the geocoding API to encode addresses to geopoints. We wanted to give our clients a sleek experience with minimal overhead, so we offloaded all network-heavy and resource-intensive tasks to Firebase Cloud Functions. Our application also has a dedicated **end-to-end encrypted** chatting module powered by the **@-Company** SDK. The contract generator module is built with best practices in mind, and users can use it to make a contract after having thorough private discussions.
Once both parties are satisfied, we create the contract in PDF format and use the Infura API to upload it to IPFS via the official [Filecoin gateway](https://www.ipfs.io/ipfs) ![Tech-Stack](https://ipfs.infura.io/ipfs/QmaGa8Um7xgFJ8aa9wcEgSqAJZjggmVyUW6Jm5QxtcMX1B) ## Challenges we ran into 🧱 1. It was the first time we were trying to integrate the **@-company SDK** into our project. Although the SDK simplifies the end-to-end encryption, we still had to explore a lot of resources and ask for assistance from representatives to get the final working build. It was very gruelling at first, but in the end, we are all really proud of having a dedicated end-to-end messaging module on our platform. 2. We used Firebase Functions to build scalable serverless functions and used Express.js as a framework for convenience. Things were working fine locally, but our middleware functions like multer, urlencoder, and jsonencoder weren't working on the server. It took us more than 4 hours to learn that "Firebase performs a lot of implicit parsing", and before these middleware functions get the data, Firebase has already removed it. As a result, we had to write the low-level encoding logic ourselves! After deploying it, the sense of satisfaction we got was immense, and now we appreciate the millions of open source packages much more than ever. ## Accomplishments that we're proud of ✨ We are proud of finishing the project on time, which seemed like a tough task as we started working on it quite late due to other commitments, and we were also able to add most of the features that we envisioned for the app during ideation. Moreover, we learned a lot about new web technologies and libraries that we could incorporate into our project to meet our unique needs. We also learned how to maintain great communication among all teammates. Each of us felt like a great addition to the team. From the backend, frontend, research, and design, we are proud of the great number of things we have built within 36 hours. And as always, working overnight was pretty fun! :) --- ## Design 🎨 We were heavily inspired by the revised version of the **Iterative** design process, which includes not only visual design but a full-fledged research cycle in which you must discover and define your problem before tackling your solution and then finally deploying it. ![Double-Diamond](https://ipfs.infura.io/ipfs/QmPDLVVpsJ9NvJZU2SdaKoidUZNSDJPhC2SQAB8Hh66ZDf) This time we went for the minimalist **Material UI** design. We utilized design tools like Figma, Photoshop, and Illustrator to prototype our designs before doing any coding. Through this, we were able to get iterative feedback and spent less time re-writing code. ![Brand-identity](https://ipfs.infura.io/ipfs/QmUriwycp6S98HtsA2KpVexLz2CP3yUBmkbwtwkCszpq5P) --- # Research 📚 Research is the key to empathizing with users: we found our specific user group early, and that paved the way for our whole project. Here are a few of the resources that were helpful to us — * Legal Validity Of A Rent Agreement : <https://bit.ly/3vCcZfO> * 2020-21 Top Ten Issues Affecting Real Estate : <https://bit.ly/2XF7YXc> * Landlord and Tenant Causes of Action: "When Things go Wrong" : <https://bit.ly/3BemMtA> * Landlord-Tenant Law : <https://bit.ly/3ptwmGR> * Landlord-tenant disputes arbitrable when not covered by rent control : <https://bit.ly/2Zrpf7d> * What Happens If One Party Fails To Honour Sale Agreement? : <https://bit.ly/3nr86ST> * When Can a Buyer Terminate a Contract in Real Estate?
: <https://bit.ly/3vDexWO> **CREDITS** * Design Resources : Freepik, Behance * Icons : Icons8 * Font : Semibold / Montserrat / Roboto / Recoleta --- # Takeaways ## What we learned 🙌 **Sleep is very important!** 🤐 Well, jokes apart, this was an introduction to **Web3** & **Blockchain** technologies for some of us, and an introduction to mobile app development for others. We managed to improve our teamwork by actively discussing how we planned to build it and how to make the best use of our time. We learned a lot about the atsign API and end-to-end encryption and how it works in the backend. We also practiced utilizing cloud functions to automate and ease the process of development. ## What's next for RentEasy 🚀 **We would like to make it a default standard of the housing market** and consider all the legal aspects too! It would be great to see the rental application system become more organized in the future. We are planning to implement additional features such as a landlord's view, where they can go through the applicants and filter them, giving the landlord more options. Furthermore, we are planning to launch it near university campuses, since this is where the people with the least housing experience live. Since the framework we used works across operating systems, it gives us the flexibility to test and learn. **Note** — **API credentials have been revoked. If you want to run the same on your local, use your own credentials.**
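For reference, here is a rough Python sketch of the contract upload step, assuming Infura's IPFS HTTP API ("/api/v0/add") with project-ID/secret basic auth. The credentials, file name, and gateway URL are placeholders; in the app this call is made from our Cloud Functions.

```python
import requests

INFURA_IPFS_ADD = "https://ipfs.infura.io:5001/api/v0/add"
PROJECT_ID = "YOUR_PROJECT_ID"          # placeholder
PROJECT_SECRET = "YOUR_PROJECT_SECRET"  # placeholder

def pin_contract(pdf_path: str) -> str:
    """Upload the signed rent agreement PDF to IPFS and return a gateway URL."""
    with open(pdf_path, "rb") as f:
        response = requests.post(
            INFURA_IPFS_ADD,
            files={"file": f},
            auth=(PROJECT_ID, PROJECT_SECRET),
        )
    cid = response.json()["Hash"]       # content identifier of the pinned file
    return f"https://ipfs.io/ipfs/{cid}"

print(pin_contract("rent_agreement.pdf"))
```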
## Inspiration During the past summer, we experienced the struggles of finding subletters for our apartment. Ads were posted in various locations, ranging from Facebook to WeChat. Our feeds were filled with other people looking for subletters as well. As a result, we decided to create Subletting Made Easy. We envision a platform where the process is as simple as possible, both for students looking for a place to stay and for students looking to rent their apartment out. ## What it does Our application provides an easy-to-use interface for both students looking for subletters and students seeking sublets to find the right people/apartments. ## Challenges we ran into Aside from building a clean UI and adding correct functionality, we wanted to create an extremely secure platform for each user on our app. Integrating multiple authentication tools from the Firebase and DocuSign APIs caused various roadblocks in our application development. Additionally, despite it working earlier in the morning, we ran into an Authentication Error when trying to access the HTTP GET REST API call within the Click API, thus inhibiting our ability to verify the registration status of users. ## What we learned We learned a lot about the process of building an application from scratch, from front-end/UI design to back-end/database integration. ## What's next We built a functional MVP during this hackathon, but we want to expand our app to include more features, such as secure payments and more ways to search and filter results. There are tons of possibilities for what we can add in the future to help students around the globe find sublets and subletters.
winning
## Inspiration On average, an EMT can take 10 minutes to arrive at the scene of an emergency, while incidents such as choking or heart attacks can turn fatal within 3 minutes. Those 10 minutes between the start of the emergency and when help arrives are vital to the patient's survival. ## What it does Any surrounding good Samaritan may use the app, press SOS, and use their voice to explain the situation, and the app will ping nearby CPR-certified people, EMTs, or anyone with relevant experience who can arrive on the scene before 911 can. HelpSignal is designed to make the most of the time between the start of an emergency and when ambulances arrive. ## How we built it We used React Native and Expo Development to build the application, targeting Android for live voice transcription from expo-speech-recognition and sending the transcription after recording to a Cloudflare Worker. The Cloudflare Worker then uses the BAAI general embedding model to vectorize the transcription. The categories of needed certifications or experience are stored in a vector database, and a vector search is done to get the most relevant person for the situation. The account system is on Amazon RDS, as are the current emergencies. After an emergency is categorized, it is put into the database, which is queried on every refresh by people with accounts and certifications. A map is shown on the page to show the locations of emergencies. ## Challenges we ran into We had difficulty implementing the audio, as none of us had access to an iOS development kit or macOS laptops for running Expo Development on iOS. In order to record and collect audio to transcribe live, an Android system was needed. We spent a considerable amount of time setting up the Android SDK. ## Accomplishments that we're proud of Throughout this project, we encountered many different roadblocks, which required determination and flexibility to get around. As a group, we were able to communicate effectively and pivot roles on the fly. As a result, we all stayed occupied and spent all 36 hours wisely designing and implementing different systems. Our feature of using Cloudflare Workers for vector search was a big accomplishment for us, as was getting authentication and accounts working with the stored certifications and experience, along with an engaging UI/UX. ## What we learned Coming into this project, few of us had experience with React Native, and some of us had no experience coding with TypeScript and React in general. This seeming roadblock forced us to learn syntax and techniques for working with the technologies on the fly. Additionally, getting Expo Development working with Gradle and running on an Android simulator was a big learning experience in how Android development works. ## What's next for HelpSignal Being able to grow HelpSignal through advertising and social media would not only allow HelpSignal to become more popular, but would also improve the app. As more and more users are onboarded, there are more people available to help others, and therefore a greater chance that someone can help in case of an emergency. Using WebSockets instead of refreshing from the database would also make emergency updates more instantaneous, and push notifications would allow people not currently using the app to be notified when someone needs help. Connecting users with 911 while submitting an emergency would also allow police to still be notified as normal.
![Tech stack](https://github.com/josephHelfenbein/HelpSignal/blob/e2f312bb462b1c0eb7dea3082bbb18cdbfa2022a/techstack.png?raw=true)
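To illustrate the matching idea that runs inside our Cloudflare Worker, here is a Python sketch: embed the emergency transcription, compare it against pre-computed vectors for each certification category, and return the best match. The embed() function stands in for the BAAI general embedding model invoked through the Worker; the categories listed in the comment are examples, not our production schema.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for the BAAI embedding call made in the Worker."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_responder_category(transcription: str,
                            category_vectors: dict[str, np.ndarray]) -> str:
    query = embed(transcription)
    # Pick the certification category whose vector is closest to the transcription.
    return max(category_vectors, key=lambda c: cosine(query, category_vectors[c]))

# Example categories whose vectors would live in the vector database:
# categories = {"CPR certified": embed("choking, heart attack, CPR"),
#               "EMT": embed("trauma, bleeding, fractures"),
#               "lifeguard": embed("drowning, water rescue")}
# best_responder_category("someone is choking at the food court", categories)
```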
## Inspiration This project was inspired by Leon's father's first-hand experience with a lack of electronic medical records, and by realizing the need for a more accessible patient experience. ## What it does The system stores patients' medical records. It also allows patients to fill out medical forms using their voice, as well as electronically sign using their voice. Our theme while building it was accessibility, hence the voice control integration, the simple and easy-to-understand UI, and the big, bold colours. ## How I built it The front end is built on React Native, while the backend is built in Node.js using MongoDB Atlas as our database. For our speech-to-text processing, we used the Google Cloud Platform. We also used Twilio for our SMS reminder component. ## Challenges We ran into There are three distinct challenges that we ran into. The first was trying to get Twilio to function correctly within the app. We were trying to use it on the frontend, but due to the nature of React Native and some Node.js libraries that were being used, it was not working. We solved the problem by deploying to a Heroku server and making REST calls. A second challenge was trying to get the database queries to work from our backend. Although everything seemed right, it still did not work, but thanks to attention to detail and going over the code multiple times, the mistake was spotted and corrected. The third and likely biggest challenge we faced was getting the speech-to-text streaming input to cooperate. In the beginning, it did not stop recording at the correct times and would capture a lot of noise from the background. This problem was eventually solved by redoing it while following a tutorial online. ## Accomplishments that I'm proud of **WE FINISHED!** We honestly did not expect to finish if you had asked us at 10 pm on Saturday night. However, things came through well, which we are really proud of. We are also really proud of our UI/UX and think it is a very sleek and clean design. Two other things include accurate speech-to-text processing and dynamically filled values through our database at runtime. ## What I learned **Joshua** - How to write server-side JavaScript using Node.js **Leon** - Twilio **Joy** - Speech-to-text streaming with React Native **Kevin** - React Native ## What's next for MediSign If we were to continue working on this project, we would first start by dynamically filling all values through our database. We would then focus a lot of attention on security, as medical records are sensitive information. Thirdly, we would upgrade the UI/UX to be even better than before.
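For illustration, here is a minimal server-side Python sketch of the kind of SMS reminder call we had to move off the React Native frontend and onto a hosted service, using the twilio helper library. The credentials, phone numbers, and message wording are placeholders.

```python
from twilio.rest import Client

ACCOUNT_SID = "YOUR_ACCOUNT_SID"   # placeholder
AUTH_TOKEN = "YOUR_AUTH_TOKEN"     # placeholder
client = Client(ACCOUNT_SID, AUTH_TOKEN)

def send_appointment_reminder(patient_phone: str, when: str) -> str:
    """Send an SMS reminder and return the Twilio message SID."""
    message = client.messages.create(
        body=f"Reminder: you have a medical appointment on {when}.",
        from_="+15550001234",      # your Twilio number (placeholder)
        to=patient_phone,
    )
    return message.sid

print(send_appointment_reminder("+15557654321", "Friday at 2:30 PM"))
```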
## Inspiration Large corporations are spending more and more money on digital media advertising these days, but their data collection tools have not been improving at the same rate. Nike spent over $3.03 billion on advertising alone in 2014, which amounted to approximately $100 per second, yet they only received a marginal increase in profits that year. This is where Scout comes in. ## What it does Scout uses a webcam to capture facial feature data about the user. It sends this data through a facial recognition engine in Microsoft Azure's Cognitive Services to determine demographic information, such as gender and age. It also captures facial expressions throughout an Internet browsing session, say a video commercial, and applies sentiment analysis machine learning algorithms to instantaneously determine the user's emotional state at any given point during the video. This is also done through Microsoft Azure's Cognitive Services. Content publishers can then aggregate this data and analyze it later to determine which creatives generated positive sentiment and which generated negative sentiment. Scout follows an opt-in philosophy, so users must actively turn on the webcam to be a subject in Scout. We highly encourage content publishers to incentivize users to participate in Scout (something like $100/second) so that both parties can benefit from this platform. We also take privacy very seriously! That is why photos taken through the webcam by Scout are not persisted anywhere and we do not collect any personal user information. ## How we built it The platform is built on top of a Flask server hosted on an Ubuntu 16.04 instance in Azure's Virtual Machines service. We use nginx, uWSGI, and supervisord to run and maintain our web application. The front end is built with Google's Materialize UI, and we use Plotly for complex analytics visualization. The facial recognition and sentiment analysis intelligence modules are from Azure's Cognitive Services suite, and we use Azure's SQL Server to persist aggregated data. We also have an Azure Chatbot Service for data analysts to quickly see insights. ## Challenges we ran into **CORS CORS CORS!** Cross-Origin Resource Sharing was a huge pain in the head for us. We divided the project into three main components: the Flask backend, the UI/UX visualization, and the webcam photo collection+analysis. We each developed our modules independently of each other, but when we tried to integrate them together, we ran into a huge number of CORS issues with the REST API endpoints on our Flask server. We were able to resolve this with a couple of extra libraries, but it was definitely a challenge figuring out where these errors were coming from. SSL was another issue we ran into. In 2015, Google released a new WebRTC policy that prevented webcams from being accessed on insecure (HTTP) sites in Chrome, with the exception of localhost. This forced us to use OpenSSL to generate self-signed certificates and reconfigure our nginx routes to serve our site over HTTPS. As one can imagine, this caused havoc for our testing suites and our original endpoints. It forced us to sift back through most of the code we had already written to accommodate this change in protocol. We don't like implementing HTTPS, and neither does Flask apparently. On top of the code changes, we had to reconfigure the firewalls on our servers, which only added more wasted time in this short hackathon.
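As a hedged sketch of how CORS and the self-signed-certificate issue can be handled on the Flask side, here is one common approach using the flask-cors extension (the write-up does not name the exact libraries we pulled in, so treat this as an illustration, with placeholder routes, origin, and certificate paths).

```python
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
# Allow the browser-side webcam page (served over HTTPS) to hit our API.
CORS(app, resources={r"/api/*": {"origins": "https://scout.example.com"}})

@app.route("/api/sentiment", methods=["POST"])
def sentiment():
    # ... forward the webcam frame to Azure Cognitive Services here ...
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # Self-signed certs so Chrome's WebRTC policy lets the webcam through.
    app.run(ssl_context=("cert.pem", "key.pem"))
```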
## Accomplishments that we're proud of We were able to multi-process our consumer application to handle the massive amount of data we were sending back to the server (2 photos taken by the webcam each second, each photo is relatively high quality and high memory). We were also able to get our chat bot to communicate with our REST endpoints on our Flask server, so any metric in our web portal is also accessible in Messenger, Skype, Kik, or whatever messaging platform you prefer. This allows marketing analysts who are frequently on the road to easily review the emotional data on Scout's platform. ## What we learned When you stack cups, start with a 3x3 base and stack them in inverted directions. ## What's next for Scout You tell us! Please feel free to contact us with your ideas, questions, comments, and concerns!
losing
## Inspiration Blip emerged from a simple observation: in our fast-paced world, long-form content often goes unheard. Inspired by the success of short-form video platforms like TikTok, we set out to revolutionize the audio space. ## What it does Our vision is to create a platform where bite-sized audio clips deliver maximum impact, allowing users to learn, stay informed, and be entertained in the snippets of time they have available throughout their day. Blip is precisely that. Blip offers a curated collection of short audio clips, personalized to each user's interests and schedule, ensuring they get the most relevant and engaging content whenever they have a few minutes to spare. ## How we built it Building Blip was a journey that pushed our technical skills to new heights. We used a modern tech stack including TypeScript, NextJS, and TailwindCSS to create a responsive and intuitive user interface, while the backend is powered by NextJS and enhanced with the OpenAI and Cerebras APIs. ## Challenges we ran into Processing and serving audio content efficiently on the backend presented unique challenges. We had to make sure that no more audio clips than necessary were loaded at any time to keep the browser running at optimal speed. ## Accomplishments that we're proud of One of our proudest accomplishments was implementing an auto-play algorithm that lets users listen to similar Blips, but also occasionally recommends more unique content. It allows users to listen to what they are comfortable with, yet also nudges them to branch out. ## What we learned Throughout the development process, we encountered and overcame numerous hurdles. Optimizing audio playback for seamless transitions between clips, ensuring UI responsiveness, and efficiently utilizing sponsor APIs were just a few of the obstacles we faced. These challenges not only improved our problem-solving skills but also deepened our understanding of audio processing technologies and user experience design. ## What's next for Blip The journey of creating Blip has been incredibly rewarding. We've learned the importance of user-centric design, found a new untapped market for entertainment, and harnessed the power of AI in enhancing content discovery and generation. Looking ahead, we're excited about the potential of Blip to transform how people consume audio content. Our roadmap includes expanding our content categories, scaling up our recommendation algorithm, and exploring partnerships with content creators and educators to bring even more diverse and engaging content to our platform. Blip is more than just an app; it's a new way of thinking about audio content in the digital age. We're proud to have created a platform that makes learning and staying informed more accessible and enjoyable for everyone, regardless of their busy schedules. As we move forward, we're committed to continually improving and expanding Blip, always with our core mission in mind: to turn little moments into big ideas, one short-cast at a time.
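Here is a toy Python sketch of the auto-play idea described above: mostly queue clips similar to what the listener just heard, but occasionally inject something from outside their comfort zone. The similarity scores and the 15% exploration rate are illustrative assumptions, not the tuning used in the app.

```python
import random

EXPLORE_RATE = 0.15   # assumed chance of deliberately branching out

def next_blip(last_clip_id: str,
              similarity: dict[str, float],
              all_clip_ids: list[str]) -> str:
    if random.random() < EXPLORE_RATE:
        # Branch out: pick something deliberately dissimilar.
        return min(similarity, key=similarity.get)
    # Otherwise stay close to the listener's current interest.
    candidates = [c for c in all_clip_ids if c != last_clip_id]
    return max(candidates, key=lambda c: similarity.get(c, 0.0))

# Example similarity scores between the current clip and the candidates.
similarity = {"clip-a": 0.91, "clip-b": 0.72, "clip-c": 0.08}
print(next_blip("clip-current", similarity, list(similarity)))
```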
## Inspiration Imagine you're sitting in your favorite coffee shop and a unicorn startup idea pops into your head. You open your laptop and choose from a myriad of productivity tools to jot your idea down. It's so fresh in your brain that you don't want to waste any time, so you type fervently, thinking of your new idea and its tangential components. After a rush of pure ideation, you take a breath to admire your work, but disappointment. Unfortunately, now the hard work begins: you go back through your work, excavating key ideas and organizing them. ***Eddy is a brainstorming tool that brings autopilot to ideation. Sit down. Speak. And watch Eddy organize your ideas for you.*** ## Learnings Melding speech recognition and natural language processing tools required us to learn how to transcribe live audio, determine sentences from a corpus of text, and calculate the similarity of each sentence. Using complex and novel technology, each team member took a holistic approach and learned new implementation skills on all sides of the stack. ## Features 1. **Live mindmap**—Automatically organize your stream of consciousness by simply talking. Using semantic search, Eddy organizes your ideas into coherent groups to help you find the signal through the noise. 2. **Summary Generation**—Helpful for live note taking, our summary feature converts the graph into a Markdown-like format. 3. **One-click UI**—Simply hit the record button and let your ideas do the talking. 4. **Team Meetings**—No more notetakers: facilitate team discussions through visualizations and generated notes in the background. ![The Eddy TechStack](https://i.imgur.com/FfsypZt.png) ## Challenges 1. **Live Speech Chunking** - To extract coherent ideas from a user's speech while processing the audio live, we had to design a paradigm that parses overlapping intervals of speech, creates a disjoint union of the sentences, and then sends these two distinct groups to our NLP model for similarity. 2. **API Rate Limits**—OpenAI rate limits required a more efficient processing mechanism for the audio and fewer round-trip requests for keyword extraction and embeddings. 3. **Filler Sentences**—Not every sentence contains a concrete and distinct idea. Some sentences go nowhere, and these can clog up the graph visually. 4. **Visualization**—Force graph is a premium feature of React Flow. To mimic this intuitive design as much as possible, we added some randomness of placement; however, building a better node placement system could help declutter and prettify the graph. ## Future Directions **AI Inspiration Enhancement**—Using generative AI, it would be straightforward to add enhancement capabilities such as generating images for coherent ideas, or business plans. **Live Notes**—Eddy can be a helpful tool for transcribing and organizing meeting and lecture notes. With improvements to our summary feature, Eddy will be able to create detailed notes from a live recording of a meeting. ## Built with **UI:** React, Chakra UI, React Flow, Figma **AI:** HuggingFace, OpenAI Whisper, OpenAI GPT-3, OpenAI Embeddings, NLTK **API:** FastAPI # Supplementary Material ## Mindmap Algorithm ![Mindmap Algorithm](https://i.imgur.com/QtqeBjG.png)
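As a compressed sketch of how sentences could be grouped into idea clusters by embedding similarity, here is a Python outline. The embed() function stands in for the OpenAI embeddings call, and the 0.8 threshold is an illustrative choice rather than our tuned value; it also ignores the overlapping-interval handling described above.

```python
import numpy as np
from nltk.tokenize import sent_tokenize  # may require nltk.download("punkt")

def embed(sentence: str) -> np.ndarray:
    """Placeholder for the embeddings API call."""
    raise NotImplementedError

def cluster_sentences(transcript: str, threshold: float = 0.8) -> list[list[str]]:
    clusters: list[tuple[np.ndarray, list[str]]] = []
    for sentence in sent_tokenize(transcript):
        vec = embed(sentence)
        for centroid, members in clusters:
            sim = float(vec @ centroid /
                        (np.linalg.norm(vec) * np.linalg.norm(centroid)))
            if sim >= threshold:
                members.append(sentence)        # attach to an existing idea node
                break
        else:
            clusters.append((vec, [sentence]))  # start a new idea node
    return [members for _, members in clusters]
```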
## Inspiration: We thought about how much time it takes teachers and professors to grade hundreds of assignments, and we wanted to make it less time-consuming so that they can spend that time on something more productive. ## What it does: We have a professor view homepage where they submit the answer key, and we have a student view page where students can submit their assignments. We take both submissions and compare them to each other to grade them. ## How we built it: We created the design for the website using HTML and CSS. We used JavaScript to get information from the student and teacher submissions. We used Python to translate images into text and make it easier to compare the answer key to the student submission. We used Flask to connect the JavaScript and Python. ## Challenges we ran into: At the beginning, we didn't know how to connect the answer key and the student submission and save them to a database. It was also challenging to transform the images into text and make sure the result is accurate and doesn't grade falsely. ## Accomplishments that we're proud of: We are proud of using many languages and connecting them together to make the final product. We were always able to discuss any challenges we faced and how we thought we should approach each problem. We kept the same energy and motivation that we started with. ## What we learned: We learned how to use Flask and how to transform images into text. We also learned that the only way to get through this was to always discuss together what we like and what we don't like, to make sure we're on the same page. ## What's next for GradeCam We want to make this into an app in the future. With the time we had, we were only able to create the website version. An app would be more accessible to everyone. Our mission is to help as many professors as we can, since they have to do so many things for hundreds of students and it can get overwhelming. We want to make GradeCam globally accessible too, so that students and professors from around the world can use it.
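Here is a bare-bones Python sketch of the image-to-text and comparison step. The write-up does not name the OCR engine we used, so pytesseract here is an assumption, and the line-by-line exact-match grading is a simplification of the real comparison logic.

```python
import pytesseract
from PIL import Image

def read_answers(image_path: str) -> list[str]:
    """OCR a submission image and return one normalized answer per line."""
    text = pytesseract.image_to_string(Image.open(image_path))
    return [line.strip().lower() for line in text.splitlines() if line.strip()]

def grade(student_image: str, key_image: str) -> float:
    student = read_answers(student_image)
    key = read_answers(key_image)
    correct = sum(1 for s, k in zip(student, key) if s == k)
    return correct / max(len(key), 1)

print(f"Score: {grade('student.png', 'answer_key.png'):.0%}")
```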
winning
## Inspiration Patients usually have to go through multiple diagnoses before finding the right doctor. With the astonishing computational power we have today, we could use predictive analysis to suggest a patient's potential illness. ## What it does Clow takes a picture of the patient's face during registration and runs it through an emotion analysis algorithm. With the "scores" that suggest the magnitude of each emotional trait, Clow matches these data points with the final diagnosis given by the doctor in order to predict illnesses. ## How we built it We integrated machine learning and emotion analysis algorithms from Microsoft Azure cloud services into our Ionic-based app to predict the trends. We "trained" our machine by pairing the "scores" of images of sick patients with their illnesses, allowing it to predict illnesses based on the "scores". ## Challenges we ran into All of us are new to machine learning, and this proved to be a challenge for all of us. Fortunately, Microsoft's representative was really helpful and guided us through the process. We also had a hard time writing the code to upload the image taken from the camera to a cloud server in order to run it through Microsoft's emotion analysis API, since we had to encode the image before uploading it. ## Accomplishments that we're proud of Learning a new skill over a weekend and deploying it in a working prototype ain't easy. We did that with not one but two skills, over a weekend. And it's machine learning and emotion analysis. And they are actually the main components that power our product. ## What we learned We all came in with zero knowledge of machine learning, and now we are able to walk away with a good idea of what it is. Well, at least we can visualize it now, and we are excited to work with machine learning and unleash its potential in the future. ## What's next for Clow Clow needs the support of medical clinics and hospitals in order to be deployed. As the correlation between emotion and illness is still relatively unproven, research studies have to be done in order to prove its effectiveness. It may not produce effective results in the beginning, but if Clow analyzes thousands of patients' emotions and illnesses, it can actually yield these results very accurately.
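To show the shape of the prediction step, here is a toy Python sketch that treats the eight emotion scores returned by the emotion analysis API as a feature vector and fits a classifier against the doctors' final diagnoses. The rows and labels below are made up purely to illustrate the pipeline, not real patient data or results.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [anger, contempt, disgust, fear, happiness, neutral, sadness, surprise]
emotion_scores = [
    [0.01, 0.00, 0.02, 0.40, 0.05, 0.20, 0.30, 0.02],
    [0.00, 0.01, 0.01, 0.05, 0.70, 0.20, 0.02, 0.01],
    [0.02, 0.00, 0.05, 0.35, 0.03, 0.15, 0.38, 0.02],
    [0.01, 0.02, 0.00, 0.03, 0.65, 0.25, 0.03, 0.01],
]
diagnoses = ["anxiety-related", "routine checkup", "anxiety-related", "routine checkup"]

model = LogisticRegression(max_iter=1000)
model.fit(emotion_scores, diagnoses)

# Suggest a potential diagnosis for a newly registered patient's scores.
new_patient = [[0.01, 0.00, 0.03, 0.38, 0.04, 0.18, 0.34, 0.02]]
print(model.predict(new_patient)[0])
```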
## Inspiration Care.ai was inspired by our self-conducted study involving 60 families and 23 smart devices, focusing on elderly healthcare. Over three months, despite various technologies, families preferred the simplicity of voice-activated assistants like Alexa. This preference led us to develop an intuitive, user-friendly AI healthcare chatbot tailored to everyday needs. ## What it does Care.ai, an AI healthcare chatbot, leverages custom-trained Large Language Models (LLMs) and visual recognition technology hosted on the Intel Cloud for robust processing power. These models, refined and accessible via Hugging Face, underwent further fine-tuning through MonsterAPI, enhancing their accuracy and responsiveness to medical queries. The web application, powered by the Reflex library, provides a seamless and intuitive front-end experience, making it easy for users to interact with and benefit from the chatbot's capabilities. Care.ai supports real-time data analytics and the critical-care monitoring patients need. ## How we built it We built our AI healthcare chatbot by training LLMs and visual recognition systems on the Intel Cloud, then hosting and fine-tuning these models on Hugging Face with MonsterAPI. The chatbot's user-friendly web interface was developed using the Reflex library, creating a seamless user interaction platform (a sketch of the model-serving step follows this write-up). For data collection, * We researched datasets and performed a literature review * We used the pre-training data for developing and fine-tuning our LLM and visual models * We collected live data readings using sensors to test against our trained models We categorized our project into three parts: * Interactive Language Models: We developed deep learning models on Intel Developer Cloud and fine-tuned our Hugging Face hosted models using MonsterAPI. We further used the Reflex library as the face of Care.ai to create a seamless platform. * Embedded Sensor Networks: Developed our IoT sensors to track the real-time data and test our LLM and vision models on the captured data readings. * Compliance and Security Components: We used Intel Developer Cloud to extract emotions and de-identify patients' voices so the system stays HIPAA compliant. ## Challenges we ran into Integrating new technologies posed significant challenges, including optimizing model performance on the Intel Cloud, ensuring seamless model fine-tuning via MonsterAPI, and achieving intuitive user interaction through the Reflex library. Balancing technical complexity with user-friendliness and maintaining data privacy and security were among the key hurdles we navigated. ## Accomplishments that we're proud of We're proud of creating a user-centric AI healthcare chatbot that combines advanced LLMs and visual recognition hosted on the cutting-edge Intel Cloud. Successfully fine-tuning these models on Hugging Face and integrating them with a Reflex-powered interface showcases our technical achievement. Our commitment to privacy, security, and intuitive design has set a new standard in accessible home healthcare solutions. ## What we learned We learned the importance of integrating advanced AI with user-friendly interfaces for healthcare. Balancing technical innovation with accessibility, the intricacies of cloud hosting, model fine-tuning, and ensuring data privacy were key lessons in developing an effective, secure, and intuitive AI healthcare chatbot. 
## What's next for care.ai Next, Care.ai is expanding its disease recognition capabilities, enhancing user interaction with natural language processing improvements, and exploring partnerships for broader deployment in healthcare systems to revolutionize home healthcare access and efficiency.
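A minimal sketch of how a fine-tuned chat model hosted on Hugging Face could be queried from Python; the model id below is a stand-in (the team's actual checkpoint would replace it), and this is not Care.ai's real serving code.

```
# Hedged sketch: query a Hugging Face hosted text-generation model. "distilgpt2"
# is only a placeholder for the team's fine-tuned medical checkpoint.
from transformers import pipeline

chat = pipeline("text-generation", model="distilgpt2")

prompt = (
    "Patient: I have had a mild fever and a dry cough for two days. "
    "What should I watch for?\nAssistant:"
)
reply = chat(prompt, max_new_tokens=120, do_sample=False)[0]["generated_text"]
print(reply.split("Assistant:", 1)[-1].strip())
```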
## Inspiration This project was inspired by the Professional Engineering course taken by all first year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice that was given by St. Peter's Residence at Chedoke, a long term residence care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen. ## What it does It notifies nurses if a resident falls or stumbles via a push notification to the nurses' phones directly, or ideally a nurse's station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has encountered a fall. ## How we built it We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to be able to collect information about the movement of a resident's foot and determine if the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we used Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident. ## Challenges we ran into Upon developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs would position the feet as if the resident is lying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax. To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in feet movement that are consistent in a fall. The two main patterns we focused on were: 1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact). 2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by a constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact). ## Accomplishments that we're proud of We are proud of accomplishing the development of an algorithm that is consistently able to communicate to an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole. ## What we learned We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process. ## What's next for VATS Next steps would be to optimize our circuits by using the equivalent components but in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas we could store our system inside of a shoe (such as the tongue).
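The two fall patterns above, expressed as a hedged Python sketch for readability (the real logic runs as firmware on the Particle Photon); the axis assignment and all thresholds are illustrative.

```
# Hedged sketch of the two detection patterns described above, with invented
# thresholds; the actual system runs on a Particle Photon and alerts via Twilio.
import math

IMPACT_JERK = 25.0      # m/s^3, "sudden impact" threshold (illustrative)
STRUGGLE_JERK = 8.0     # m/s^3, sustained-movement threshold (illustrative)
STRUGGLE_SECONDS = 3.0

def shoe_tipped_over(ax, ay, az):
    """Shoe no longer flat: gravity is no longer mainly along its vertical axis."""
    return abs(az) < 0.5 * math.sqrt(ax**2 + ay**2 + az**2)

def classify(samples, dt):
    """samples: list of (ax, ay, az); returns 'critical', 'warning' or None."""
    struggle_time = 0.0
    prev = samples[0]
    for ax, ay, az in samples[1:]:
        jerk = math.dist((ax, ay, az), prev) / dt
        if jerk > IMPACT_JERK and shoe_tipped_over(ax, ay, az):
            return "critical"          # pattern 1: impact, then shoe on its side
        if jerk > STRUGGLE_JERK and shoe_tipped_over(ax, ay, az):
            struggle_time += dt
            if struggle_time >= STRUGGLE_SECONDS:
                return "warning"       # pattern 2: slow fall, then a struggle
        else:
            struggle_time = 0.0
        prev = (ax, ay, az)
    return None
```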
partial
# see our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing) ## Inspiration Without purchasing hardware, there are few ways to have contact-free interactions with your computer. To make such technologies accessible to everyone, we created one of the first touch-less, hardware-less means of computer control by employing machine learning and gesture analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers. ## What it does Puppet uses machine learning technology such as k-means clustering in order to distinguish between different hand signs. Then, it interprets the hand signs into computer inputs such as keys or mouse movements to allow the user to have full control without a physical keyboard or mouse. ## How we built it Using OpenCV in order to capture the user's camera input and MediaPipe to parse hand data, we could capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with scikit-learn) to distinguish between different types of hand gestures. The hand gestures are then translated into specific computer commands which pair together AppleScript and PyAutoGUI to provide the user with the Puppet experience (a sketch of the clustering and cursor-pursuit steps follows this write-up). ## Challenges we ran into One major issue that we ran into was that in the first iteration of our k-means clustering algorithm the clusters were colliding. We fed into the model the distance of each landmark on your hand from your wrist, and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on changing the hand gestures to be more distinct with our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy. Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because your finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking it* with the camera. Also, we scaled our coordinate system so it required less hand movement in order to reach the screen's edge. ## Accomplishments that we're proud of We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time. ## What we learned A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar and it took us time to develop specific transformations, models and algorithms to parse our data into individual hand motions / signs. Also, our team members possess diverse and separate skillsets in machine learning, mathematics and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skillset in each of these areas and better continuity as a team. ## What's next for Puppet Right now, Puppet can control presentations, the web, and your keyboard. In the future, Puppet could control much more. 
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This feature can be potentially utilized in elementary school classrooms to give kids hands-on learning with maps, science labs, and language. * Opportunities in video games: As Puppet advances, it could provide game developers a way to create games where the user interacts without a controller. Unlike technologies such as XBOX Kinect, it would require no additional hardware. * Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above. * Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch any keyboards and mice shared by many doctors, providing a more sanitary way to use computers.
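A hedged sketch of the two techniques described in the write-up above: k-means over wrist-relative landmark distances (scikit-learn) and cursor "pursuit" instead of direct tracking. The feature layout, gains, and cluster count are illustrative, not Puppet's tuned values.

```
# Hedged sketch: cluster wrist-relative landmark distances with k-means, and
# "pursue" the target cursor position each frame instead of snapping to it.
import numpy as np
from sklearn.cluster import KMeans

def features(landmarks):
    """landmarks: (21, 2) array of hand points; distance of each from the wrist."""
    pts = np.asarray(landmarks, dtype=float)
    return np.linalg.norm(pts - pts[0], axis=1)   # wrist is landmark 0 in MediaPipe

# X: stacked feature rows collected for a handful of known gestures
X = np.random.rand(200, 21)                        # stand-in for recorded data
gesture_model = KMeans(n_clusters=4, n_init=10).fit(X)
print("cluster for first sample:", gesture_model.predict(X[:1])[0])

# Cursor pursuit: move only a fraction of the way toward the target every frame.
PURSUIT_GAIN, SCALE = 0.25, 1.4                    # illustrative values
cursor = np.array([640.0, 360.0])

def update_cursor(fingertip_xy, frame_size=(1280, 720)):
    global cursor
    center = np.asarray(frame_size) / 2
    target = SCALE * (np.asarray(fingertip_xy) - center) + center
    cursor += PURSUIT_GAIN * (target - cursor)     # smooths out shaky detections
    return cursor
```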
## Inspiration Ideas for interactions from: * <http://paperprograms.org/> * <http://dynamicland.org/> but I wanted to go from the existing computer down, rather than from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows. ## What it does Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer. ## How I built it A webcam and pico projector mounted above the desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard. ## Challenges I ran into * Reliable tracking under different light conditions. * Feedback effects from projected light. * Tracking the keyboard reliably. * Hooking into macOS to control window focus. ## Accomplishments that I'm proud of Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system. Cool emergent things like combining pieces of paper + the side ideas I mention below. ## What I learned Some interesting side ideas here: * Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect * Would be fun to use a deep learning thing to identify and compute with arbitrary objects ## What's next for Computertop Desk * Pointing tool (laser pointer?) * More robust CV pipeline? Machine learning? * Optimizations: run stuff on GPU, cut latency down, improve throughput * More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once
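A rough OpenCV sketch of the kind of binary-threshold pipeline mentioned above: threshold the camera frame and keep large, four-sided bright contours as candidate sheets of paper. The threshold and area values are illustrative, not the project's calibrated ones.

```
# Hedged sketch: find bright, roughly rectangular regions (sheets of paper)
# in a webcam frame using a plain binary threshold.
import cv2

def find_papers(frame, min_area=5000):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    papers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:                 # quadrilateral -> candidate sheet
            papers.append(approx.reshape(4, 2))
    return papers

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(f"{len(find_papers(frame))} candidate sheets found")
cap.release()
```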
## Inspiration With the sudden move to online videoconferencing, presenters and audiences have been faced with a number of challenges. Foremost among these is a lack of engagement between presenters and the audience, which is exacerbated by a lack of gestures and body language. As first year students, we have seen this negatively impact our learning throughout both high school and our first year of University. In fact, many studies, such as [link](https://dl.acm.org/doi/abs/10.1145/2647868.2654909), emphasize the direct link between gestures and audience engagement. As such, we wanted to find a way to give presenters the opportunity to increase audience engagement through bringing natural presentations techniques to videoconferencing. ## What it does PGTCV is a Python program that allows users to move back from their camera and incorporate body language into their presentations without losing fundamental control. In its current state, the Python script uses camera information to determine whether a user needs their slides to be moved forwards or backwards. To trigger these actions, users raise their left fist to enable the program to listen for instructions. They can then swipe with their palm out to the left or to the right to trigger a forwards or backwards slide change. This process allows users to use common body language and hand gestures without accidentally triggering the controls. ## How we built it After fetching webcam data through OpenCV2, we use Google's MediaPipe library to receive a co-ordinate representation of any hands on-screen. This is then fed through a pre-trained algorithm to listen for any left-hand controlling gestures. Once a control gesture is found, we track right-hand motion gestures, and simulate the relevant keyboard input using pynput in whatever application the user is focused on. The application also creates a new virtual camera in a host Windows machine using pyvirtualcam and Unity Capture since Windows only allows one application to use any single camera device. The virtual camera can be used by any videoconferencing application. ## Challenges we ran into Inability to get IDEs working. Mac M1 chip not supporting Tensorflow. Inability to use webcam in multiple applications at once. Setting up right-hand gesture recognition with realistic thresholds. ## Accomplishments that we're proud of Successfully implementing our idea in our first hackathon. Getting a functional and relatively bug-free version of the program running with time to spare. Learning to successfully work with a number of technologies that we previously had no experience with (everything other than Python). ## What we learned A number of relevant technologies. Implementing simple computer vision algorithms. Taking code from idea to functional prototype in a limited amount of time. ## What's next for Presentation Gestures Through Computer Vision (PGTCV) A better name. Implementation of a wider range of gestures. Optimization of algorithms. Increased accuracy in detecting gestures. Implementation into existing videoconferencing applications.
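A simplified sketch of the tracking-to-keypress path described above, using MediaPipe and pynput; the per-frame swipe threshold stands in for the trained gesture model, and the left-fist "listening" gate is omitted.

```
# Hedged sketch: track the wrist with MediaPipe and send an arrow key with
# pynput when it moves far enough sideways. The real project gates this behind
# a trained left-fist control gesture; that model is not reproduced here.
import cv2
import mediapipe as mp
from pynput.keyboard import Controller, Key

keyboard = Controller()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
cap = cv2.VideoCapture(0)
prev_x, SWIPE = None, 0.18          # normalized-x travel that counts as a swipe

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist_x = result.multi_hand_landmarks[0].landmark[0].x
        if prev_x is not None and abs(wrist_x - prev_x) > SWIPE:
            keyboard.tap(Key.right if wrist_x > prev_x else Key.left)
        prev_x = wrist_x
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```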
winning
# AlarmAll An alternate alarm system for people with hearing loss that integrates with existing alarms. Created at McHacks 2017. ## Table of Contents * [Inside this Repository](#inside) * [Existing Alarm Systems](#existing) * [How it Works](#works) * [Engineering Process/Specs](#process) + [Initial Circuit Analysis/Simulation](#initial) + [Calculations/Schematic](#calculations1) + [Bode Plot](#bode1) + [Addition of Op-Amp](#opamp) + [Calculations/Schematic](#calculations2) + [Bode Plot](#bode2) + [Gain Corrections](#corrections) + [Full Schematic](#schematic) + [Bode Plot](#bode3) + [Circuit Construction](#circuit) + [Debugging Fun Facts](#debugging) + [All Attempted Bug Fixes](#fixes) + [% Error Table](error) ## Inside this Repository This repository structure is as follows: * `schematic` - All schematic files for this project. * `pictures` - Contains the following subfolders: + `analysis` - All pencil-and-paper circuit analysis calculations and sketches. + `pspice` - Contains the following subfolders: - `schematic-photos` - All schematic screenshots. - `bode` - All PSpice Bode plot screenshots. + `circuit` - All photos of the actual circuit. [Back to Top](#top) ## Existing Alarm Systems Most alarm systems operate at high frequencies. For people with hearing loss, their hearing loss usually starts with high frequencies, usually [after 3 kHz](http://www.noisehelp.com/high-frequency-hearing-loss.html). To prevent this, there are very bright flashing lights, but that may trigger epilepsy in some individuals. Our goal is an easy-to-implement circuit that can be used with existing high-frequency alarms to notify deaf or hard of hearing individuals of an emergency safely and effectively. [Back to Top](#top) ## How it Works The audio from the alarm goes through a high-pass filter. If audio over 3 kHz is detected, it will pass through this filter and a bright, non-flashing light will output. [Back to Top](#top) ## Engineering Process/Specs Below describes, in detail, the process of building the project. Calculations, schematics, simulation screenshots, Bode plots, circuit pictures, and Discovery Board screenshots are all included. [Back to Top](#top) ### Initial Circuit Analysis/Simulation #### Calculations/Schematic Below are our initial calculations and schematics for the high-pass filter: ![](https://res.cloudinary.com/devpost/image/fetch/s--h6EfpAEu--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/NpJeELF.jpg) [Back to Top](#top) #### Bode Plot After running PSpice on the circuit, we get the following Bode plot: ![Bode Plot](https://res.cloudinary.com/devpost/image/fetch/s--fJjOFXFh--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/pRQAaEb.png) [Back to Top](#top) ### Addition of Op-Amp However, we quickly realized that as the frequencies get higher, we will have a 0 volt output. In order to correct this, we added an opamp with a gain of 5. 
[Back to Top](#top) #### Calculations/Schematic Below are our calculations with a schematic: ![Op Amp Schematic](https://res.cloudinary.com/devpost/image/fetch/s--ZIHcEReS--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/prgfTyo.jpg) EDIT: The schematic is incorrect, here is the correct schematic: ![Corrected Schematic](https://res.cloudinary.com/devpost/image/fetch/s--_aiMM6ki--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/xCA3YSW.jpg) [Back to Top](#top) #### Bode Plot Below is the Bode plot with the op-amp: ![Op Amp Bode Plot](https://res.cloudinary.com/devpost/image/fetch/s--0qQkLAHv--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/qU692iu.png) [Back to Top](#top) ### Gain Corrections #### Full Schematic Using the Bode plot, we determined that the gain was too high. We then experimented with different resistor values and determined that the following setup was ideal: ![Full Schematic](https://res.cloudinary.com/devpost/image/fetch/s--9q4Hdy2y--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/mw1xjTD.png) [Back to Top](#top) #### Bode Plot With this setup, we get a near ideal Bode plot: ![Ideal Op Amp Bode Plot](https://res.cloudinary.com/devpost/image/fetch/s--3bCwPNuY--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/VnWHBVS.png) [Back to Top](#top) ### Circuit Construction Since all Bode plots and Pspice results were correct, we began to build the circuit. However, the circuit ended up not working. Through hours of tedious debugging, we determined our Discovery Board was failing and we had no other alternative (since Arduino, Pi, etc. cannot produce the analog sine wave we need). Here is our actual circuit: ![Circuit](https://res.cloudinary.com/devpost/image/fetch/s--yLgE8-Xx--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/dLGPXNU.jpg) [Back to Top](#top) ### Debugging Fun Facts #### All Attempted Bug Fixes Here is a list of actual attempted bug fixes: * First we removed the opamp and tested the circuit with a unity gain. It still wasn't working according to the unity gain Bode plot. * We tried recalculating resistor and capacitor values. Those were correct. * If we held the wire, the circuit would magically work. That doesn't happen anymore. * We tried taping everything down with masking tape to no avail. * We tried turning the Discovery Board on and off. Several times. * We stripped wires with our teeth because we suspected the provided wires were incompatible. * We changed breadboards. Twice. * We inserted the wires through a juice box straw to attempt to get voltage across a wire. It didn't work. [Back to Top](#top) #### % error Here was our % error table for the unity gain: | Frequency (Hz) | PSpice Output (dB) | Measured Output (dB) | % Error | | --- | --- | --- | --- | | 10 | -49.21 | -50.46 | 2.54% | | 100 | -29.426 | -4.61245 | 84.33% | | 1000 | -5.3652 | -0.819 | 84.73% | | 3000 | 1.58 | 0 | 100% | | 10000 | 4.1649 | n/a | n/a | Here was our % error table for the circuit with the opamp: | Frequency (Hz) | PSpice Output (dB) | Measured Output (dB) | % Error | | --- | --- | --- | --- | | 10 | -44.757 | -17.2024 | 62% | | 100 | -24.862 | -4.61245 | 81% | | 1000 | -5.3269 | 11.607 | 318% | | 3000 | 1.4862 | n/a | n/a | | 10000 | 4.1994 | n/a | n/a | [Back to Top](#top)
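For reference, the two design equations behind the stages above are the first-order RC high-pass cutoff and the non-inverting op-amp gain. The quick check below uses round illustrative component values, not the exact ones from the schematics.

```
# Hedged check of the two design equations: cutoff f_c = 1 / (2*pi*R*C) for a
# first-order RC high-pass, and G = 1 + Rf/Rg for a non-inverting op-amp.
# Component values are illustrative, not the ones in the schematic photos.
import math

R, C = 5.3e3, 10e-9           # 5.3 kOhm, 10 nF  -> cutoff near 3 kHz
f_c = 1 / (2 * math.pi * R * C)

Rf, Rg = 40e3, 10e3           # feedback / ground resistors -> gain of 5
gain = 1 + Rf / Rg
gain_db = 20 * math.log10(gain)

print(f"cutoff = {f_c:,.0f} Hz")          # ~3,003 Hz
print(f"gain   = {gain:.1f} ({gain_db:.1f} dB)")
```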
## Inspiration Recycling is a simple habit that goes a long way in saving the environment. Too many times have our team members stood in front of the recycling bins, with trash in hand, discussing amongst ourselves which bin each item belongs to. Many individuals, like us, struggle with recycling confusion. Therefore, we came up with Squirrel to overcome this! ## What it does Squirrel makes sorting trash more convenient (and accurate!) by eliminating any recycling ambiguity. Just point at your trash and Squirrel will tell you which bin it belongs to! It is eco-friendly and encourages recycling. ## How we built it * Real time image classification developed using Apple’s ML Kit. * Process data taken from the video stream of the device’s camera. ## Challenges we ran into Looking for clean and usable datasets of more niche items like food wrappers, mixed paper and styrofoam. ## Accomplishments that we're proud of * Creating a working MVP in just two days!! :D * Low-code no-code, fewer than 200 lines of code! ## What we learned * To use various Apple frameworks like ML Kit, SwiftUI and Vision to develop an iOS application. * Design a clean and user-friendly app interface. ## What's next for Squirrel * Expand and improve the classification model so that it classifies wastes into all four waste categories, landfill, bottles & glass, compost and mixed-paper. * Incorporate a map using MapKit that locates nearby public recycling bins. It would also point out eateries/cafés that support sustainable solutions through eco-friendly habits like the use of compostable cups and reusable shopping bags. ## Installation ``` # Clone the repo git clone https://github.com/jinweiwong/Squirrel.git # Run the app on Xcode 13 and newer. ```
## Inspiration This project was inspired by the Professional Engineering course taken by all first year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice that was given by St. Peter's Residence at Chedoke, a long term residence care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen. ## What it does It notifies nurses if a resident falls or stumbles via a push notification to the nurses' phones directly, or ideally a nurse's station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has encountered a fall. ## How we built it We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to be able to collect information about the movement of a resident's foot and determine if the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we used Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident. ## Challenges we ran into Upon developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs would position the feet as if the resident is lying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax. To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in feet movement that are consistent in a fall. The two main patterns we focused on were: 1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact). 2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by a constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact). ## Accomplishments that we're proud of We are proud of accomplishing the development of an algorithm that is consistently able to communicate to an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole. ## What we learned We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process. ## What's next for VATS Next steps would be to optimize our circuits by using the equivalent components but in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas we could store our system inside of a shoe (such as the tongue).
losing
# Motion's Creation: Bringing Precision to Yoga ## Inspiration While exploring potential ideas, MoveNet caught our attention with its capabilities for accurately tracking human movement. We recognized an opportunity: to provide wellness within reach for all through real-time feedback. Motion aspires to make wellness accessible to all. By breaking down multiple barriers, Motion allows new segments of users to receive personalized training feedback without the cost of hiring a personal trainer. Future updates to the application will ensure that anyone, irrespective of language, economic status, age, or location, can benefit. ## Implementation We integrated a camera into our platform that effectively captures joint movements using MoveNet. To understand and analyze these movements, we utilized TensorFlow and PyTorch in our backend. Our approach involved two primary steps: 1. **Pose Prediction:** Training a machine learning model to identify the specific pose a user attempts. 2. **Pose Correction:** Training a subsequent model to detect inaccuracies in the user's pose. If a user's pose is deemed incorrect, our system uses OpenAI's GPT API to generate unique and personalized feedback, guiding them towards the correct form. ## Challenges & Insights Gathering diverse and representative training data posed a significant challenge. Recognizing that individuals have varying arm lengths, different distances from the camera, and diverse orientations, we aimed to make our system universally applicable. Although MoveNet expertly captures joint data in diverse scenarios, our initial model's training revealed a need for broader data. This realization led us to consider the myriad ways users might interact with our application, ensuring our model had a rich learning environment.
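A hedged sketch of the pose-prediction step: a small dense classifier over MoveNet's 17 keypoints. The layer sizes and pose list are illustrative placeholders, not Motion's actual architecture.

```
# Hedged sketch: a small Keras classifier over MoveNet's 17 (y, x, score)
# keypoints. The pose list and layer sizes are invented for illustration.
import tensorflow as tf

POSES = ["downward_dog", "warrior_ii", "tree", "chair"]

pose_classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(17 * 3,)),        # flattened keypoints
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(len(POSES), activation="softmax"),
])
pose_classifier.compile(optimizer="adam",
                        loss="sparse_categorical_crossentropy",
                        metrics=["accuracy"])
# pose_classifier.fit(keypoint_batches, labels, epochs=20)  # training data omitted
```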
## Inspiration It is difficult for university students to find the time and money to go to the gym. Although some YouTube videos teach exercises that can be done at home without weights, it's not always easy to self-correct without a gym buddy. ## What it does When a user works out at home, they can place their laptop camera and display at the front of their space. They carry an Arduino microcontroller in their pocket and tape a haptic motor to their wrist or side. They select from a list of exercises--so far we have implemented tricep pushups and squats--and computer vision is used to detect form errors. The haptic motor alerts the user to form errors, so they know to look at the screen for feedback. These are the implemented feedback items: TRICEP PUSHUPS: * Move wrists closer together or farther apart such that they are under the shoulders * Keep elbows tucked in through the pushup SQUATS: * Go lower * Keep knees directly above ankles, not too far forward * Sit more upright with a straight back ## How we built it We used a pretrained implementation of CMU Posenet in Tensorflow ([link](https://github.com/ildoonet/tf-pose-estimation?fbclid=IwAR1CBbW9_A3_vrwbKDmAiZJ3tQ3owjEk9NFHZ8ufRfA_QhDfOSYK-p1SYaA)) for pose estimation. We analyzed coordinates of joints in the image using our own Python functions based on expert knowledge of workout form. The vision processing feedback outputs from the laptop are interfaced to an Arduino Uno over Bluetooth connection, and the Uno controls a Grove haptic motor. ## Challenges I ran into * Diagnosing physical hardware problems: We spent a lot of time debugging a Raspberry Pi with a faulty SD card. We learned that it's important to debug from the hardware level up. * Finding usable TensorFlow models that fit well to our mission. We got a lot better at filtering usable sources and setting up command line environments. * Creating a durable and wearable design of the fitness buddy. We experienced issues with haptic motor connector wires breaking as we exercised. We learned the importance of component research in planning physical designs. ## Accomplishments that I'm proud of * Integrating Python and Arduino using a Bluetooth module to achieve haptic feedback. * Labelling joints and poses for analysis through appropriate machine learning models. * Adding analysis to machine learning outputs to make them useful in a real life context. * Learning to use different languages and products (including Raspberry Pi) to perform specific technical tasks. ## What I learned * How to use many different hardware products and techniques, including a bluetooth module, haptic motors and controllers, and a Raspberry Pi (which we did not use in our final design). We also improved our Arduino and circuit skills. * The efficiency and output derivation of many different machine learning models. * The importance of prototyping physical systems that people will interact with and could break. * A greater sense of focus towards better wellbeing of individual people through exercise. ## What's next for Fitness Buddy: Haptic Feedback on Exercise Form: * Incorporate and add software for a variety of different exercises. * Migrate to Raspberry Pi for a more portable experience. * Integrate with Google Home for more seamless IoT ("Ok Google, start my pushup routine!"). * Add goal setting and facial recognition for different household users with different goals.
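A hedged sketch of one such form check: the elbow angle computed from three pose keypoints. The coordinates and threshold are illustrative, not the project's tuned values.

```
# Hedged sketch of one form check: the elbow angle during a tricep pushup,
# computed from three pose keypoints. The 70-degree cutoff is illustrative.
import numpy as np

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by points a-b-c, each an (x, y) pair."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

shoulder, elbow, wrist = (0.42, 0.30), (0.45, 0.45), (0.44, 0.60)
angle = joint_angle(shoulder, elbow, wrist)
if angle < 70:                         # illustrative "bottom of the rep" cutoff
    print("Good depth, elbows bent enough")          # no haptic buzz needed
else:
    print(f"Elbow angle {angle:.0f} deg - lower your chest")  # buzz the motor
```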
## Inspiration Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy was actually away from the doctors' office - he needed to complete certain exercises with perfect form at home, in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For the patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology, to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans. ## What it does Through a mobile app, the patients will be able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, the patients will receive a general score for their physical health as measured against their individual milestones, tips to improve the form, and a timeline of progress over the past weeks. At the same time, the same video analysis will be sent to the corresponding doctor's dashboard, in which the doctor will receive a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctors' treatment of the patient, such as prioritizing a next appointment or increasing the difficulty of the exercise. ## How we built it At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the timescale body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the 2 types of clients we support: an iOS app, and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the app engine sinks processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase. ## Challenges we ran into One of the major challenges we ran into was interfacing each technology with the others. Overall, the data pipeline involves many steps that, while each critical in itself, also involve too many diverse platforms and technologies for the time we had to build it. 
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
losing
## Inspiration: What inspired the team to create this project was the fact that many people around the world are misdiagnosed with different types of ocular diseases, which leads to patients getting improper treatments and ultimately leads to visual impairment (blindness). With the help of Theia, we can properly diagnose patients and find out whether or not they have any sort of ocular disease. This will also help reduce the conflicts of incorrectly diagnosed patients around the world. Our eyes are an important asset to us human beings, and with the help of Theia, we can help many individuals around the world protect their eyes and have a clear vision of the world around them. Additionally, with the rise of COVID-19, leaving the house is very difficult due to the government restrictions. We wanted to reduce constant trips between optometrists and ophthalmologists with Theia due to the diagnosis being performed at the optometrists' eye clinic, leading to fewer people in buildings and fewer gatherings. ## What it does: Theia can analyze a fundus photograph of a patient's eye to see if they have an ocular disease, with extremely high accuracy. By uploading a picture, Theia will be able to tell if the patient is Normal or has one of the following conditions: Diabetic Retinopathy, Glaucoma, Cataract, Age-related Macular Degeneration, Hypertension, Pathological Myopia, or Other diseases/abnormalities. Theia then returns a bar graph with the different values that the model has predicted for the image. You can then hover over the graph to see all the prediction percentages that the model returned for each condition; the highest value corresponds to the condition the patient most likely has. Theia will allow medical practitioners to get a second opinion on a patient's condition and decide if the patient needs further evaluation, rather than sending the patient to the ophthalmologist for diagnosis if they have a concern. It also allows new optometrists to guide their patients and not be blind to the diseases shown in the fundus photos. ## How we built it: Theia is a tool created for optometrists to identify ocular diseases directly through a web application. So how does it work? Theia's backend framework is designed using Flask and the front end was created using plain HTML, CSS, and JavaScript. The computer vision solution was created using TensorFlow and was exported as a TensorFlow.js file to use in the browser. When an image is uploaded to Theia, the image is converted into a 224 by 224 tensor. When the predict button is clicked, the TensorFlow model is called with its weights, and a JavaScript prediction promise is returned, which is then fetched and returned to the user in a visual bar graph format. ## Challenges we ran into: We tried to create a REST API for our model by deploying the exported TensorFlow model on Google Cloud. But Google has a recent user interface issue when it comes to deploying models on the cloud. So we instead had to export our TensorFlow model as a TensorFlow.js file. But why would this be a problem? Because it forces prediction to happen on the client side, which hurts client-side performance. If the predictions were made on the server and returned to the client, it would have improved the performance of the web application. We also ran into other challenges when it came to working with promises in JavaScript, since our team had two people that were beginners and we weren't very experienced in working with JavaScript. 
## Accomplishments that we're proud of: We are proud of making such an accurate model in TensorFlow. We are not very experienced with deep learning and TensorFlow, so getting a machine learning model that is accurate is a big accomplishment for us. We are also proud that we created an end-to-end ML solution that can help people see the world in front of them clearly. With two completely new hackers on our team, we were able to expand on our skills while still teaching the beginners something new. Using Flask as our backend was new to us, and learning how to integrate it into the web app and ultimately make it work was a major accomplishment, but the most important thing we learned was collaboration and how to split work among group members that may be new to this world of programming while making them feel welcome. ## What we learned: It's surprising how much can be learned and taught in only 48 hours. We learned how to use Flask as our backend framework, which was an amazing experience since we didn't run into too many problems. We also learned how to work with JavaScript and how to make a complex computer vision model with TensorFlow. Our team also learned how to use TensorFlow.js, which means that in the future we can use TensorFlow.js to make more web-based machine learning solutions. ## What's next for Theia: We envision Theia becoming more scalable and reliable. We aim to deploy the model on a cloud service like Google Cloud or AWS and access the model from the cloud, which would ultimately increase the client-side performance. We also plan on making a database of all the images the users upload, passing those images through a data pipeline for preprocessing, and then saving those images into a dataset for the model to train on weekly. This keeps the model up to date, constantly improves the accuracy of the model, and reduces bias due to the large variety of unique fundus photos of patients. Expanding the use case of Theia to other ocular diseases like Strabismus, Amblyopia, and Keratoconus is another goal, which means feeding more inputs to our neural network and making it more complex.
## Inspiration Since deep learning in image processing is getting more and more important, I would like to develop an application that could assist medical staff in distinguishing ocular diseases or even just give a reference for an initial judgment. ## What it does It uses deep learning (VGG19) to process retinal images and distinguish which disease they show. ## How we built it Since building a general model to make the prediction has relatively low accuracy, I separated the different diseases, built models for them separately, and then combined them into an optimized ensemble model. ## Challenges we ran into The time for building models is long, and adjusting them in pursuit of higher accuracy costs a lot of time. Therefore, I was not able to embed the final model since it's not fully built yet. ## Accomplishments that we're proud of I successfully carried out a full-stack building process on my own, which was my first time doing so. ## What we learned I've learned how to build models using VGG19, and how to build a frontend using HTML, CSS, and JS. ## What's next for Ocular Disease Prediction I'm going to build the final model, and due to the dataset, distinguishing some of the diseases still has room for improvement.
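A hedged Keras sketch of a VGG19 transfer-learning setup of the kind described above; the classification head, input size, and class count are illustrative choices, not the project's final architecture.

```
# Hedged sketch of a VGG19 transfer-learning model for fundus images; the head
# layers, input size, and 8 output classes are illustrative choices.
import tensorflow as tf

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # freeze the convolutional features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(8, activation="softmax"),   # 8 ODIR-style categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets omitted
```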
## Overview According to the WHO, at least 2.2 billion people worldwide have a vision impairment or blindness. Out of these, an estimated 1 billion cases could have been prevented or have yet to be addressed. This underscores the vast number of people who lack access to necessary eye care services. Even as developers, our screens have been both our canvas and our cage. We're intimately familiar with the strain they exert on our eyes, a plight shared by millions globally. We need a **CHANGE**. What if vision care could be democratized, made accessible, and seamlessly integrated with cutting-edge technology? Introducing OPTimism. ## Inspiration The very genesis of OPTimism is rooted in empathy. Many in underserved communities lack access to quality eye care, a necessity that most of us take for granted. Coupled with the increasing screen time in today's digital age, the need for effective and accessible solutions becomes even more pressing. Our team has felt this on a personal level, providing the emotional catalyst for OPTimism. We didn't just want to create another app; we aspired to make a tangible difference. ## Core Highlights **Vision Care Chatbot:** Using advanced AI algorithms, our vision chatbot assists users in answering vital eye care questions, offering guidance and support when professional help might not be immediately accessible. **Analytics & Feedback:** Through innovative hardware integrations like posture warnings via a gyroscope and distance tracking with ultrasonic sensors, users get real-time feedback on their habits, empowering them to make healthier decisions. **Scientifically-Backed Exercises:** Grounded in research, our platform suggests eye exercises designed to alleviate strain, offering a holistic approach to vision care. **Gamified Redemption & Leaderboard System:** Users are not just passive recipients but active participants. They can earn optimism credits, leading to a gamified experience where they can redeem valuable eye care products. This not only incentivizes regular engagement but also underscores the importance of proactive vision care. The donation system using Circle allows users to help make these vision care products possible. ## Technical Process Bridging the gap between the technical and the tangible was our biggest challenge. We leaned on technologies such as React, Google Cloud, Flask, Taipy, and more to build a robust frontend and backend, containerized using Docker and Kubernetes and deployed on Netlify. Arduino's integration added a layer of real-world interaction, allowing users to receive physical feedback. The vision care chatbot was a product of countless hours spent on refining algorithms to ensure accuracy and reliability. ## Tech Stack React, JavaScript, Vite, Tailwind CSS, Ant Design, Babel, NodeJS, Python, Flask, Taipy, GitHub, Docker, Kubernetes, Firebase, Google Cloud, Netlify, Circle, OpenAI **Hardware List:** Arduino, ultrasonic sensor, smart glasses, gyroscope, LEDs, breadboard ## Challenges we ran into * Connecting the live data retrieved from the Arduino to the backend application for manipulation and conversion into appropriate metrics * Circle API key not being authorized * Lack of documentation and API support for the different pieces of hardware ## Summary OPTimism isn't just about employing the latest technologies; it's about leveraging them for a genuine cause. We've seamlessly merged various features, from chatbots to hardware integrations, under one cohesive platform. Our aim? 
Clear, healthy vision for all, irrespective of their socio-economic background. We believe OPTimism is more than just a project. It's a vision, a mission, and a commitment. We will turn that hope into reality and light the path to a brighter, clearer future for everyone.
losing
## Inspiration A week or so ago, Nyle DiMarco, the model/actor/deaf activist, visited my school and spoke to our students about his experience as a deaf person on shows like Dancing with the Stars (where he danced without hearing the music) and America's Next Top Model. Many people on those sets and in the cast could not use American Sign Language (ASL) to communicate with him, and so he had no idea what was going on at times. ## What it does SpeakAR listens to speech around the user and translates it into a language the user can understand. Especially for deaf users, the visual ASL translations are helpful because that is often their native language. ![Image of ASL](https://res.cloudinary.com/devpost/image/fetch/s--wWJOXt4_--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://az616578.vo.msecnd.net/files/2016/04/17/6359646757437353841666149658_asl.png) ## How we built it We utilized Unity's built-in dictation recognizer API to convert spoken word into text. Then, using a combination of GIFs and 3D-modeled hands, we translated that text into ASL. The scripts were written in C#. ## Challenges we ran into The Microsoft HoloLens requires very specific development tools, and because we had never used the laptops we brought to develop for it before, we had to start setting everything up from scratch. One of our laptops could not "Build" properly at all, so all four of us were stuck using only one laptop. Our original idea of implementing facial recognition could not work due to technical challenges, so we actually had to start over with a **completely new** idea at 5:30PM on Saturday night. The time constraint as well as the technical restrictions on Unity3D reduced the number and quality of features we could include in the app. ## Accomplishments that we're proud of This was our first time developing with the HoloLens at a hackathon. We were completely new to using GIFs in Unity and displaying them in the HoloLens. We are proud of overcoming our technical challenges and adapting the idea to suit the technology. ## What we learned Make sure you have a laptop (with the proper software installed) that's compatible with the device you want to build for. ## What's next for SpeakAR In the future, we hope to implement reverse communication to enable those fluent in ASL to sign in front of the HoloLens and automatically translate their movements into speech. We also want to include foreign language translation so it provides more value to **deaf and hearing** users. The overall quality and speed of translations can also be improved greatly.
## Overview People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf/mute and can not communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak. You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio which is converted into text for the other person to read. ## How we built it We used **OpenCV** and **Tensorflow** to build the Sign to Text functionality, using over 2500 frames to train our model. For the Text to Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend. For the frontend, we used **React** (JS) and MaterialUI to create a visual and accessible way for users to communicate. ## Challenges we ran into * We had to re-train our models multiple times to get them to work well enough. * We switched from running our applications entirely on Jupyter (using Anvil) to a React App last-minute ## Accomplishments that we're proud of * Using so many tools, languages and frameworks at once, and making them work together :D * submitting on time (I hope? 😬) ## What's next for SignTube * Add more signs! * Use AssemblyAI's real-time API for more streamlined communication * Incorporate account functionality + storage of videos
## Inspiration All of our team members are deeply passionate about improving students' education. We focused on the underserved community of deaf or hard-of-hearing students, who communicate, understand, and think primarily in ASL. While some of these students have become accustomed to reading English in various contexts, our market research from studies conducted by Penn State University indicates that members of the community prefer to communicate and think in ASL, and think of English writing as a second language in terms of grammatical structure and syntax. The majority of deaf people do not have what is commonly referred to as an "inner voice"; instead they often sign ASL in their heads to themselves. For this reason, deaf students are largely disadvantaged in academia, especially with regard to live attendance of lectures. As a result, we sought to design an app to translate professors' lecture speeches to ASL in near-real time. ## What it does Our app enables enhanced live lectures for members of the ASL-speaking community by intelligently converting the professor's speech to a sequence of ASL videos for the user to watch during lecture (a sketch of this word-to-clip lookup follows this write-up). This style of real-time audio to ASL conversion has never been done before, and our app bridges the educational barrier that exists in the deaf and hard-of-hearing community. ## How we built it We broke down the development of the app into 3 phases: converting speech to text, converting text to ASL videos, and connecting the two components together in an iOS application with an engaging user interface. Building off of existing on-device speech recognition models including Pocketsphinx, Mozilla DeepSpeech, iOS Dictation, and more, we decided to combine them in an ensemble model. We employed the Google Cloud Speech-to-Text API to transcribe videos for ground truth, against which we compared transcription error rates for our models by phonemes, lengths, and syllabic features. Finally, we ran our own tests to ensure that the speech-to-text API was dynamically editing previously spoken words and phrases using the context of neighboring words. The weights assigned to each candidate model were optimized over many iterations of testing using the Weights & Biases API (along with generous amounts of layer freezing and homing in!). Through many grueling rounds and head-to-head comparisons, the iOS on-device speech recognizer shone, with superior accuracy and performance compared to the other two, and was assigned the highest weight by far. Based on these results, in order to improve performance, we ended up not using the other two models at all. ## Challenges we ran into When we were designing the solution architecture, we quickly discovered there was no API or database to enable conversion of written English to ASL "gloss" (or even videos). We were therefore forced to make our own database by creating and cropping videos ourselves. While time-consuming, this ensured consistent video quality as well as speed and efficiency in loading the videos on the iOS device. It also inspired our plan to crowdsource information and database video samples from users in a way that benefits all those who opt in to the sharing system. One of the first difficulties we had was navigating the various speech recognition model outputs and modifying them for continuous and lengthy voice samples. 
Furthermore, we had to ensure our algorithm dynamically adjusted history and performed backwards error correction, since some APIs (especially Apple's iOS Dictation) dynamically alter past text when clued in on context from later words. All of our lexical and syntactical analysis required us to meticulously design finite state machines and data structures around the results of the models and APIs we used — and required significant alteration & massaging — before they became useful for our application. This was necessary due to our ambitious goal of achieving real-time ASL delivery to users. ## Accomplishments that we're proud of As a team we were most proud of our ability to quickly learn new frameworks and use Machine Learning and Reinforcement Learning to develop an application that was scalable and modular. While we were subject to a time restriction, we ensured that our user interface was polished, and that our final app integrated several frameworks seamlessly to deliver a usable product to our target audience, *sans* bugs or errors. We pushed ourselves to learn unfamiliar skills so that our solution would be as comprehensive as we could make it. Additionally, of course, we're proud of our ability to come together and solve a problem that could truly benefit an entire community. ## What we learned We learned how to brainstorm ideas effectively and in a team, create ideas collaboratively, and parallelize tasks for maximum efficiency. We exercised our literature research and market research skills to recognize that there was a gap we could fill in the ASL community. We also integrated ML techniques into our design and solution process, carefully selecting analysis methods to evaluate candidate options before proceeding on a rigorously defined footing. Finally, we strove to continually analyze data to inform future design decisions and train our models. ## What's next for Sign-ify We want to expand our app to be more robust and extensible. Currently, the greatest limitation of our application is the limited database of ASL words that we recorded videos for. In the future, one of our biggest priorities is to dynamically generate animation so that we will have a larger and more accurate database. We want to improve our speech-to-text API with more training data so that it becomes more accurate in educational settings. Publishing the app on the iOS App Store will provide the most effective distribution channel and allow members of the deaf and hard-of-hearing community easy access to our app. We are very excited by the prospects of this solution and will continue to update the software to achieve our goal of enhancing the educational experience for users with auditory impairments. ## Citations: Google Cloud Platform API. Penn State. "Sign language users read words and see signs simultaneously." ScienceDaily. ScienceDaily, 24 March 2011 [[www.sciencedaily.com/releases/2011/03/110322105438.htm](http://www.sciencedaily.com/releases/2011/03/110322105438.htm)].
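A hedged sketch of the text-to-ASL-video step referenced above: map transcript words to pre-recorded clips and fall back to fingerspelling for unknown words. All clip file names are hypothetical, and the stop-word handling is a simplification of real ASL gloss.

```
# Hedged sketch: map transcript words to pre-recorded ASL clips, falling back
# to letter-by-letter fingerspelling clips. All file names are hypothetical.
import re

WORD_CLIPS = {"today": "today.mp4", "we": "we.mp4", "discuss": "discuss.mp4",
              "energy": "energy.mp4"}
LETTER_CLIPS = {ch: f"letters/{ch}.mp4" for ch in "abcdefghijklmnopqrstuvwxyz"}
STOP_WORDS = {"the", "a", "an", "of", "is", "are"}   # gloss drops most of these

def transcript_to_clips(transcript: str) -> list:
    clips = []
    for word in re.findall(r"[a-z']+", transcript.lower()):
        if word in STOP_WORDS:
            continue
        if word in WORD_CLIPS:
            clips.append(WORD_CLIPS[word])
        else:                                  # fingerspell unknown words
            clips.extend(LETTER_CLIPS[c] for c in word if c in LETTER_CLIPS)
    return clips

print(transcript_to_clips("Today we discuss the energy of a photon"))
```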
winning
## Inspiration Homes are becoming more and more intelligent with Smart Home products such as the Amazon Echo or Google Home. However, users have limited information about the infrastructure's status. ## What it does Our smart chat bot helps users monitor their house's state from anywhere using low-cost sensors. Our product is easy to install, user-friendly and fully expandable. **Easy to install** By using compact sensors, HomeScan is able to monitor information from your house. Afraid of gas leaks or leaving the heating on? HomeScan has you covered. Our product requires minimal setup and is energy efficient. In addition, since we use a small cellular IoT board to gather the data, HomeScan sensors are Wi-Fi independent. This way, HomeScan can be placed anywhere in the house. **User Friendly** HomeScan uses Cisco Spark bots to communicate data to the users. Run diagnostics, ask for specific sensor data: our bots can do it all. Best of all, there is no need to learn command lines, as our smart bots use text analysis technologies to find the perfect answer to your question. Since we are using Cisco Spark, the bots can be accessed on the go on both the Spark mobile app and on our website. Therefore, you'll have no problem accessing your data while away from your home. **Fully expandable** HomeScan was built with the future in mind. Our product will fully benefit from future technological advancements. For instance, 5G will enable HomeScan to expand and reach places that currently have a poor cellular signal. In addition, the anticipated release of Cisco Spark's "guestID" will grant an even wider audience access to our smart bots. Newer bot customization tools will also allow us to implement additional functionalities. Lastly, HomeScan can be expanded into an infrastructure ranking system. This could have a tremendous impact on the real-estate industry, as houses could be rated based on their infrastructure performance. This way, the data could be used by services such as Airbnb, insurance companies and even homeowners. We are confident that HomeScan is the solution for monitoring a healthy house and improving your real-estate decisions. ## How I built it The infrastructure's information is gathered through a Particle Electron board running on the cellular network. The data is then sent to an Amazon Web Services server. Finally, a Cisco Spark chat bot retrieves the data and returns relevant answers according to the user's queries. The intelligent bot is also capable of warning the user in case of an emergency. ## Challenges I ran into Early on, we ran into numerous hardware issues with the Particle Electron board. After consulting with industry professionals and hours of debugging, we managed to successfully get the board working the way we wanted. Additionally, with no experience in back-end programming, we struggled a lot understanding the tools and the interactions between platforms, but ended up with successful results. ## Accomplishments that we are proud of We are proud to showcase a full-stack solution using various tools that we had little to no prior experience with. ## What we learned With perseverance and mutual moral support, anything is possible. And never be shy to ask for help.
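As a rough illustration of the data path described in "How I built it", here is a minimal Python sketch of the middle tier: the Particle Electron POSTs readings to a small web service, and the Spark bot reads the latest values back. The endpoint names and payload fields are hypothetical, not the exact ones used in the project.

```python
# Minimal sketch of the HomeScan middle tier (endpoint and field names are illustrative).
from flask import Flask, request, jsonify

app = Flask(__name__)
latest = {}  # in a real deployment this would live in a proper datastore

@app.route("/readings", methods=["POST"])
def store_reading():
    # The cellular IoT board posts e.g. {"sensor": "gas", "value": 312}
    data = request.get_json()
    latest[data["sensor"]] = data["value"]
    return jsonify(ok=True)

@app.route("/readings/<sensor>")
def get_reading(sensor):
    # The Spark chat bot calls this when a user asks about a sensor
    return jsonify(sensor=sensor, value=latest.get(sensor))

if __name__ == "__main__":
    app.run(port=8080)
```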
## Inspiration One issue that we all seem to face is interpreting large sensor-based datasets. Whether it be financial or environmental data, we saw an opportunity to use LLMs to allow for better understanding of big data. As a proof of concept, taking care of a house plant or gardens was interesting because we could collect data and take actions based on physical metrics like soil moisture and sunlight. We were then inspired to take managing plants to the next level by being able to talk to the data you just collected in a fun way, like asking how each of your plants are doing. This is how RoBotany came to be. ## What it does Through our web app, you can ask RoBotany about how your plants are doing - whether they need more water, have been overwatered, need to be in the shade, and many more questions. Each of your plants has a name, and you can ask specifically how your plant Jerry is faring for example. When you ask for a status update on your plants, our web app fetches data stored in our database, which gets a constant feed of information from the light and soil moisture sensors. If your plants are in need of water, you can even ask RoBotany to water your plants autonomously! ## How we built it **Hardware** The hardware portion uses an Arduino, a photoresistor, and a soil moisture sensor to measure the quintessential environmental conditions of any indoor or outdoor plants. We even 3D-printed a flower pot specially made to hold a small plant and the Arduino system! **Frontend** Our frontend was built with React and uses the Chat UI Kit from chatscope. **Backend** Our project requires the use of two CockroachDBs. One of the databases is continuously read and updated for the soil moisture and light level, while the other database is updated less frequently to toggle the plant sprinkler system. Our simple yet practical endpoints allow us to seamlessly send information back and forth between our arduino and AWS EC2 instance, using technologies such as pm2 and nginx to keep the API up and running. **NLP** To process user requests via our chatbot, we used a combination of a classification model on *BentoML* to categorize requests, as well as Cohere Generation for entity extraction and responding to more generic requests. The process goes as follows: 1. The user enters a prompt. 2. The prompt gets sent to be categorized via BentoML. 3. The input and category get sent to Cohere Generation, along with some training datasets, to extract entities. 4. The category and entity get sent to a small class that processes and queries our CockroachDB via a Flask mini api. 5. The response gets forwarded back to the user that sent the initial prompt. ## Challenges we ran into One of the main challenges that we struggled with was working with LLMs, something none of our team was very familiar with. Despite being extremely challenging, we were glad we dove into the subject as deep as we did because it was equally rewarding to finally get it working. In addition, given that our electronic system was handling water, we wanted to make sure that our packaging protected our ESP32 and sensor boards. We started by designing a 3D printed compartment that would house everything from the electronics, to the Motor, to the plant itself. We quickly discovered a compartment that size would take well over 12 hours (we are at T-10 hours at that point). We modified our design to make it more compact, and were able to get a beautiful packaging done in time! 
Finally, from CockroachDB to Cohere, our group was managing a couple of different authentication systems. Between refreshing tokens and group members constantly hopping on and off different components, we quickly ran into the problem of how to share the tokens. The solution was to use Syro’s secret management software. ## Accomplishments that we're proud of Our project used over a dozen unique technologies as our team looked to develop new skills and use new tech stacks during this hackathon. ## What we learned * Large Language Models (LLMs) * How to connect multiple distinct technologies together in a single project * Using a strongly-consistent database in CockroachDB for the first time * Using object-relational mapping ## What's next for RoBotany Some possible next steps include diversifying our plant sensor data, as well as making it more scalable, allowing users to potentially talk to an entire crop field! In addition, our system was designed with modularity in mind. Expanding to new, very different avenues of monitoring shouldn’t be a complex task. RoBotany lays the groundwork for a smart platform to talk to variable sensor data.
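To make the five-step request path above more concrete, here is a condensed Python sketch of how a chat prompt could flow through the services. The endpoint URLs and payload shapes are placeholders for illustration, not the exact ones we deployed.

```python
# Hypothetical glue code for the RoBotany chat pipeline (URLs and fields are placeholders).
import requests

def handle_prompt(prompt: str) -> str:
    # Steps 1-2: categorize the prompt via our classification service (BentoML behind it).
    category = requests.post("http://localhost:3000/classify",
                             json={"text": prompt}).json()["category"]
    # Step 3: extract the plant name / entity via our Cohere-backed extraction service.
    entity = requests.post("http://localhost:3001/extract",
                           json={"text": prompt, "category": category}).json()["entity"]
    # Step 4: query the latest sensor reading through the Flask mini API (CockroachDB behind it).
    reading = requests.get(f"http://localhost:5000/plants/{entity}/latest").json()
    # Step 5: build the reply that gets forwarded back to the user.
    if category == "water_status" and reading["soil_moisture"] < 30:
        return f"{entity.title()} looks thirsty: soil moisture is at {reading['soil_moisture']}%."
    return f"{entity.title()} is doing fine right now."
```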
## Inspiration We often cannot decide for ourselves on questions such as "Where should I get food?" or "What should I do today?". Now we can outsource that decision-making to a smart home. ## What it does Utilizing historical user input as a reference point, we build a profile of the user's likes and dislikes. When a command is issued to Google Home, the tone is first analyzed with IBM Watson's NLP API, and based on the tone, a list of relevant words is used to scrape the news with the Google News API. This gives a current list of topics relevant to the user's preferences. Finally, a decision is formulated from the multiple factors considered previously. ## How we built it We tested our code for proof of concept in Python, then translated and executed it in Node.js, because it was supported by all the platforms we used. The whole project is operated from Firebase, all inputs are sent through Watson, and all decisions are sent to the neural network to improve the weightings. ## Challenges we ran into Google Home had trouble connecting if it was more than 4 meters away from the Wi-Fi router. Syntax problems when translating from Python to JS. Time complexity optimization issues for the Watson API when it is not run locally. Trouble converting categorical data when trying to train the NN. ## What we learned Bring a sleeping bag. ## What's next for No Bored
winning
## Inspiration Millions of people around the world are either blind or partially sighted. For those whose vision is impaired but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people might be able to see more clearly. ## What it does We developed an AR headset that processes the view in front of it and displays a high-contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert. ## How we built it OpenCV was used to process the image stream from a webcam mounted on the VR headset: each frame is processed with a Canny edge detector to find edges and contours. Further, a BFMatcher is used to find objects that resemble a given image file, which are highlighted if found. ## Challenges we ran into We originally hoped to use an Oculus Rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well! ## Accomplishments that we're proud of Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected. ## What we learned Without the proper environment, your code is useless. ## What's next for EyeSee Commercial solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable, and we hope this project inspires other developers to create projects that help other people. ## Links Feel free to read more about visual impairment, and how to help: <https://w3c.github.io/low-vision-a11y-tf/requirements.html>
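For a sense of the per-frame processing described above, here is a simplified Python/OpenCV sketch. It assumes a reference image of a crosswalk sign saved as crosswalk.png and uses ORB descriptors with a brute-force matcher as a stand-in for our exact matcher setup; the thresholds are illustrative.

```python
# Simplified EyeSee frame loop: Canny edges for contrast, feature matching for sign detection.
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
ref = cv2.imread("crosswalk.png", cv2.IMREAD_GRAYSCALE)       # assumed reference image
ref_kp, ref_des = orb.detectAndCompute(ref, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                          # high-contrast outline view
    kp, des = orb.detectAndCompute(gray, None)
    if des is not None and ref_des is not None:
        matches = bf.match(ref_des, des)
        if len(matches) > 40:                                  # crude "sign spotted" threshold
            cv2.putText(edges, "CROSSWALK", (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 3)
    cv2.imshow("EyeSee", edges)
    if cv2.waitKey(1) == 27:                                   # Esc to quit
        break
cap.release()
```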
## Inspiration Our inspiration comes from the 217 million people in the world who have moderate to severe vision impairment, and 36 million people who are fully blind. In our modern society, with all its comforts, it is easy to forget that there are so many people who do not have the same luxuries as us. It is unthinkably difficult for these visually impaired individuals to navigate everyday life and activities. We believe that the new technology of this era presents a potential solution to this issue. ## What it does InsightAI detects the location and size of common objects in real time. This data is necessitated by our novel 3D audio spatialization algorithm, which in turn, powers our Augmented Reality audio system. This system communicates the location of said objects to the user and allows for the formulation of a mental heatmap of the world. All of this is done through just a conventional mobile smartphone and headphones. This process can be terminated simply using our intuitive haptic user experience (so that it is accessible for those with vision impairments). It also supports multiple languages in order for the project to be scalable to other countries and cultures. ## How we built it We used Tensorflow.js for the real-time object detection. It is trained on the COCO Single Shot MultiBox Detection dataset with 90 object classes and 330,000 images. We then convert the object(s) into an audio signal via a text-to-speech algorithm with natural language synthesis that supports multiple languages. We then used a custom algorithm to effectively deliver the AR audio to the user’s audio device, in such a manner, that the user can understand the location of the indicated object. In order to properly interface with the visually impaired, we focused on minimalistic and intuitive audio-first design principles to facilitate usage by the intended audience. Finally we hosted the entire web app on Zeit to allow it to be accessible to everyone. ## How does the augmented reality (AR) sound system work? The sound is outputted binaurally through the web audio API. This means that we play each headphone or earbud differently, based on the location of the object. The differentiation in the sound is determined by our algorithm. You can think of our algorithm as a program that creates an mental audio data heatmap of the world around the user. Because of this immersive system, the user can very intuitively locate objects. ## Challenges we ran into There were a multitude of bugs, which were eventually solved through discussion and collaboration. One such bug was that the audio was quite slow and did not match with the rate of object detection, because we were downloading the audio snippet from an external source for every frame. We found a solution to this problem by downloading the files locally and playing those files complementing the objects detected. Additionally, we ran into many issues pertaining to getting the tensorflow.js model to work with mobile instead of desktop. ## Accomplishments that we're proud of and what we learned We are proud that we learned how to use Tensorflow.js to recognize many objects in real time, as this was one of our first projects that used live ML, and we are very proud of how it turned out. We also learned how to use the Web Audio API and created a surround sound left and right channel system using headphones. Further, this was one of our first projects to integrate AR. 
## What's next for InsightAI We will definitely be updating our project in the future to support more functionality. For example, optical character recognition and facial recognition could be used to greatly improve the everyday lives of the visually impaired. Imagine if the blind could immediately recognize people they knew through such a system. An integrated OCR system would open up the possibility for writing unaccompanied by braille to be understood by the impaired, allowing for much easier navigation of everyday life. Our app is very capable of scaling up to multiple different languages as well.
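To give a feel for the spatialization idea described in "How does the augmented reality (AR) sound system work?", here is a toy Python version of the panning math: the horizontal position of a detected object in the frame is mapped to left/right channel gains with an equal-power pan law. The bounding-box fields are hypothetical stand-ins for the detector's real output.

```python
# Toy equal-power panning from a detected object's horizontal position in the frame.
import math

def stereo_gains(bbox_x: float, bbox_width: float, frame_width: float):
    center = (bbox_x + bbox_width / 2) / frame_width   # 0.0 = far left, 1.0 = far right
    angle = center * math.pi / 2                       # equal-power pan law
    return math.cos(angle), math.sin(angle)            # (left gain, right gain)

# An object near the right edge of a 640-pixel-wide frame is heard mostly in the right ear.
print(stereo_gains(bbox_x=480, bbox_width=120, frame_width=640))
```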
## Inspiration My partner and I can empathize with those that have disabilities. Because of this, we are passionate about doing our part in making the world a better place for them. This app can help blind people navigate the world around them, making life easier and less dangerous. ## What it Does Droid Eyes can help people with a loss of sight go through their life in a safer way. By implementing Google Vision and the voice technology of accessible smartphones, this app narrates the path of a person, either out loud or through headphones by preference. For example, if a blind person is approaching a red light, the app will notify them to stop until it is green. ## How We Built it **Hardware:** We first created a CAD design for a case that would hold the phone running the program, creating holes for the straps, speaker, and camera. This sketch was laser printed and put together with a hot glue gun. As for the straps, we removed those from a reusable shopping bag to hold the case. The initial goal was to utilize a Raspi and create an entirely new product. However, we decided that a single application would have a greater outreach. **Software:** We utilized the Android development environment in order to prototype a working application. The image recognition is done on Google’s side with the Google Cloud Vision API. To communicate with the API, we used a variety of software dependencies on the Android end, such as Apache Commons and Volley. The application is capable of utilizing both Wi-Fi and cellular data in order to be practical in most scenarios. ## Challenges We Ran Into **Hardware:** We first intended to 3D print our case, as designed in CAD. However, when exporting the file to the Makerbot software, no details of the case were shown. After several attempts to fix this issue, we simply decided to use the same design but laser printed instead. **Software:** Uploading the pictures and identifying the objects in them was not happening at an efficient speed. This was because the API provided for Android would only allow batch photo uploads. This feature took more time to transfer the pictures and forced the server to examine sixteen photos instead of one. Also, some of the dependencies were outdated, and Android did not build the application. Getting the camera to work autonomously was another struggle we faced as well. ## Accomplishments That We’re Proud of When we entered this hackathon, this app was barely an idea. Through many hours of intense work, we created something that could hopefully change people’s lives for the better. We are very proud of this, as well as what we learned personally throughout this project. ## What We Learned In terms of hardware, we learned how to laser print objects. This can be very helpful in the future when creating material that can easily be put together, saving us the time of 3D printing. For our software, we used Google Vision for the first time. This API is what identifies the elements of each picture in our application. ## What’s Next for Droid Eyes? We hope to expand upon this idea in the future, making it more widely available on other Android phones and on Apple devices as well. By spreading the product to different devices, we hope to keep it open source so that many people can contribute by constantly improving it. We would also like to be able to 3D print a case instead of laser printing and then gluing it together.
partial
## Inspiration Struggling to cook and manage time. ## What it does Teaches users how to cook new foods. ## How we built it React JS, Google Cloud (Calendar, OAuth), Edamam, and HTML/CSS. ## Challenges we ran into Connecting to Google services and obtaining the API calls. ## Accomplishments that we're proud of Getting a functional website out there, which accomplished our main goal. ## What we learned Figuring out a new website idea is difficult. In particular, working with new authorization frameworks is very challenging. ## What's next for FoodieList Connect with Google services and allow users to save their login information. GitHub: <https://github.com/isamumu/hackThe6/tree/master>
## Purpose: Food waste is an extensive issue touching all across the globe; in fact, according to the UN Environment Programme, approximately ⅓ of food produced for human consumption globally is lost or wasted annually (Made in CA). After learning this, we were inspired to create a website that provides you with numerous zero-waste recipes by just scanning your grocery receipt. Our innovative website not only addresses the pressing concern of food waste, but also empowers individuals to make a meaningful impact in their own kitchens! ## General Information: An interactive website that offers clients vast and meaningful alternatives to unsustainable cooking. Benefits range from the reduction of food waste to enhancing and simplifying meal planning. Our model is unique because it incorporates fast and easy-to-use technology (i.e. receipt scanning) which provides users recipes within seconds, in comparison to traditional, tedious websites on the market that require users to manually input each ingredient, unnecessarily prolonging their stay. ## How we built it: The frontend of our project was created with HTML and CSS, whereas the backend was created with Flask. Image recognition services were implemented using the Google Cloud API. We chose HTML because it is lightweight and fast to load, ensuring a splendid user interface experience. We chose CSS because it is time-saving, simple to use, and offers flexible positioning of design elements. As first-time hackers without much experience, we chose Flask for its simplicity and features, such as a built-in development server and quick debugger. The Google Cloud API was pivotal in extracting the information provided by the user because of the text recognition feature in its OCR tool, allowing us to center our model around grocery receipts. ## Challenges we ran into: Learning HTML and CSS - 2 of our group members were relatively new to coding and had no experience in frontend web dev whatsoever! Delegating tasks effectively between team members spending the night vs. going home - Constant collaboration over Discord was crucial! Learning the Google Cloud API - All of our group members were new to Google Cloud API, so simultaneously learning + implementing it within 36 hours was definitely challenging. ## Accomplishments that we're proud of: As a complete beginner team, we are extremely ecstatic about our large-scale efforts and progress at Hack the 6ix! From learning web dev from scratch to experimenting with completely new frameworks to creating our personal logo to making and editing a video in under an hour, our experience has been nothing short of a rollercoaster ride. Although new to the field, we made sure to bravely tackle each challenge presented to us and give our best efforts throughout the hacking period, which can be exemplified by our choices to work long hours past midnight, ask mentors for advice when needed, and constantly improve our front end and back end for a more complete user interface experience! ## What’s next for Eco Eats: YES, we’re not done just yet! Here are a few things that we think we can consolidate with our current idea of Eco Eats to make it a cut above! * A feature to take a photo of your receipt on the website * A feature to let users see other recipes with similar ingredients that can be substituted with the ones they have * Expanding our idea to PC parts, so that we can offer clients possible ideas for custom PCs to assemble with their old receipts
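Since the receipt-scanning step is the core of the pipeline, here is a rough Python sketch of how a receipt image can be run through Google Cloud Vision's text detection and matched against a small ingredient list. It assumes Google Cloud credentials are already configured, and the ingredient list and file name are made up for illustration.

```python
# Rough sketch: OCR a grocery receipt and pull out recognizable ingredients.
from google.cloud import vision

KNOWN_INGREDIENTS = {"tomato", "pasta", "basil", "chicken", "rice", "onion"}  # illustrative

def ingredients_from_receipt(path: str) -> set:
    client = vision.ImageAnnotatorClient()          # needs GOOGLE_APPLICATION_CREDENTIALS set
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    text = response.full_text_annotation.text.lower()
    return {item for item in KNOWN_INGREDIENTS if item in text}

print(ingredients_from_receipt("receipt.jpg"))      # e.g. {"pasta", "tomato"}
```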
## Inspiration One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually. For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste. We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates. ## What it does greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire. Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration. ## How we built it We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations. ## Challenges we ran into With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through. When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it. To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio. ## Accomplishments that we're proud of We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time. ## What we learned For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application. ## What's next for greenEats We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon. 
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience. These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
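As a sketch of the expiry logic behind My Fridge, the snippet below estimates when each logged item will expire and flags anything expiring soon. The shelf-life table is made up for illustration; in the app this information is richer and stored in Firebase.

```python
# Toy expiry check for a "My Fridge" inventory (shelf lives are illustrative).
from datetime import date, timedelta

SHELF_LIFE_DAYS = {"milk": 7, "spinach": 5, "chicken": 2, "rice": 365}

def expiring_soon(fridge, within_days=2):
    alerts = []
    for item, bought_on in fridge.items():
        expires = bought_on + timedelta(days=SHELF_LIFE_DAYS.get(item, 14))
        if (expires - date.today()).days <= within_days:
            alerts.append(f"{item} expires on {expires.isoformat()}")
    return alerts

fridge = {"milk": date.today() - timedelta(days=6), "rice": date.today()}
print(expiring_soon(fridge))   # ["milk expires on ..."]
```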
losing
## Inspiration Philadelphia, like many urban cities, is grappling with rising temperatures due to climate change, industrialization, and the urban heat island effect. We noticed that extreme heat is making it unsafe for many communities, especially during summer months. Chilladelphia was inspired by the need to provide residents with real-time resources and actionable insights to help them stay cool and safe. ## What it does Help cool down Philly! The main page features a heat map that visually highlights the hottest and coolest areas around Philadelphia. By entering your address, you can instantly see how “chill” your neighborhood is. Using our computer vision algorithm, we analyze the ratio of greenery in your area, giving you a personalized chill rating. This rating helps you understand the immediate state of your environment. Chilladelphia goes beyond just information: it provides actionable suggestions like planting trees, painting rooftops lighter, and other eco-friendly tips to actively cool down your community. Plus, you can easily find nearby cooling centers, water stations, and shaded areas to help you beat the heat on the go. ## How we built it We built Chilladelphia with a strong focus on user experience and seamless access to location-based data. For user authentication, we integrated **Propel Auth**, which provided a quick and scalable solution for user sign-ups and logins. This allowed us to securely manage user sessions, ensuring that personal data, like location preferences, is handled safely. On the frontend, we used **React** to create a dynamic and responsive user interface. This enabled smooth interactions, from entering an address to viewing real-time temperature and air quality updates. To style the app, we utilized **Tailwind CSS**, which allowed us to rapidly prototype and design components with minimal code. **Axios** was implemented for handling API requests, efficiently fetching environmental data and user-specific suggestions. The frontend also leverages **React Router** to manage navigation, making it easy for users to explore different parts of the app. For the backend, we set up a **Node.js** server with **Express** to handle API requests and data routing. The core of our data storage is **MongoDB**, where we store geospatial information like cooling center locations and tree-planting sites. MongoDB’s flexibility allowed us to efficiently store and query data based on the user’s location. We also integrated external APIs to get coordinates and map data. To manage authentication securely across both the backend and frontend, we utilized **Propel Auth** to handle user session tokens and login states. For the data generation, we used Python to compile images of University City by downloading sections of it from satellite imagery. We then used DetecTrees, a Python library that uses a pre-trained model to identify tree pixels in aerial images. From that, we were able to calculate what percentage of the image was green space, to give users an idea of how green the area around them is. ## Challenges we ran into One of the biggest challenges was getting high-resolution satellite imagery that would work well for our purposes. After testing out over 5 different APIs, we ended up having to wrap a Google Maps scraper, which worked best for our needs. ## Accomplishments that we're proud of We’re proud of creating a solution that can have real impact in our neighboring Philly communities. 
The recent heat waves in the northeast have been dangerous and put our peers and community at risk, and we are excited to take steps in the right direction to mitigate the issue. ## What we learned We've expanded our tech stack -- several of us used MongoDB, Express.js, PropelAuth, and many other tools for the first time this weekend. ## What's next for Chilladelphia Next, we plan to scale Chilladelphia by integrating more data - we had limited storage in our database and weren't able to cover as much of Philly as we wanted to, but we hope to do more in the future! We also want to partner with local governments and environmental organizations to further expand the app's resource database and promote city-wide efforts in cooling down Philadelphia.
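As a back-of-the-envelope illustration of the "chill rating" idea, the snippet below estimates what fraction of an aerial tile is vegetation. A simple green-channel heuristic stands in for the pre-trained tree-detection model we actually used, and the tile file name is hypothetical.

```python
# Rough green-space ratio for one aerial tile (heuristic stand-in for the tree model).
import numpy as np
from PIL import Image

def green_ratio(tile_path: str) -> float:
    rgb = np.asarray(Image.open(tile_path).convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    greenish = (g > 1.1 * r) & (g > 1.1 * b) & (g > 60)   # crude vegetation mask
    return float(greenish.mean())

ratio = green_ratio("university_city_tile.png")           # hypothetical satellite tile
print(f"Approximately {ratio:.0%} of this tile is green space")
```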
## 💫 Inspiration Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally . We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!** ## 🏘 What it does Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks, and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours! For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help out! By using Locall, she's saving money on fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours. ## 🛠 How we built it We first prototyped our app design using Figma, and then moved on to using Flutter for actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase ## 🦒 What we learned Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app! ## 📱 What's next for Locall * We would want to train a Tensorflow model to better recommend services to users, as well as improve the user experience * Implementing chat and payment directly in the app would be helpful to improve requests and offers of services
## Inspiration Because of COVID-19 and the holiday season, we have been feeling increasingly guilty over the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants. We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, the raw mass of a carbon footprint is too abstract and has little meaning to average consumers. After calculating footprints, we would feel guilty about the carbon consumption caused by our lifestyles, and maybe, maybe donate once to offset the guilt inside us. The problem is, climate change cannot be eliminated by a single contribution because it's a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term. ## What it does We built a Google Chrome extension to track the user’s Amazon purchases and determine the carbon footprint of each product using all available variables scraped from the page, including product type, weight, distance, and shipping options in real time. We set up Google Firebase to store the user’s account information and purchase history, and created a gaming system to track user progression, achievements, and pet status in the backend. ## How we built it We created the front end using React.js, developed our web scraper using JavaScript to extract Amazon information, and used Netlify for deploying the website. We developed the back end in Python using Flask, storing our data in Firestore, calculating shipping distance using Google's Distance Matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes and salts to store passwords securely in the cloud. ## Challenges we ran into For most of us, this was our first time developing a web application, because our backgrounds are in Mechatronics Engineering and Computer Engineering. ## Accomplishments that we're proud of We are very proud that we were able to accomplish an app of this magnitude, as well as its potential impact on social good by reducing carbon footprint emissions. ## What we learned We learned about utilizing Google Cloud Platform and integrating the front end and back end to make a complete web app. ## What's next for Purrtector Our mission is to build tools to gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit, and we would welcome collaboration from third parties to offer additional perks and discounts to our users for reducing carbon emissions by unlocking designated achievements with their pet. This would bring in additional incentives towards a carbon-neutral lifestyle on top of the emotional attachment to their pet. ## Domain.com Link <https://purrtector.space> Note: We weren't able to register this via domain.com due to the site errors, but Sean said we could have this domain considered.
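Since the write-up mentions SHA-256 hashes and salts for credential storage, here is a small Python sketch of that idea using PBKDF2 (the standard way to apply salted SHA-256 to passwords). This illustrates the approach, not the project's exact code.

```python
# Salted password hashing sketch (PBKDF2 with SHA-256).
import hashlib, hmac, os

ITERATIONS = 100_000

def hash_password(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest                      # store both alongside the user record

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("wrong", salt, digest))     # False
```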
winning
## Inspiration There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of humans suffer from a fear of public speaking, but what if there was a way to tackle this problem? That's why we created Strive. ## What it does Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback on the vocal delivery of a person's presentation. Once you have recorded your speech, Strive will calculate various performance variables such as: voice clarity, filler word usage, voice speed, and voice volume. Once the performance variables have been calculated, Strive renders them in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. In the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive will also send the feedback results via text message to the user, allowing them to share/forward an analysis easily. ## How we built it Utilizing the collaboration tool Figma, we designed wireframes of our mobile app. We used services such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity, we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit in order to calculate the performance variables, and used stdlib's cloud function features for text messaging. ## Challenges we ran into Given that our skillsets come from technical backgrounds, one challenge we ran into was developing a simple yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user. ## Accomplishments that we're proud of Creating a fully functional mobile app while leveraging an unfamiliar technology stack to provide a simple application that people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking skills and conquer their fear of public speaking. ## What we learned Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs. ## What's next for Strive - Your Personal AI Speech Trainer * Model voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API). * Ability to calculate more performance variables for an even better analysis and more detailed feedback
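As a minimal illustration of the performance variables described above, here is a Python sketch that computes speaking pace and filler-word usage from a transcript. The filler list and pacing guide are simplified placeholders compared to what the app tracks.

```python
# Toy delivery metrics from a transcript (filler list and pacing guide are illustrative).
FILLERS = {"um", "uh", "like", "basically", "actually"}

def delivery_metrics(transcript: str, duration_seconds: float) -> dict:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    wpm = len(words) / (duration_seconds / 60)
    filler_count = sum(w in FILLERS for w in words)
    return {
        "words_per_minute": round(wpm, 1),          # roughly 120-150 is a comfortable pace
        "filler_words": filler_count,
        "filler_ratio": round(filler_count / max(len(words), 1), 3),
    }

print(delivery_metrics("So um basically we built like a speech coach", 4.0))
```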
## Inspiration Across the globe, a critical shortage of qualified teachers poses a significant challenge to education. The average student-to-teacher ratio in primary schools worldwide stands at an alarming **23:1!** In some regions of Africa, this ratio skyrockets to an astonishing **40:1**. [Research 1](https://data.worldbank.org/indicator/SE.PRM.ENRL.TC.ZS) and [Research 2](https://read.oecd-ilibrary.org/education/education-at-a-glance-2023_e13bef63-en#page11) As populations continue to explode, the demand for quality education has never been higher, yet the *supply of capable teachers is dwindling*. This results in students receiving neither the attention nor the **personalized support** they desperately need from their educators. Moreover, a staggering **20% of students** experience social anxiety when seeking help from their teachers. This anxiety can severely hinder their educational performance and overall learning experience. [Research 3](https://www.cambridge.org/core/journals/psychological-medicine/article/much-more-than-just-shyness-the-impact-of-social-anxiety-disorder-on-educational-performance-across-the-lifespan/1E0D728FDAF1049CDD77721EB84A8724) While many educational platforms leverage generative AI to offer personalized support, we envision something even more revolutionary. Introducing **TeachXR—a fully voiced, interactive, and hyper-personalized AI** teacher that allows students to engage just like they would with a real educator, all within the immersive realm of extended reality. *Imagine a world where every student has access to a dedicated tutor who can cater to their unique learning styles and needs. With TeachXR, we can transform education, making personalized learning accessible to all. Join us on this journey to revolutionize education and bridge the gap in teacher shortages!* ## What it does **Introducing TeachVR: Your Interactive XR Study Assistant** TeachVR is not just a simple voice-activated Q&A AI; it’s a **fully interactive extended reality study assistant** designed to enhance your learning experience. Here’s what it can do: * **Intuitive Interaction**: Use natural hand gestures to circle the part of a textbook page that confuses you. * **Focused Questions**: Ask specific questions about the selected text for summaries, explanations, or elaborations. * **Human-like Engagement**: Interact with TeachVR just like you would with a real person, enjoying **milliseconds response times** and a human voice powered by **Vapi.ai**. * **Multimodal Learning**: Visualize the concepts you’re asking about, aiding in deeper understanding. * **Personalized and Private**: All interactions are tailored to your unique learning style and remain completely confidential. ### How to Ask Questions: 1. **Circle the Text**: Point your finger and circle the paragraph you want to inquire about. 2. **OK Gesture**: Use the OK gesture to crop the image and submit your question. ### TeachVR's Capabilities: * **Summarization**: Gain a clear understanding of the paragraph's meaning. TeachVR captures both book pages to provide context. * **Examples**: Receive relevant examples related to the paragraph. * **Visualization**: When applicable, TeachVR can present a visual representation of the concepts discussed. * **Unlimited Queries**: Feel free to ask anything! If it’s something your teacher can answer, TeachVR can too! ### Interactive and Dynamic: TeachVR operates just like a human. You can even interrupt the AI if you feel it’s not addressing your needs effectively! 
## How we built it **TeachXR: A Technological Innovation in Education** TeachXR is the culmination of advanced technologies, built on a microservice architecture. Each component focuses on delivering essential functionalities: ### 1. Gesture Detection and Image Cropping We have developed and fine-tuned a **hand gesture detection system** that reliably identifies gestures for cropping based on **MediaPipe gesture detection**. Additionally, we created a custom **bounding box cropping algorithm** to ensure that the desired paragraphs are accurately cropped by users for further Q&A. ### 2. OCR (Word Detection) Utilizing **Google AI OCR service**, we efficiently detect words within the cropped paragraphs, ensuring speed, accuracy, and stability. Given our priority on latency—especially when simulating interactions like pointing at a book—this approach aligns perfectly with our objectives. ### 3. Real-time Data Orchestration Our goal is to replicate the natural interaction between a student and a teacher as closely as possible. As mentioned, latency is critical. To facilitate the transfer of image and text data, as well as real-time streaming from the OCR service to the voiced assistant, we built a robust data flow system using the **SingleStore database**. Its powerful real-time data processing and lightning-fast queries enable us to achieve sub-1-second cropping and assistant understanding for prompt question-and-answer interactions. ### 4. Voiced Assistant To ensure a natural interaction between students and TeachXR, we leverage **Vapi**, a natural voice interaction orchestration service that enhances our feature development. By using **DeepGram** for transcription, **Google Gemini 1.5 flash model** as the AI “brain,” and **Cartesia** for a natural voice, we provide a unique and interactive experience with your virtual teacher—all within TeachXR. ## Challenges we ran into ### Challenges in Developing TeachXR Building the architecture to keep the user-cropped image in sync with the chat on the frontend posed a significant challenge. Due to the limitations of the **Meta Quest 3**, we had to run local gesture detection directly on the headset and stream the detected image to another microservice hosted in the cloud. This required us to carefully adjust the size and details of the images while deploying a hybrid model of microservices. Ultimately, we successfully navigated these challenges. Another difficulty was tuning our voiced assistant. The venue we were working in was quite loud, making background noise inevitable. We had to fine-tune several settings to ensure our assistant provided a smooth and natural interaction experience. ## Accomplishments that we're proud of ### Achievements We are proud to present a complete and functional MVP! The cropped image and all related processes occur in **under 1 second**, significantly enhancing the natural interaction between the student and **TeachVR**. ## What we learned ### Developing a Great AI Application We successfully transformed a solid idea into reality by utilizing the right tools and technologies. There are many excellent pre-built solutions available, such as **Vapi**, which has been invaluable in helping us implement a voice interface. It provides a user-friendly and intuitive experience, complete with numerous settings and plug-and-play options for transcription, models, and voice solutions. ## What's next for TeachXR We’re excited to think of the future of **TeachXR** holds even greater innovations! 
We’ll be considering **adaptive learning algorithms** that tailor content in real time based on each student’s progress and engagement. Additionally, we will work on integrating **multi-language support** to ensure that students from diverse backgrounds can benefit from personalized education. With these enhancements, TeachXR will not only bridge the teacher shortage gap but also empower every student to thrive, no matter where they are in the world!
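To make the gesture-to-OCR handoff more concrete, here is a simplified Python sketch of the cropping step: the fingertip trail recorded while the user circles a paragraph is turned into a bounding box, and the passthrough frame is cropped before being sent to the OCR service. The frame path and trail coordinates are illustrative.

```python
# Simplified "circle to crop" step: fingertip trail -> bounding box -> cropped image for OCR.
from PIL import Image

def crop_from_trail(frame_path: str, trail, pad: int = 12) -> Image.Image:
    xs, ys = zip(*trail)
    box = (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
    return Image.open(frame_path).crop(box)

trail = [(410, 220), (640, 215), (655, 330), (405, 340)]    # points sampled while circling
crop = crop_from_trail("passthrough_frame.jpg", trail)      # hypothetical captured frame
crop.save("question_region.jpg")                            # this crop goes to the OCR service
```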
## What it does Eloquent has two primary functions, both influenced by a connection between speaking and learning. The first is a public speaking coach, to help people practice their speeches. Users can import a speech or opt to ad-lib — the app will then listen to the user speak. When they finish, the app will present a variety of feedback: whether or not the user talked too fast, how many filler words they used, the informality of their language, etc. The user can take this feedback and continue to practice their speech, eventually perfecting it. The second is a study tool, inspired by the philosophy that teaching promotes learning. Users can import Quizlet flashcard sets — the app then uses those flashcards to prompt the user, asking them to explain a topic or idea from the set. The app listens to the user's response, and determines whether or not the answer was satisfactory. If it was, the user can move on to the next question; but if it wasn't, the app will ask clarifying questions, leading the user towards a more complete answer. ## How we built it The main technologies we used were Swift and Houndify. Swift, of course, was used to build our iOS app and code its logic. We used Houndify to transcribe the user's speech into text. We also took advantage of Houndify's "client matches" feature to improve accuracy when listening for keywords. Much of our NLP analysis was custom-built in Swift, without a library. One feature that we used a library for, though, was keyword extraction. For this, we used a library called Reductio, which implements the TextRank algorithm in Swift. Actually, we used a fork of Reductio, since we had to make some small changes to the build-tools version of the library to make it compatible with our app. Finally, we used a lightweight HTML parsing and searching library called Kanna to web-scrape Quizlet data. ## Challenges we ran into I (Charlie) found it quite difficult to work on an iOS app, since I do not have a Mac. Coding in Swift without a Mac proved to be a challenge, since many powerful Swift libraries and tools are exclusive to Apple systems. This issue was partially alleviated by the decision to do most of the NLP analysis from the ground up, without an NLP library — in some cases though, coding without the ability to debug on my own machine was unavoidable. We also had some difficulties with the Houndify API, but the #houndify Slack channel proved very useful. We ended up having to use some custom animations instead of Houndify's built-in one, but in the end, we solved all functionality issues.
winning
## Inspiration I wanted to do something for students. ## What it does A student can access all the books, PDFs, and mock tests for free, easily, and they don't need to sign up or log in on the page. This webpage is focused on Physics, Chemistry, and Mathematics, and it's a one-stop destination for students to grab basic to advanced-level knowledge and then test it through mock tests and practice papers. ## How I built it I built it using HTML. ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned Nothing is impossible if you have the right skill and an ambition. ## What's next I'll keep modifying it at regular intervals. I'll add some tasks, a question of the day, pictures, and explainer videos, and most importantly I want to keep it free of cost.
## Initial Idea There were two ideas at first. One was to construct a Natural Language Processing (NLP) algorithm to look at successfulness of treatments in Randomized Controlled Trials (RCTs), accessible via PubMed, and the other was to analyze properties of scientific literature published within a certain time period in one type of journal to visualize information such as type of study, subject characteristics, etc. However, these ideas were not the most compatible with NLP as the categorizations were often too general. ## Inspiration for Change Roadblocks tend to spur inspiration, and that was what happened here. One of our biggest challenges was linking the API to Python, we tried to use the IBM Natural Language Understanding API at first but after many failed attempts decided to switch over to the Google Natural Language API. But after we finally managed to get it to work, the categorization of journal article data fed into the algorithm was extremely broad. Thus we decided to adjust the project to a topic that was more suited for this NLP API. Seeing as how the impact of the recent coronavirus outbreaks are still highly salient for many people's everyday lives, we decided to look at the media attitudes towards the topic with the NLP algorithm. We adjusted the NLP algorithm to look for entity sentiment instead of classification. ## Learning is the Journey In this McHacks we definitely learned a lot in the process of creating this project. We came from various fields of study with varying levels of experience. For half of us it's our first Hackathon! But within these 24 hours we managed to learn how to extract information from a website, implement a NLP algorithm with a Google API, and analyze data generated by these algorithms. Overall we learned a lot from this experience and created an exciting and socially relevant algorithm for analyzing attitudes for a topic in media.
## Inspiration Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades. ## What it does Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own. An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling. Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate. ## How we built it * **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details. * **Frontend:** We used React to create the application and Socket.IO to connect it to the backend. * **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com. ## Challenges we ran into Understanding and planning an architecture for the application. We went back and forth about if we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the functionality was also an issue we faced. ## Accomplishments that we're proud of We successfully were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO and were successfully able to use it in our project. ## What we learned This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React. ## What's next for Lecturely This product can be useful even past digital schooling as it can save schools money as they would not have to purchase supplies. Thus it could benefit from building out more features. Currently, Lecturely doesn’t support audio but it would be on our roadmap. Thus, classes would still need to have another software also running to handle the audio communication.
losing
## Why MusicShift? When you listen to music, it belongs to you & your friends. We want to make sure you feel that way about every song. Switching aux cords, settling for lackluster playlists, or attempting to plan a playlist in advance doesn't let that happen. Through MusicShift, we make sure that the best playlist is also the most spontaneous. ## What is it? MusicShift is a plug-and-play, ever-evolving collaborative playlist in a box. Just plug in an aux cord, share a QR code with your friends, and let the best music start playing. MusicShift lets you collaborate on your playlists. You can add songs to your playlist, and even upvote songs that others have added so the more popular songs are played sooner. There is no limit to the songs you can search, and no limit to the number of people who can collaborate on a single playlist through real time multi-user sync. Playlists can have different purposes too. MusicShift is fun enough to be the music player during a carpool, and sophisticated enough to supply the music in public parks and restaurants. There's no need to worry about how your party's playlist fares when everyone is working together to pick the music. ## How it works MusicShift is made up of three parts: a hardware device, a progressive web app, and a database backend. The hardware device is a Raspberry Pi 2 which polls the backend (MongoDB database of tracks & votes used to generate rankings / play order) for the next Spotify song to play. Using Spotify’s Python bindings & taking advantage of its predictable caching locations, we intercept the downloaded streams and live route them to the aux output. Meanwhile, our progressive web app built using Polymer offers a live view into the playlist - what’s playing, what’s next, the ability to upvote/downvote songs to have them play sooner or later, and of course skip functionality (optional, configurable by the playlist creator). It loads instantly on users’ devices and presents itself as a like-native app (addable to the user lockscreen). ## What's next? Here's a look at the future of MusicShift: * User authentication, so you have complete control over your playlists * Playlist uploads through Spotify integration * Establish private and public streams for different settings and venues * NFC or Bluetooth beacon with MusicShift for easier connection
## Inspiration I've always wanted to learn to DANCE. But dance teachers cost money and I'm a bad dancer :( We made DanceBuddy so we could learn to dance :) ## What it does It breaks down poses from a dancing video into a step-by-step tutorial. Once you hit a pose, it will move on. We break down the movements for you into key joint movements using PoseNet. With a novel cost function developed using Umeyama's research, your dance moves are graded. ## How we built it Python, PoseNet, opencv ## Challenges we ran into The cost function was really hard to make. Initially it was too harsh and then it was too generous. ## Accomplishments that I'm proud of Making the best dance tutor in the world. ## What we learned Learned a lot about machine learning ## What's next for DanceBuddy Hopefully we can use it with a couple of our friends
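The Umeyama-style cost function mentioned above can be sketched as follows: align the learner's keypoints to the reference pose with the best-fit rotation, scale, and translation, then score the leftover distance. This is a generic NumPy illustration that assumes 2D keypoints from a pose estimator, not DanceBuddy's exact grading code:

```python
# Rough sketch of a pose-similarity cost in the spirit of Umeyama alignment.
# reference and attempt are (N, 2) arrays of matching keypoints.
import numpy as np

def umeyama_cost(reference, attempt):
    mu_r, mu_a = reference.mean(axis=0), attempt.mean(axis=0)
    r, a = reference - mu_r, attempt - mu_a
    cov = r.T @ a / len(reference)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(2)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[-1, -1] = -1  # keep a proper rotation (no reflection)
    R = U @ S @ Vt
    scale = np.trace(np.diag(D) @ S) / (a ** 2).sum() * len(reference)
    aligned = scale * a @ R.T + mu_r
    # Mean leftover distance after the best similarity alignment; lower is better.
    return np.linalg.norm(aligned - reference, axis=1).mean()
```

A threshold on this score would decide when a pose "counts" and the tutorial advances to the next step.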
Introducing Melo-N – where your favorite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music. Here's the deal: You pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light! How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed up, we deliver your custom version – ready for you to enjoy. Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like. So, get ready to rediscover your favourite tunes with Melo-N – where music meets innovation, and every listen is an adventure!
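The separate-then-recombine flow can be sketched roughly as below, assuming Spleeter's 2-stem model and pydub for the final mix; the MusicGen genre-transfer step is left as a placeholder, and all file names are illustrative:

```python
# Sketch of Melo-N's first and last steps: split stems, then remix vocals over a new backing track.
from spleeter.separator import Separator
from pydub import AudioSegment

# 1. Split the original song into vocals + accompaniment.
separator = Separator("spleeter:2stems")
separator.separate_to_file("original_song.mp3", "stems/")

# 2. (Genre transfer of the accompaniment via MusicGen happens here; output assumed as new_backing.wav.)

# 3. Lay the untouched vocals over the regenerated backing track.
vocals = AudioSegment.from_wav("stems/original_song/vocals.wav")
backing = AudioSegment.from_wav("new_backing.wav")
remix = backing.overlay(vocals)
remix.export("melo_n_remix.mp3", format="mp3")
```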
partial
## Inspiration A couple of weeks ago, a friend was hospitalized for taking Advil: she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. Apparently, when asked why, she responded that that's just what she had always done and how her parents had told her to take Advil. The maximum Advil you are supposed to take is 6 per day before it becomes a hazard to your stomach. #### PillAR is your personal augmented reality pill/medicine tracker. It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (e.g. no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take, keeping you safe from over- or under-dosing. We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30-day period and 11.9% take 5 or more. That is over 75 million U.S. citizens who could use PillAR to keep track of their numerous medicines. ## How we built it We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iPhone camera and pass it to the Google Vision API. From there we receive the name of the drug, which our app then forwards to a Python web-scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible API or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app. ## Accomplishments that we're proud of This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from, and then had to process that information to make it easier to understand. ## What's next for PillAR In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, perhaps by writing our own classifiers or training our own data set. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier.
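The web-scraping backend that turns a drug name into usage and administration text could look roughly like this; the reference-site URL and CSS selector are placeholders, since PillAR's actual source site isn't named here:

```python
# Illustrative sketch of the dosage-scraping step, assuming requests + BeautifulSoup.
import requests
from bs4 import BeautifulSoup

def fetch_dosage_info(drug_name: str) -> str:
    url = f"https://example-drug-reference.com/drugs/{drug_name.lower()}"  # placeholder URL
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    section = soup.select_one("#dosage-and-administration")  # placeholder selector
    return section.get_text(" ", strip=True) if section else "No dosage section found"

print(fetch_dosage_info("ibuprofen"))
```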
## Inspiration The opioid crisis is a widespread danger, affecting millions of Americans every year. In 2016 alone, 2.1 million people had an opioid use disorder, resulting in over 40,000 deaths. After researching what had been done to tackle this problem, we came upon many pill dispensers currently on the market. However, we failed to see how they addressed the core of the problem - most were simply reminder systems with no way to regulate the quantity of medication being taken, ineffective to prevent drug overdose. As for the secure solutions, they cost somewhere between $200 to $600, well out of most people’s price ranges. Thus, we set out to prototype our own secure, simple, affordable, and end-to-end pipeline to address this problem, developing a robust medication reminder and dispensing system that not only makes it easy to follow the doctor’s orders, but also difficult to disobey. ## What it does This product has three components: the web app, the mobile app, and the physical device. The web end is built for doctors to register patients, easily schedule dates and timing for their medications, and specify the medication name and dosage. Any changes the doctor makes are automatically synced with the patient’s mobile app. Through the app, patients can view their prescriptions and contact their doctor with the touch of one button, and they are instantly notified when they are due for prescriptions. Once they click on on an unlocked medication, the app communicates with LocPill to dispense the precise dosage. LocPill uses a system of gears and motors to do so, and it remains locked to prevent the patient from attempting to open the box to gain access to more medication than in the dosage; however, doctors and pharmacists will be able to open the box. ## How we built it The LocPill prototype was designed on Rhino and 3-D printed. Each of the gears in the system was laser cut, and the gears were connected to a servo that was controlled by an Adafruit Bluefruit BLE Arduino programmed in C. The web end was coded in HTML, CSS, Javascript, and PHP. The iOS app was coded in Swift using Xcode with mainly the UIKit framework with the help of the LBTA cocoa pod. Both front ends were supported using a Firebase backend database and email:password authentication. ## Challenges we ran into Nothing is gained without a challenge; many of the skills this project required were things we had little to no experience with. From the modeling in the RP lab to the back end communication between our website and app, everything was a new challenge with a lesson to be gained from. During the final hours of the last day, while assembling our final product, we mistakenly positioned a gear in the incorrect area. Unfortunately, by the time we realized this, the super glue holding the gear in place had dried. Hence began our 4am trip to Fresh Grocer Sunday morning to acquire acetone, an active ingredient in nail polish remover. Although we returned drenched and shivering after running back in shorts and flip-flops during a storm, the satisfaction we felt upon seeing our final project correctly assembled was unmatched. ## Accomplishments that we're proud of Our team is most proud of successfully creating and prototyping an object with the potential for positive social impact. Within a very short time, we accomplished much of our ambitious goal: to build a project that spanned 4 platforms over the course of two days: two front ends (mobile and web), a backend, and a physical mechanism. 
In terms of just codebase, the iOS app has over 2600 lines of code, and in total, we assembled around 5k lines of code. We completed and printed a prototype of our design and tested it with actual motors, confirming that our design’s specs were accurate as per the initial model. ## What we learned Working on LocPill at PennApps gave us a unique chance to learn by doing. Laser cutting, Solidworks Design, 3D printing, setting up Arduino/iOS bluetooth connections, Arduino coding, database matching between front ends: these are just the tip of the iceberg in terms of the skills we picked up during the last 36 hours by diving into challenges rather than relying on a textbook or being formally taught concepts. While the skills we picked up were extremely valuable, our ultimate takeaway from this project is the confidence that we could pave the path in front of us even if we couldn’t always see the light ahead. ## What's next for LocPill While we built a successful prototype during PennApps, we hope to formalize our design further before taking the idea to the Rothberg Catalyzer in October, where we plan to launch this product. During the first half of 2019, we plan to submit this product at more entrepreneurship competitions and reach out to healthcare organizations. During the second half of 2019, we plan to raise VC funding and acquire our first deals with healthcare providers. In short, this idea only begins at PennApps; it has a long future ahead of it.
## Inspiration As an avid gift giver, I wanted to share my expertise with others who might struggle with remembering or coming up with gift ideas. ## What it does giftie lets you organize any number of gift ideas for your friends and family. Whenever inspiration strikes, you can use giftie to store your ideas to come back to when that special someone's birthday comes up ;) Using the ideas you jot down, giftie also makes recommendations for gift ideas. It takes into consideration the gift ideas you have put down as well as the relationship you have with the recipient. ## How we built it Frontend: JavaScript, HTML, CSS Backend: Firebase (Database, Cloud Storage, Authentication), Flask, Python, Cohere ## What's next for giftie Additional features that could be integrated in the future include reminders based on a gift recipient's important dates (birthday, holidays, etc.). Additional gift idea details could also be added, for example improving the UI to include purchase links for each idea.
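A hedged sketch of how the Cohere-backed recommendation step might sit behind a Flask route; the request fields (`ideas`, `relationship`), the prompt, and the older `co.generate` endpoint are assumptions about giftie's setup rather than its real code:

```python
# Sketch of a gift-recommendation endpoint, assuming Cohere's classic text-generation API
# (newer Cohere SDK versions expose a chat-style interface instead).
import os
import cohere
from flask import Flask, request, jsonify

app = Flask(__name__)
co = cohere.Client(os.environ["COHERE_API_KEY"])

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.get_json()
    prompt = (
        f"My {data['relationship']} likes these gift ideas: {', '.join(data['ideas'])}. "
        "Suggest three more gift ideas, one per line."
    )
    response = co.generate(prompt=prompt, max_tokens=100)
    return jsonify({"suggestions": response.generations[0].text.strip().split("\n")})

if __name__ == "__main__":
    app.run(port=5001)
```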
winning
## City Bins Roamer: An AI multiplayer game for sustainable cities! ## What it does We're using Martello's geospatial data to make a Pac-Man-like game played across the board of Montreal's streets. The goal: if you play as 'garbage', try to escape from the Intelligent System of Bins! If you play as 'bins' (the audience), try to collect the garbage! The idea: the player (garbage) can navigate around the city of Montreal (Microsoft Bing Maps). There is one place on the map that is the player's goal. Try to reach it before the bins 'eat' you! :open\_mouth: But the garbage also has to avoid the place on the map that is the audience's goal! The goal of the audience is to prevent the player from reaching their goal: by placing bins, they try to push the player towards the audience's goal. ## How we built it + Implementation With Python, JavaScript, JSON, CSV, Bing Maps, and a lot of frustration. Because the bins' signals are sometimes weak and noisy, we use Martello's database to help with decision making: which provider should we trust more locally, and how much should we trust the signal versus our previous knowledge (somewhat similar to AI concepts like particle filters). Via the REST API we could retrieve information about the city's map structure, which is passed to the pygame framework. All algorithms (navigation, the AI's play style) are implemented from scratch. In short: Microsoft Bing Maps (+ REST API) + Python pygame + Flask + AI. ## Accomplishments that we're proud of Parsing the JSON file, being able to understand and analyze its data and map it to Bing Maps, and a first touch with JS and Flask. Best multiplayer-with-AI game ever! ## What's next for City Bins Roamer Combine everything: * combine the ability to make decisions (about the signal) with the navigation algorithms * combine Bing Maps with pygame (style, retrieve data from the map to get the streets' layout, etc.) * combine, via Flask, data from Martello with Bing Maps so that they can contain information about signal strength
## Inspiration A frustrating and intimidating banking experience leads to a loss of customers, and we wanted to change that by making banking fun and entertaining. In particular, senior citizens find it harder to navigate online bank profiles and know their financial status. We decided to build an Android app that lets you completely control your bank profile using either your voice or the chat feature. Easily integrate our app into your Slack account and chat seamlessly. ## What it does Vocalz allows you to control your online bank profile easily using either chat or voice features. Easily do all basic banking tasks like sending money, ordering a credit card, checking balances, and much more using just a few voice or text commands. Unlike our competitors, we give our customers a personalized chat experience. In addition, Vocalz recommends products from the bank they use according to their financial status and determines eligibility for loans. The future of banking is digital, and we strive to make the world better and more convenient. Slack integration makes it convenient for working professionals to access bank data within Slack itself. Join the workspace and use @ to call our Vocalz app. Experience the next generation of banking directly from your Slack account. <https://join.slack.com/t/vocalzzz/shared_invite/enQtOTE0NTI3ODg2NjMxLTdmMWVjODc1YWMwNWQ0ZjI2MDJkODAyYzI2YTZiMmEzYjA3NmExYzZlNjM5Yzg0NGVjY2VlYjE5OGJhNGFmZTM> Current features: * Know your balance * Pay bills * Get customized product information from respective banks * Order credit cards/financial products * Open bank accounts * View transaction history You can use either voice or chat features depending on your privacy needs. ## How we built it We used the Plaid API to get financial data from any bank in the world and integrated it within our Android app. After logging in securely using your bank credentials, Vocalz automatically customizes your voice-enabled and chat features according to the data provided by the bank. In our real product, we trained the IBM Watson chatbot on hundreds of banking terms and used Dialogflow to create a seamless conversational experience for customers. IBM Watson uses machine learning to understand the customer's needs and then responds accordingly, regardless of spelling or grammar errors. For voice-enabled chat, we will use Google's Speech-to-Text API, which sends the information to IBM Watson, and Google's Text-to-Speech API will return the response as audio. The app will be deployed on Google Cloud because of its strong security features. For demo purposes, and given time constraints, we used Voiceflow to demonstrate how our voice-enabled features work. ## Challenges we ran into Getting to know and learn the IBM Watson environment was very challenging for us, as we didn't have much experience with machine learning or Dialogflow. We also needed to find and research the different APIs required for our project. Training IBM Watson with specific and accurate words was very time-consuming, and we are proud of its present personalized features. ## Accomplishments that we're proud of We ran into several challenges and made sure we stayed on the right path. We wanted to make a difference in the world, and we believe we did it. ## What we learned We learned how to make custom chatbots and deliver a customized experience based on the app's needs. We learned different skills related to APIs, Android Studio, and machine learning within 36 hours of hacking.
## What's next for Vocalz RBC * Further training of our chatbot with more words, making the app useful in different situations * Notifications for banking-related deadlines and transactions * Create a personalized budget * Compare different financial products and give proper suggestions and recommendations * Integrate a VR/AR customer service experience
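For illustration, the "know my balance" path through the Plaid API might look like the hedged sketch below; it assumes the legacy plaid-python client (constructor arguments and method names differ in newer SDK versions), and the reply phrasing is made up:

```python
# Minimal sketch of the balance lookup behind a "know my balance" command.
# Assumes the legacy plaid-python client; newer SDK versions use a request-object API instead.
from plaid import Client

client = Client(client_id="PLAID_CLIENT_ID", secret="PLAID_SECRET", environment="sandbox")

def balance_reply(access_token: str) -> str:
    response = client.Accounts.balance.get(access_token)
    lines = [
        f"{acct['name']}: ${acct['balances']['current']:.2f}"
        for acct in response["accounts"]
    ]
    # This string would be handed to the chat UI or the text-to-speech step.
    return "Here are your balances. " + " ".join(lines)
```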
# Catch! (Around the World) ## Our Inspiration Catch has to be one of our favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the entire world, we thought it'd be nice to play catch with those relatives that we haven't been able to see due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic, so who says we can't play a little game while social distancing? ## What it does Our application uses AR and Unity to allow you to play catch with another person from somewhere else on the globe! You can tap a button to throw a ball (or a random object) off into space, and then the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chatting application so they can have some commentary going while they are playing catch. ## How we built it For the AR functionality of the application, we used **Unity** with **AR Foundation** and **ARKit/ARCore**. To record the user sending the ball/object to another user, we used a **Firebase Realtime Database** back end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate the different 3D objects that users can choose to throw. Furthermore, we developed the chat application using **Python Flask**, **HTML** and **Socket.io** in order to create bi-directional communication between the web user and the server. ## Challenges we ran into Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal was far too complex and too difficult to complete in the given time frame. As such, our biggest challenge was figuring out a project that was doable within the time of this hackathon. This ties into another challenge we ran into: initially creating the application and the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve. There was also some difficulty learning how to use the EchoAR API with Unity, since it had a specific method of generating the AR objects. However, we were able to use the tool without investigating too far into the code. ## Accomplishments * Working Unity application with AR * Use of EchoAR and integrating it with our application * Learning how to use Firebase * Creating a working chat application between multiple users
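The "ball thrown" hand-off through the Firebase Realtime Database can be illustrated with the hedged Python sketch below (the game itself listens from Unity; this uses the firebase_admin SDK purely for illustration, and the database URL, credential path, and node names are placeholders):

```python
# Sketch of watching for "throw" events on a shared session node in the Realtime Database.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # placeholder credential file
firebase_admin.initialize_app(cred, {"databaseURL": "https://catch-demo.firebaseio.com"})

def on_throw(event):
    # event.data holds whatever the thrower wrote: object id, velocity, sender, etc.
    if event.data:
        print(f"Ball thrown at {event.path}: {event.data}")

# listen() runs the listener in a background thread and returns a registration handle.
registration = db.reference("sessions/demo-session/throws").listen(on_throw)
```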
winning
## Inspiration This year is our last year at Hack the North. Ever since 2020++, we have competed as hackers, making all kinds of projects from NFT scanners to mind-controlled cars. This year, we wanted to celebrate the unsung heroes. The ones left forgotten even though they're the backbone of society itself. We've all watched the movie WALL-E, but how many of us remember M-O, the robot who sacrificed everything for his beliefs and morals? Who is praising him for staying steadfast in his pursuit of cleanliness? We present **DEREK**, the Dynamic Efficient Robotic Expert for Kleaning, the lone robot causing the downfall of the Roomba itself. ## What it does DEREK successfully identifies contaminants and dirt particles and shows them that they are no match for a robot of its calibre. It is a fully autonomous system that effectively removes contaminants from your home (or spaceship) and keeps you and your family safe. But you don't want a boring robot, do you? DEREK adds to the overall cleaning experience with the utmost sass. Whether it be snarky comments or lively facial expressions, DEREK has it all. ## How we built it DEREK is a Raspberry Pi 4 at heart. Supporting that board and the peripherals is Viam (a tool we learned about at our last HTN!). This made startup fast but also introduced complexities into our project. On the RPi, we have Python running our emotion engine, sassy response generator, and all our control elements. We use Cohere to create snarky, sassy, and at times angry responses to seeing so much dirt to clean up. The body is a multicoloured mix of 3D prints, laser-cut parts, and last-minute duct tape patches. Animated emotions are displayed on an iPad and change to fit the spoken word. ## Cohere Cohere was awesome! We were able to run sentiment analysis and generation with their generative text APIs and then match that with emotions and animations we created to fit the scene. This helped us create a more "human"-like robot and give DEREK some personality! ## Challenges we ran into We seemed to run into trouble wherever we went. From the simplest of problems to much more complex ones, we probably had every problem possible during our hack this year. We had power issues, motor shearing problems, endless code bugs, and mechanical fit issues. But each time we did, we were able to solve the problem or pivot to a different solution. ## Accomplishments that we're proud of We're proud of keeping up the fun spirit of everything and pushing through all the challenges we ran into. We are especially proud of each other for the past years of school, friendship, and competitions. As it's our last year at UWaterloo, we wanted to make something fun and challenging, but more importantly, something we could do together. ## What we learned 1. Test components early! Save yourself a lot of time by doing research and figuring out what will work, and whether your components are even usable! 2. Have lots of fun while hacking. We picked a fun project so we could make others and ourselves laugh. 3. Sponsors are reaaaaally cool. We kept going up to the sponsor bay to ask sponsors for technical help, and the way that they instantly knew what to do was amazing. ## What's next for DEREK * Backflips (for sure) * Self-balancing
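The Cohere-powered sass generator might look roughly like this hedged sketch; it assumes the classic `co.generate` endpoint, and the prompt wording and the "dirt detected" trigger are illustrative rather than DEREK's exact code:

```python
# Rough sketch of the sassy-response step using Cohere's classic generate endpoint
# (newer Cohere SDK versions expose a chat-style interface instead).
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def sassy_remark(observation: str) -> str:
    prompt = (
        "You are DEREK, a sarcastic cleaning robot. In one short sentence, react to: "
        f"{observation}"
    )
    response = co.generate(prompt=prompt, max_tokens=40, temperature=0.9)
    return response.generations[0].text.strip()

print(sassy_remark("a pile of crumbs under the desk, again"))
```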
## Inspiration To improve every day productivity, to provide a source of reliable and immediate information, and to be readily available for all walks of life. This app is targeted to help the people who don't have reliable access to educational texts or resources, to the people who want to add more productivity into their lives, and to flex the current abilities of Artificial Intelligence. ## What it does The app takes voice input from the user, processes it through various APIs and AIs to create relevant output and speaks back to the user with useful information, and potentially code that the user asks for. It can also control an arduino and adjust smart home devices. ## How we built it We built this application by utilizing the built in voice recognition and text to speech software in android using react-native, and integrating it with data scraped from chatgpt to enable a complete AI voice assistant experience. Additionally, this AI also has custom commands that allow it to send and receive messages from various IOT devices such as connected laptops or lights, and also provide the user with up-to-date live information such as weather. ## Challenges we ran into The biggest challenge we faced when creating this application was the lack of an official chatgpt api, as a result we had to use various workarounds and web scrapping using puppeteer to successfully send and receive data from chatgpt. ## Accomplishments that we're proud of * We were able to complete all the milestones we set for the project within the hackathon time limit. * We were able to extensively test it various realistic scenarios successfully. These scenarios include asking information about a specific topic, asking for code for a specific problem(eg. Fizz Buzz) and copy pasting the code to any device connected to the network, interfacing with home devices such as turning on/off lights. * We were also able to make intuitive interface so that future developers can easily integrate their applications into our system ## What we learned We learned about the value of a well-organized plan to tackle various different parts of the code, and the communication and teamwork required to co-develop a project of this scale. ## What's next for VANCE - IOT Integrated AI Assistant 1. NLP to generalize custom commands rather than having hard coded phrases 2. We want to be able to support more complex scenarios, reduce latency for responses from the bot. 3. Add in ability to receive code from connected devices 4. Improve the UI/UX design to be more familiar and easily usable.
## Inspiration Inspired by the personal experience of often getting separated in groups and knowing how inconvenient and sometimes dangerous that can be, we aimed to create an application that keeps people together. We were inspired by how interlinked and connected we are today through our devices and sought to address social issues using advancements in decentralized compute and communication. We also wanted to build a user experience that is unique and can be built upon with further iterations and implementations. ## What it does Huddle employs mesh networking to maintain a decentralized network among a small group of people, but it can be scaled to many users. By having a mesh network of mobile devices, Huddle manages the proximity of its users. When a user is disconnected, Huddle notifies all of the devices on its network, thereby raising awareness should someone lose their way. The best use case for Huddle is in remote areas where cell-phone signals are unreliable and managing a group can be cumbersome. In a hiking scenario, should an unlucky hiker choose the wrong path or be left behind, Huddle will reduce risks and keep the team together. ## How we built it Huddle is an Android app built with the RightMesh API. With many cups of coffee, teamwork, brainstorming, help from mentors, team-building exercises, and hours in front of a screen, we produced our first Android app. ## Challenges we ran into Like most hackathons, our first challenge was deciding on an idea to proceed with. We employed various collaborative and brainstorming techniques, approached various mentors for their input, and eventually decided on this scalable idea. As mentioned, none of us had developed for Android before, so we had a large learning curve in getting our environment set up, developing small applications, and eventually building the app you see today. ## Accomplishments that we're proud of One of our goals was to be able to develop a completed product at the end. Nothing feels better than writing this paragraph after nearly 24 hours of non-stop hacking. Once again, developing a rather complete Android app without any prior developer experience was a monumental achievement for us. Learning and stumbling as we went in a hackathon was a unique experience, and we are really happy we attended this event, no matter how sleepy this post may seem. ## What we learned One of the things we gained through this process was experience organizing and running a rather tightly-knit development cycle. We gained many skills in user experience, learned how the Android environment works, and learned how to make ourselves and our product adaptable to change. Many design changes occurred, and it was great to see that the changes were still what we wanted to develop. Aside from the desk experience, we also saw many ideas from other people and different ways of tackling similar problems, and we hope to build upon these ideas in the future. ## What's next for Huddle We would like to build upon Huddle and explore different ways of using mesh networking technology to bring people together in meaningful ways, such as social games, getting to know new people nearby, and facilitating unique ways of tackling old problems without centralized internet and compute. Also V2.
losing
## **What is Cointree?** Cointree is a platform where users get paid to go green. Because living more sustainably shouldn't be more expensive. In fact, we should be rewarded for living sustainably – and that's exactly what Cointree does. Cointree connects companies looking to offset carbon emissions with users looking to live a more sustainable life. ## **How does Cointree accomplish this?** More and more companies want to become carbon neutral. Carbon offsets are a means for companies to become carbon neutral even if they still have to emit carbon dioxide in the air – by paying a third party to remove or not emit carbon dioxide by means such as reducing driving pollution, cutting down less trees, or building wind farms. But as these third parties have nearly quadrupled in size in just the past two years, debates have arised about the effectiveness and value which these carbon offsetting companies really provide. Cointree takes a drastically different approach, instead connecting individual people to these companies who are willing to pay carbon offsets. Cointree accomplishes this by having two different clients: an iOS app, and a web client. The web client is for the companies paying carbon offsets, who can sign in, deposit currency, and view the progress on their carbon offset goals. In the process, we take a small cut out of the companie's deposit. Meanwhile users install our Cointree iOS app. There they can announce that they, say, installed solar panels, or bought an electric vehicle, or even planted a tree. Then they demonstrate proof of completion (by scanning an invoice for instance), and they get paid. Simple as that. You might be wondering, how exactly do we connect the two, and more importantly how do we store data in a safe, efficient, and accountable system? The answer, ***blockchain***. ## **What is unique about Cointree?** At Cointree, all of our data is on the blockchain. And to us, that’s really important. We want the radical transparency that blockchain offers – it means that anyone can see what carbon offsets companies are paying, and keep them accountable. Indeed, the web client also acts as a log where anyone can see all the carbon offsets that a certain company bought. Real transparency. We use Polygon's MATIC currency and Ethereum platform in order to develop a system where companies deposit MATIC into a smart contract that functions almost like a vault. When users demonstrate proof of completion of a certain task, we send money to their wallet (as a function of how much CO2 they removed / won't put into the atmosphere thanks to their task). Thanks to the speed and security of Polygon, we offer a really great experience here. Check out our video for a deep-dive into how Cointree works on the blockchain. There's some pretty novel stuff in there (also check out our attached slides). ## Challenges we ran into The biggest challenge was interfacing with the blockchain from a native iOS app. It's nearly impossible – blockchain is almost exclusively made for the web. But we didn't want to ditch using an iOS app though, since we wanted the smoothest possible experience for the end user. So instead we had to come up with clever work arounds to offload any interfacing done with the blockchain to our express.js backend. ## Accomplishments that we're proud of We're really proud of the range of things we were able to make – from an iOS client to a web client, from smart contracts to REST APIs. 
All of our past experience as developers across our whole (short) lives came into use here. ## Want to view the source code? [Cointree iOS App](https://github.com/nikitamounier/Cointree-iOS) [Cointree Smart Contracts & REST API](https://github.com/sidereior/cointree-smartcontract) [Cointree web client](https://github.com/jmurphy5613/cointree-web) [Cointree backend](https://github.com/jmurphy5613/cointree-backend) ## What's next for Cointree Expanding to new sustainable projects (planting of trees and growth of them, using public transport, etc.), third party company verification of invoices & receipts (these companies will check with their own databases to verify that invoices are not fraudulent), providing uses for sustainable companies or retailers to benefit (companies that sell products which we offer payment for--for example electric cars--can give a percent discount and can better reach their market segment), improvement of security with Vault Smart Contract and communication between Vault Smart Contract and IOS app, rework of NFT minting process and rather than minting NFT's which are expensive we can have a Parent Smart Contract and make children smart contracts for each company and use data in these to verify proofs of transactions without the cost.
## Inspiration We are all fans of Carrot, the app that rewards you for walking around. Carrot's gamification and point system really contributed to its success. Carrot targeted the sedentary and unhealthy lives we were leading and tried to fix that. So why can't we fix our habit of polluting and growing our greenhouse gas footprint using the same method? That's where Karbon comes in! ## What it does Karbon gamifies reducing your daily CO₂ emissions. The more you reduce your carbon footprint, the more points you can earn. Users can then redeem these points at eco-friendly partners to either get discounts or buy items completely for free. ## How we built it The app is created using Swift and SwiftUI for the user interface. We also used HTML, CSS, and JavaScript to make a web app that shows the same information. ## Challenges we ran into Initially, when coming up with the idea and the economy of the app, we had difficulty modelling how points would be distributed by activity. Additionally, coming up with methods to track CO₂ emissions conveniently became an integral challenge to ensure a clean and effective user interface. As for technicalities, cleaning up the UI was a big issue, and a lot of our time went into creating the app as we did not have much experience with the language. ## Accomplishments that we're proud of * Displaying the data using graphs * Implementing animated graphs ## What we learned * Using animation in Swift * Making Swift apps * Making dynamic lists * Debugging unexpected bugs ## What's next for Karbon A fully functional web app along with proper back-and-forth integration with the app.
## Introduction Introducing **NFTree**, the innovative new platform that allows users to take control of their carbon footprint. On NFTree, you can purchase NFTs of a piece of a forest. Each NFT represents a real piece of land that will be preserved and protected, offsetting your carbon emissions. Not only are you making a positive impact on the environment, but you also get to own a piece of nature and leave a lasting legacy. ## Inspiration We have always been passionate about environmental sustainability. We've seen the effects of climate change on the planet and knew we wanted to make a difference. We found that corporations attempts at achieving "carbon neutrality" by offsetting there output by purchasing planted trees from third party companies frustrating. What happens to those trees? What if they are cut down? What if we could use the blockchain to give people the opportunity to own a piece of protected land, and in doing so, offset their carbon emissions? We hope that NFTree can not only make a positive impact on the environment, but also provide a unique and meaningful way for people to connect with nature and leave a lasting legacy. ## What it does NFTree utilizes the blockchain and non fungible tokens to give people the opportunity to own a piece of a protected forest and offset their carbon emissions. The process starts with the opportunity for individuals and corporations to purchase and protect land through government agencies across the world. After this, the purchaser can sell off parts of the land, offering a permanently protected piece of Forrest. When a user wants to buy a piece of a forest, they can browse through the marketplace of available forest lots. The marketplace is filterable by forest grade, with grade A being the highest quality and F being the worst. The user can choose the forest lot that they want to purchase and use the cryptocurrency HBAR to make the transaction. Once the transaction is complete, the user officially owns the NFT representing that piece of land. They can view and manage their ownership on the website, and can also see the specific location and coordinates of their forest lot on a map. In addition to buying a piece of a forest, users can also sell their NFTs on the marketplace. They can set their own price in HBAR and put their forest lot up for sale. Other users can then purchase the NFT from them, becoming the new owner of that piece of land. The NFTs on NFTree are unique, scarce, and verifiable, and their ownership is recorded on the blockchain, providing transparency and security for all transactions. The ownership of the forest land is also recorded on the blockchain, and all the transaction fees are used to protect the land and preserve it for the future. The team behind NFTree is committed to making a positive impact on the environment and connecting people with nature. NFTree offers a new way to offset carbon emissions and leave a lasting legacy, while also providing a unique investment opportunity. ## Market Trends The market trends that will help NFTree succeed are multifaceted and include the growing interest in NFTs, the increasing awareness and concern about climate change, and the desire for unique and meaningful investments. First and foremost, the NFT market is rapidly growing and gaining mainstream attention. This is driven by the increasing adoption of blockchain technology, which allows for the creation of unique digital assets that can be bought and sold like physical assets. 
NFTs have already been successful in the art, music, and gaming industries, and now, it's time for the environmental and sustainable sector to benefit from it. Secondly, the issue of climate change is becoming more pressing and is top of mind for many individuals and organizations. People are looking for ways to make a positive impact on the environment and are increasingly considering investments that align with their values. NFTree offers an opportunity to do just that, by allowing individuals to own a piece of a forest, which not only helps to combat climate change by supporting reforestation efforts, but also, it becomes a carbon decreasing asset. Lastly, people are looking for unique and meaningful investments that go beyond traditional stocks and bonds. NFTree offers a unique investment opportunity that not only has the potential for financial gain, but also has a tangible and emotional connection to nature. As people become more interested in sustainable and environmentally friendly products, NFTree stands to benefit from this trend as well. In summary, NFTree is well-positioned to succeed in the current market due to the growing interest in NFTs, the increasing awareness and concern about climate change, and the desire for unique and meaningful investments. NFTree is a one-of-a-kind opportunity to own a piece of nature and make a positive impact on the environment while also getting a financial return. ## Technical Aspects We wrote our backend server in Kotlin using Ktor as our rest framework and Ebeans ORM with a Postgresql database. We used Hedera, a open source public ledger to build the NFT aspect, facilitating transfers and minting. On the frontend, we used React as well as Firebase for user authentication. Functionality includes creating, viewing, agreeing to transfer and buying the NFTs. At registration, at 12 part mnemonic passphrase is provided to the user and needs to be remembered as it is required for any transfers. The currency used for transfers is HBar, the native currency used by the Hedera chain. ## Challenges we ran into We have faced a number of challenges while building our platform. One of the biggest challenges we faced was figuring out how to properly use Hedera, the blockchain technology we chose to use for our platform. It was a new technology for us and we had to spend a lot of time learning how it worked and how to properly implement it into our platform. We also encountered challenges in terms of interoperability and scalability, as we needed to ensure that our platform could easily integrate with other systems and handle a large volume of transactions. ## What we learned We have learned a great deal throughout the process of building our platform. One of the most important things we learned is the importance of flexibility and adaptability. The world of blockchain technology and NFTs is constantly changing and evolving, and we had to be willing to adapt and pivot as needed in order to stay ahead of the curve. We also learned the importance of user experience and customer satisfaction. We had to put ourselves in the shoes of our customers, understand their needs and wants, and build the platform in a way that caters to them. We had to make sure that the platform was easy to use, reliable, and secure for all of our customers. Finally, we learned about the power of blockchain technology and how it can be used to create a more sustainable future. 
We were inspired by the potential of NFTs to transform the way we own and invest in natural resources, and we believe that NFTree can play a key role in making this happen. Overall, building NFTree has been a valuable learning experience for us, and we are excited to continue working on the platform and to see where it will take us in the future. ## What's next for NFTree We are excited to see the success of our platform and the positive impact it has had on the environment. In the future, we plan to expand the types of land that can be represented by NFTs on our platform. We also plan to work with more organizations that are involved in land conservation and reforestation, to increase the impact of NFTree. Additionally, we want to explore new use cases for NFTs, such as creating virtual reality experiences that allow users to explore and interact with their forest lots in a more immersive way. We are dedicated to making NFTree the go-to platform for environmental conservation and sustainable investing.
winning
## Inspiration The inspiration behind our innovative personal desk assistant was ignited by the fond memories of Furbys, those enchanting electronic companions that captivated children's hearts in the 2000s. These delightful toys, resembling a charming blend of an owl and a hamster, held an irresistible appeal, becoming the coveted must-haves for countless celebrations, such as Christmas or birthdays. The moment we learned that the theme centered around nostalgia, our minds instinctively gravitated toward the cherished toys of our youth, and Furbys became the perfect representation of that cherished era. Why Furbys? Beyond their undeniable cuteness, these interactive marvels served as more than just toys; they were companions, each one embodying the essence of a cherished childhood friend. Thinking back to those special childhood moments, the idea for our personal desk assistant was sparked. Imagine it as a trip down memory lane to the days of playful joy and the magic of having an imaginary friend. It reflects the real bonds many of us formed during our younger years. Our goal is to bring the spirit of those adored Furbies into a modern, interactive personal assistant—a treasured piece from the past redesigned for today, capturing the memories that shaped our childhoods. ## What it does Our project is more than just a nostalgic memory; it's a practical and interactive personal assistant designed to enhance daily life. Using facial recognition, the assistant detects the user's emotions and plays mood-appropriate songs, drawing from a range of childhood favorites, such as tunes from the renowned Kidz Bop musical group. With speech-to-text and text-to-speech capabilities, communication is seamless. The Furby-like body of the assistant dynamically moves to follow the user's face, creating an engaging and responsive interaction. Adding a touch of realism, the assistant engages in conversation and tells jokes to bring moments of joy. The integration of a dashboard website with the Furby enhances accessibility and control. Utilizing a chatbot that can efficiently handle tasks, ensuring a streamlined and personalized experience. Moreover, incorporating home security features adds an extra layer of practicality, making our personal desk assistant a comprehensive and essential addition to modern living. ## How we built it Following extensive planning to outline the implementation of Furby's functions, our team seamlessly transitioned into the execution phase. The incorporation of Cohere's AI platform facilitated the development of a chatbot for our dashboard, enhancing user interaction. To infuse a playful element, ChatGBT was employed for animated jokes and interactive conversations, creating a lighthearted and toy-like atmosphere. Enabling the program to play music based on user emotions necessitated the integration of the Spotify API. Google's speech-to-text was chosen for its cost-effectiveness and exceptional accuracy, ensuring precise results when capturing user input. Given the project's hardware nature, various physical components such as microcontrollers, servos, cameras, speakers, and an Arduino were strategically employed. These elements served to make the Furby more lifelike and interactive, contributing to an enhanced and smoother user experience. The meticulous planning and thoughtful execution resulted in a program that seamlessly integrates diverse functionalities for an engaging and cohesive outcome. 
## Challenges we ran into During the development of our project, we encountered several challenges that required demanding problem-solving skills. A significant hurdle was establishing a seamless connection between the hardware and software components, ensuring the smooth integration of various functionalities for the intended outcome. This demanded a careful balance to guarantee that each feature worked harmoniously with others. Additionally, the creation of a website to display the Furby dashboard brought its own set of challenges, as we strived to ensure it not only functioned flawlessly but also adhered to the desired aesthetic. Overcoming these obstacles required a combination of technical expertise, attention to detail, and a commitment to delivering a cohesive and visually appealing user experience. ## Accomplishments that we're proud of While embarking on numerous software projects, both in an academic setting and during our personal endeavors, we've consistently taken pride in various aspects of our work. However, the development of our personal assistant stands out as a transformative experience, pushing us to explore new techniques and skills. Venturing into unfamiliar territory, we successfully integrated Spotify to play songs based on facial expressions and working with various hardware components. The initial challenges posed by these tasks required substantial time for debugging and strategic thinking. Yet, after investing dedicated hours in problem-solving, we successfully incorporated these functionalities for Furby. The journey from initial unfamiliarity to practical application not only left us with a profound sense of accomplishment but also significantly elevated the quality of our final product. ## What we learned Among the many lessons learned, machine learning stood out prominently as it was still a relatively new concept for us! ## What's next for FurMe The future goals for FurMe include seamless integration with Google Calendar for efficient schedule management, a comprehensive daily overview feature, and productivity tools such as phone detection and a Pomodoro timer to assist users in maximizing their focus and workflow.
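The facial-emotion-to-Spotify step could be sketched as below with spotipy; the mood-to-search-query mapping is an assumption, not FurMe's actual playlist logic, and playback requires an active Spotify device and a premium account:

```python
# Hedged sketch of playing a mood-appropriate track via the Spotify Web API (spotipy).
# Assumes SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET / SPOTIPY_REDIRECT_URI are set in the environment.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))

MOOD_QUERIES = {
    "happy": "Kidz Bop happy",
    "sad": "comfort acoustic",
    "angry": "calm piano",
}

def play_for_mood(mood: str) -> None:
    results = sp.search(q=MOOD_QUERIES.get(mood, "lofi focus"), type="track", limit=1)
    tracks = results["tracks"]["items"]
    if tracks:
        sp.start_playback(uris=[tracks[0]["uri"]])  # needs an active Spotify device

play_for_mood("happy")
```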
## Inspiration 2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find it- But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today. ## What it does A camera records you as you go about your day to day life, storing "comic book strip" panels containing images and context of what you're doing as you go about your life. When you want to remember something you can ask out loud, and it'll use Open AI's API to search through its "memories" to bring up the location, time, and your action when you lost it. This can help with knowing where you placed your keys, if you locked your door/garage, and other day to day life. ## How we built it The React-based UI interface records using your webcam, screenshotting every second, and stopping at the 9 second mark before creating a 3x3 comic image. This was done because having static images would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, it sends this to OpenAI's turbo vision model, which then gives contextualized info about the image. This info is then posted sent to our Express.JS service hosted on Vercel, which in turn parses this data and sends it to Cloud Firestore (stored in a Firebase database). To re-access this data, the browser's built in speech recognition is utilized by us along with the SpeechSynthesis API in order to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by Open AI, which then classifies it as either a search for an action, or an object find. It then searches through the database and speaks out loud, giving information with a naturalized response. ## Challenges we ran into We originally planned on using a VR headset, webcam, NEST camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of MacOS's continuity feature, using our iPhone camera connected to our macbook as our primary input. ## Accomplishments that we're proud of As a two person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.JS and Firebase, so we're proud of how fast we were able to learn! ## What we learned We learned about OpenAI's turbo vision API capabilities, how to work together as a team, how to sleep effectively on a couch and with very little sleep. ## What's next for ReCall: Memories done for you! We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue for the people that they've met in the past to help them as they recover. We didn't have too much context on these health problems however, and limited scope, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
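Sending one 3x3 comic-strip frame to OpenAI's vision-capable chat API could look roughly like this (openai >= 1.0 Python SDK); the model name and prompt are assumptions, and ReCall's exact wording may differ:

```python
# Illustrative sketch: describe a comic-strip panel image so the description can be stored as a "memory".
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_panel(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe what the person is doing and any objects they handle, with location clues."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(describe_panel("comic_strip.jpg"))
```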
## Inspiration The 2016 presidential election was marred by the apparent involvement of Russian hackers who built bots that spewed charged political discourse and made it their goal to create havoc on internet forums. Hacking of this kind will never be completely stopped, but if you can't beat them... why not join them? The Russian IRA shouldn't be the only one making politically motivated bots! This bot informs voters of their congresspeople's contact information so they can call their representative to lobby for issues they are passionate about. ## What it does Contactyourrep is a Reddit bot that uses sentiment analysis and location information to comment congresspeople's contact information. If a comment contains a keyword (e.g. a city or a congressperson's name) and it registers a low sentiment score, the bot will comment the contact information for the relevant U.S. Representative's or Senator's constituent hotline. Try it out on r/contactyourrep ## How I built it The bot was created using the PRAW API, which allowed an easier connection with Reddit's interface. I then integrated Phone2Action's legislator lookup API, which can provide location-specific legislator information based on an address. u/contactyourrep is triggered by the presence of a state or city keyword and politically charged words in the comment's body. ## Challenges I ran into Finding specific keywords is not as easy as you would think. Also, on one occasion the bot ran a little wild during testing, causing the PRAW API to disallow posting for periods of time. This meant it was harder to debug, and I often had to edit large blocks of code at once. ## Accomplishments that I'm proud of This is my first Reddit bot! It was a very fun project, and now that I've been through the whole process I'm excited about the possibility of additional projects in various Reddit communities. ## What I learned It is surprisingly easy to make a bot that can post relevant content on social media. This has powerful implications for the integrity of our political discourse online. If I can make a bot that can inform angry voters in under 36 hours, imagine what a country's dedicated hacking force can accomplish (I'm looking at you, Russia). Though I'm hoping to use this bot to inform people of contact information regardless of political creed, it would be simple to reconfigure it to only comment based on pro-Democrat/Republican content. ## What's next for Contact Your Rep After demoing at TreeHacks, I plan to keep u/contactyourrep active on r/politics, hopefully increasing political involvement.
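A condensed sketch of the bot's comment loop with PRAW is below; the keyword set and reply template are illustrative, the sentiment-analysis step is omitted, and `lookup_hotline` stands in for the Phone2Action call:

```python
# Sketch of a keyword-triggered Reddit reply bot using PRAW.
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID", client_secret="CLIENT_SECRET",
    username="contactyourrep", password="PASSWORD",
    user_agent="contactyourrep bot",
)

TRIGGERS = {"california", "texas", "senator", "representative"}  # illustrative keywords

def lookup_hotline(text: str) -> str:
    # Placeholder for the Phone2Action legislator lookup call.
    return "Sen. Example: (202) 555-0100"

for comment in reddit.subreddit("contactyourrep").stream.comments(skip_existing=True):
    words = set(comment.body.lower().split())
    if words & TRIGGERS:
        comment.reply(f"Want to be heard? Call your representative: {lookup_hotline(comment.body)}")
```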
winning
## Inspiration Ever have those household chores you don't like, or just don't know how to do? So do we! ("Yes, mom, I'll take out the trash when I'm home) So we created this app for you to post jobs locally and allow people to browse through current job postings. Just include your contact information, and you can have someone watering your begonias to make some extra cash in no time! ## What it does Yes, mom! allows you to post a job on the android app (with a small description, estimated amount of time, compensation, and your contact information). We provide a simple, scrolling interface to browse other postings in your city. By selecting a "Job", you and others can view more details about the job and find contact information if you're interested in doing it. ## How I built it We used Android Studio to create Yes, mom! It's our first time using and creating an android app. For DB, we needed a server, so we used parse.com since it was free and easier to use than AWS. We created a couple of extra JavaScript methods to use for parse and the app to communicate, too. ## Challenges I ran into The largest challenge we ran into with this project was properly setting up a server and linking it to our app. Additionally, because we all had little or no experience using Android Studio, the project required lots of documentation reading and bug fixing! ## Accomplishments that I'm proud of As a team, we're really proud of how we were all able to work together to create this app together and make something totally new. Each of us got to have experience with coding and/or languages that we've never had before! ## What I learned Lots of coding experiences in Android app development/JavaScript for server ## What's next for Yes, Mom! In the future, we could extend Yes, Mom! by building as iOS app as well. Additionally, we could create "user accounts" so people are able to view all of their current job postings in one place.
## Inspiration With an increasing number of homeless people on the street, there are more and more people who need food, water, and shelter. Our hope was to create something that helps organizations and charities find and help homeless people more efficiently. ## What it does When the user spots a homeless person, they simply press the button and enter a brief description of the person. The app then places a mark at that location, which an organization can use to locate and help that person more efficiently. ## How I built it We used Java in Android Studio, utilizing the Google Maps API and Firebase. ## Challenges I ran into We had never used the Google Maps API or Firebase before, so we had some problems configuring them, and it took a while to learn how to read data from Firebase. ## Accomplishments that I'm proud of Having never used either Firebase or Google Maps, I'm proud of our ability to learn, debug, and solve problems as they arose. ## What I learned How to integrate Google Maps and Firebase in Android Studio. ## What's next for Homely * Have separate general-user and organization apps for better use * Alert the user if the person they marked was helped * Integrate Google Directions so that you can select a marker and be led there
## Inspiration Only a small percentage of Americans use ASL as their main form of daily communication. Hence, no one notices when ASL-first speakers are left out of using FaceTime, Zoom, or even iMessage voice memos. This is a terrible inconvenience for ASL-first speakers attempting to communicate with their loved ones, colleagues, and friends. There is a clear barrier to communication between those who are deaf or hard of hearing and those who are fully abled. We created Hello as a solution to this problem for those experiencing similar situations and to lay the groundwork for future seamless communication. On a personal level, Brandon's grandma is hard of hearing, which makes it very difficult to communicate. In the future this tool may be their only chance at clear communication. ## What it does As expected, there are two sides to the video call: a fully abled person and a deaf or hard-of-hearing person. For the fully abled person: * Their speech gets automatically transcribed in real-time and displayed to the end user * Their facial expressions and speech get analyzed for sentiment detection For the deaf/hard of hearing person: * Their hand signs are detected and translated into English in real-time * The translations are then cleaned up by an LLM and displayed to the end user in text and audio * Their facial expressions are analyzed for emotion detection ## How we built it Our frontend is a simple React and Vite project. On the backend, websockets are used for real-time inferencing. For the fully abled person, their speech is first transcribed via Deepgram, then their emotion is detected using HumeAI. For the deaf/hard of hearing person, their hand signs are first translated using a custom ML model powered via Hyperbolic, then these translations are cleaned using both Google Gemini and Hyperbolic. Hume AI is used similarly on this end as well. Additionally, the translations are communicated back via text-to-speech using Cartesia/Deepgram. ## Challenges we ran into * Custom ML models are very hard to deploy (Credits to <https://github.com/hoyso48/Google---American-Sign-Language-Fingerspelling-Recognition-2nd-place-solution>) * Websockets are easier said than done * Spotty wifi ## Accomplishments that we're proud of * Learned websockets from scratch * Implemented custom ML model inferencing and workflows * More experience in systems design ## What's next for Hello Faster, more accurate ASL model. More scalability and maintainability for the codebase.
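A minimal sketch of the real-time websocket relay described above, with the ASL model, LLM cleanup, and speech services stubbed out (the message schema, port, and function names are assumptions, not the team's code):

```python
# Minimal websocket relay: receive a frame, run (stubbed) inference, send the
# translation back to the browser. Uses the `websockets` library.
import asyncio
import json
import websockets

def run_asl_model(frame_bytes):
    # Placeholder for the custom fingerspelling model + LLM cleanup step.
    return "HELLO"

async def handler(websocket, path=None):
    async for message in websocket:
        event = json.loads(message)
        if event.get("type") == "asl_frame":
            text = run_asl_model(event["data"].encode())
            await websocket.send(json.dumps({"type": "translation", "text": text}))

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```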
losing
## Inspiration Traffic lights are one of the most dangerous parts of driving. According to the AAA, in 2017, 939 people were killed in red-light-running crashes. Red light cameras don’t solve the problem because they are reactive and not proactive. Many red light runners don’t even realize they’ve run a red light, and once a collision happens, it doesn’t matter if there was a camera or not. So, we thought, if Google Maps can give data on bad traffic so well, there’s no reason we can’t use similar data to prevent accidents. Taking inspiration from the Traffic Collision Avoidance System (TCAS) on airplanes, we wanted to use the same concept of predicting the course of the two vehicles and giving instructions to each if they’re on a collision course. ## What it does Our idea uses the live location, bearing, and speed, as well as the change of such data over time, for the current vehicle and all nearby vehicles, to predict if a car will run a red light and if that car has a chance of colliding with you. If this occurs, both drivers will be given an alert to stop, such that at least one of them is able to, avoiding the collision. ## How we built it We used HyperTrack’s API to get the user's GeoJSON location, which includes all the information we need. We also used Flutter and Dart to build the mobile user interface. ## Challenges we ran into Testing out our app was challenging since the live location data within the building was not accurate. There is also a slight delay for the GPS coordinates to be updated and for the warning to be alerted. Working with cross-platform mobile development was also challenging. ## Accomplishments that we're proud of Successful implementation of the API, including tracking multiple devices’ live locations simultaneously, and the working Flutter application. ## What we learned How to set up a Flutter application and how to make a mobile app. ## What's next for Collision Detector Make CollisionSpect hands-free for it to be used with voice commands.
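The app itself is Flutter/Dart, but the core prediction idea (dead-reckoning both vehicles forward from location, bearing, and speed, and checking whether they get dangerously close) can be sketched in Python; the horizon, threshold, and coordinates below are made-up illustrative values:

```python
# Toy dead-reckoning collision check: project both vehicles forward a few seconds
# and flag a warning if their predicted positions come within a threshold.
import math

EARTH_R = 6_371_000  # meters

def project(lat, lon, bearing_deg, speed_mps, t):
    """Approximate position after t seconds on a constant heading (flat-earth approx)."""
    d = speed_mps * t
    brg = math.radians(bearing_deg)
    dlat = (d * math.cos(brg)) / EARTH_R
    dlon = (d * math.sin(brg)) / (EARTH_R * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

def meters_apart(p, q):
    # Equirectangular approximation, fine over short distances.
    x = math.radians(q[1] - p[1]) * math.cos(math.radians((p[0] + q[0]) / 2))
    y = math.radians(q[0] - p[0])
    return math.hypot(x, y) * EARTH_R

def on_collision_course(a, b, horizon=8, threshold=10.0):
    """a, b are (lat, lon, bearing_deg, speed_mps); True means a warning is needed."""
    return any(
        meters_apart(project(*a, t), project(*b, t)) < threshold
        for t in range(1, horizon + 1)
    )

# Example: two cars that reach the same intersection at roughly the same time.
car_a = (39.9526, -75.1652, 90, 15)   # heading east
car_b = (39.9517, -75.1640, 0, 15)    # heading north
print(on_collision_course(car_a, car_b))
```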
## Inspiration *In 2018, about 35.7 percent of deaths due to road accidents happened on highways* Imagine your loved ones facing an accident at midnight. What if there is no one to help them? What if they are in pain and want someone to rescue them? ### Our Aim We aim to create a safety device that allows people who have been in accidents to get immediate help and support. We do this by using **IoT, Machine Learning, and App development with cloud storage** facilities. #### How are we unique? There are a lot of **SOS apps** that exist, but most of them require some form of human intervention. This is not possible when a vehicle has undergone an accident, as the person will not be in any position to draft an SOS message. To overcome this problem we came up with a solution to completely **automate** this process and contact the **nearby hospitals and also alert close relatives and friends**. ## How we built it #### Tech stack used: * **ESP8266** wifi module, Accelerometer * **Deep learning** model, TensorFlow, Keras, OpenCV * **Flutter** app development * **Figma** app design * **Firebase** real-time cloud storage #### App explained: * We have used **Flutter for the frontend and Firebase** as our backend, with real-time cloud storage * In our app, the user can log in using the conventional mail method, or by using Google * Our app has various features, namely: **Location tracking, weather report, temperature monitor, decentralized locking** * In our app, we have added the facility for the user to provide a **contact** of their *friends* and a recommended *hospital* for SOS * When our *hardware device* identifies an accident, a trigger is caused in the *cloud Firebase*. This will trigger the app and send an SOS message, along with sensor data and location #### Hardware/ Internet of Things: * We use the **ESP8266 wifi module** to connect the sensors to **Cloud Firebase** by means of the internet * When the **MPU6050** identifies a jerk due to an accident, the Firebase cloud storage gets updated in real-time. This is later used by the mobile app, which acts as an endpoint. #### Machine learning: * We used Python as our programming language, with Google's **TensorFlow framework**, to prepare a **machine learning** model that can **detect blood** on the driver's face and determine the severity of the accident * We have deployed our model using the *Flask framework* as a **web API** and have hosted it using **Heroku** and tested the **API with POSTMAN** * We send a *JSON POST* request to the API with an image URL and get back a JSON response for the same * Post request: { "URL": "Input the image URL here" } * API returns: { "Blood detected": 0 if no blood is detected, 1 if blood is detected } * The neural network first identifies the face and then tries to identify blood on the face ## What it does Once the accident is confirmed with the Blood Detection API, a Firebase Cloud Function is triggered which creates a payload of messages along with the coordinates of the accident and sends it to the list of hospitals and relatives that the user has added to the Flutter application. Since not all the close relatives will have the Flutter application installed, when the accident is detected they will also be sent an SMS with the help of an SMS API developed using Node.js with the Twilio SMS library.
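A minimal sketch of a Flask endpoint matching the request/response shape described above, with the actual Keras face/blood model stubbed out (the route name and preprocessing are assumptions):

```python
# Flask wrapper around the blood-detection model: accepts {"URL": ...} and
# returns {"Blood detected": 0 or 1}, matching the shape described above.
import io
import requests
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

def detect_blood(image):
    # Placeholder: load the Keras model and run face detection + blood classification here.
    return 0

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    url = payload.get("URL")
    if not url:
        return jsonify({"error": "URL is required"}), 400
    resp = requests.get(url, timeout=10)
    image = Image.open(io.BytesIO(resp.content)).convert("RGB")
    return jsonify({"Blood detected": detect_blood(image)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```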
## Challenges we ran into * Integrating the wifi module with Firebase was problematic because of an update in the Firebase cloud service rules and plugin updates * Implementing the design in our application ## Accomplishments that we're proud of * Integrating IoT, machine learning, and app development together in one project is something that we are very proud of ## What we learned * App designing * Flutter app integration with hardware ## What's next for DriveGuard mobile application * We would like to make it completely hardware-oriented without the involvement of the application
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application that is aimed towards connecting members of our society together in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help people receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information is extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
losing
## Inspiration Documenting and analyzing a crime scene is a very tedious and difficult task. There are many things that hinder a crime scene investigator from properly doing their job. First off, photographs are a common method to document pieces of evidence in a crime. However, oftentimes, disjointed pieces of imagery do not give the investigators the full picture. There is a possibility that only a few photos were taken at the crime scene, and investigators later require close-ups of pieces of evidence that are no longer available to them. ## What it does Detecto Mode is a mobile Augmented Reality (AR) crime scene annotation tool that allows investigators to spatially map out crime scenes and document pieces of evidence. This tool allows a crime scene investigator to: * Spatially map the environment in real time, using AR * Collaborate with other crime scene investigators to place notes and highlight important pieces of evidence in AR. * Send collected data points from notes and spatial mapping to the cloud, to be processed at the police station. ## How we built it * ARCore * Google Cloud API * C# * Unity Engine ## Challenges we ran into During this hackathon, we were using a lot of technology such as spatial mapping (photogrammetry) and networking. There were a lot of problems when it came to setting up these technologies to work in Unity. ## Accomplishments that we're proud of We were able to successfully combine two technologies that we as a team were completely unfamiliar with. In addition, we also made a polished user interface for the final product. ## What we learned An important lesson we took away from this hackathon is to spend time understanding and quantifying a problem. Doing proper research will help inform design decisions. In addition, we also learned that time management is a key part of being able to complete a project on time. As a team, we tried to track progress and set milestones during development of the software we were making. ## What's next for Detecto Mode We would explore the possibility of using technology such as Computer Vision/Machine Learning to have the software auto-tag points of evidence. In addition, we would want to create a backend system that would parse the data collected by the crime scene investigators and create useful graphs and visualizations.
## Inspiration Crime rates are on the rise across America, and many people, especially women, fear walking alone at night, even in their own neighborhoods. When we first came to MIT, we all experienced some level of harassment on the streets. Current navigation apps do not include the necessary safety precautions that pedestrians need to identify and avoid dimly-lit, high-crime areas. ## What it does Using a combination of police crime reports and the Mapbox API, VIA offers users multiple secure paths to their destination and a user-friendly display of crime reports within the past few months. Ultimately, VIA is an app that provides up-to-date data about the safety of pedestrian routes. ## How we built it We built our interactive map with the Mapbox API, programming functions with HTML and JavaScript which overlay Boston Police Department crime data on the map and generate multiple routes given start and end destinations. ## Challenges we ran into We had some difficulty with instruction banners at the end of the hackathon that we will definitely work on in the future. ## Accomplishments that we're proud of None of us had much experience with frontend programming or working with APIs, and a lot of the process was trial and error. Creating the visuals for the maps in such a short period of time pushed us to step out of our comfort zones. We'd been ideating this project for quite some time, so actually creating an MVP is something we are very proud of. ## What we learned This project was the first time that any of us actually built tangible applications outside of school, so coding this in 24 hours was a great learning experience. We learned about working with APIs and how to read the documentation involved in using them, as well as breaking down data files into workable data structures. With all of us having busy schedules this weekend, it was also important to communicate properly so that we each knew what our tasks were for the day, as we weren't all together for a majority of the hackathon. However, we were all able to collaborate well, and we learned how to communicate effectively and work together to overcome our project challenges. ## What's next for VIA We plan on working outside of school on this project to hone some of the designs and extend the navigation features to data available beyond Boston. There are many areas where we can improve the design, such as making the application a mobile app instead of a web app, which we will consider working on in the future.
## Inspiration We wanted to take advantage of AR and object-detection technologies to give people safer walking experiences and to communicate distance information that helps people with vision loss navigate. ## What it does It augments the world with beeping sounds that change depending on your proximity to obstacles, identifies surrounding objects, and converts them to speech to alert the user. ## How we built it ARKit; RealityKit uses the LiDAR sensor to detect distance; AVFoundation for text-to-speech; Core ML with a YOLOv3 real-time object detection model; SwiftUI ## Challenges we ran into Computational efficiency. Going through all pixels from the LiDAR sensor in real time wasn’t feasible. We had to optimize by cropping sensor data to the center of the screen ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
losing
## Inspiration On our way to PennApps, our team was hungrily waiting in line at Shake Shack while trying to think of the best hack idea to bring. Unfortunately, rather than being able to sit comfortably and pull out our laptops to research, we were forced to stand in a long line to reach the cashier, only to be handed a clunky buzzer that countless other greasy-fingered customers had laid their hands on. We decided that there has to be a better way: a way to simply walk into a restaurant, be able to spend more time with friends, and stand in line as little as possible. So we made it. ## What it does Q'd (pronounced queued) digitizes the process of waiting in line by allowing restaurants and events to host a line through the mobile app and for users to line up digitally through their phones as well. Also, it gives users a sense of the different opportunities around them by searching for nearby queues. Once in a queue, the user "takes a ticket" which counts down until they are the first person in line. In the meantime, they are free to do whatever they want and not be limited to the 2-D pathway of a line for the coming minutes (or even hours). When the user is soon to be the first in line, they are sent a push notification and requested to appear at the entrance, where the host of the queue can check them off, let them in, and remove them from the queue. In addition to removing the hassle of line waiting, hosts of queues can access their Q'd Analytics to learn how many people were in their queues at what times and learn key metrics about the visitors to their queues. ## How we built it Q'd comes in three parts: the native mobile app, the web app client, and the Hasura server. 1. The mobile iOS application is built with Apache Cordova, which allows the native iOS app to be written in pure HTML and JavaScript. This framework lets the application run on Android, iOS, and the web, as well as be incredibly responsive. 2. The web application is built with good ol' HTML, CSS, and JavaScript. Using the Materialize CSS framework gives the application a professional feel, and resources such as AmChart give the user a clear understanding of their queue metrics. 3. Our beast of a server was constructed with Hasura, which allowed us to build our own data structure as well as to use its API calls for the data across all of our platforms. Therefore, every method dealing with queues or queue analytics deals with our Hasura server through API calls and database use. ## Challenges we ran into A key challenge we discovered was the implementation of Cordova and its associated plugins. Having been primarily Android developers, the native environment of the iOS application challenged our skills and gave us a lot to learn before we were ready to properly implement it. Next, although less of a challenge, Hasura had a learning curve before we were able to really use it successfully. Particularly, we had issues with relationships between different objects within the database. Nevertheless, we persevered and were able to get it working really well, which allowed for an easier time building the front end. ## Accomplishments that we're proud of Overall, we're extremely proud of coming in with little knowledge about Cordova and iOS development, only learning about Hasura at the hackathon, and then being able to develop a fully responsive app using all of these technologies relatively well.
While we considered making what we are comfortable with (particularly web apps), we wanted to push our limits and take on the challenge of learning about mobile development and cloud databases. Another accomplishment we're proud of is making it through our first hackathon longer than 24 hours :) ## What we learned During our time developing Q'd, we were exposed to and became proficient in various technologies ranging from Cordova to Hasura. However, besides technology, we learned important lessons about taking the time to properly flesh out our ideas before jumping in headfirst. We devoted the first two hours of the hackathon to really understanding what we wanted to accomplish with Q'd, so in the end, we can be truly satisfied with what we have been able to create. ## What's next for Q'd In the future, we're looking towards enabling hosts of queues to include premium options that users can take advantage of to skip lines or be part of more exclusive lines. Furthermore, we want to expand the data analytics that the hosts can take advantage of in order to improve their own revenue and to make a better experience for their visitors and customers.
## Inspiration Ricky and I are big fans of the software culture. It's very open and free, much like the ideals of our great nation. As U.S. military veterans, we are drawn to software that liberates the oppressed and gives a voice to those unheard. **Senate Joint Resolution 34** is awaiting ratification from the President, and if this happens, internet traffic will become a commodity. This means that Internet Service Providers (ISPs) will have the capability of using their users' browsing data for financial gain. This is a clear infringement on user privacy and is diametrically opposed to the idea of an open-internet. As such, we decided to build **chaos**, which gives a voice... many voices to the user. We feel that it's hard to listen in on a conversation in a noisy room. ## What it does Chaos hides browsing patterns. Chaos leverages **chaos.js**, a custom headless browser we built on top of PhantomJS and QT, to scramble incoming/outgoing requests that distorts browsing data beyond use. Further, Chaos leverages its proxy network to supply users with highly-reliable and secure HTTPS proxies on their system. By using our own custom browser, we are able to dispatch a lightweight headless browser that mimics human-computer interaction, making its behavior indistinguishable from our user's behavior. There are two modes: **chaos** and **frenzy**. The first mode scrambles requests at an average of 50 sites per minute. The second mode scrambles requests at an average of 300 sites per minute, and stops at 9000 sites. We use a dynamically-updating list of over **26,000** approved sites in order to ensure diverse and organic browsing patterns. ## How we built it ### Development of the chaos is broken down into **3** layers we had to build * OS X Client * Headless browser engine (chaos.js) * Chaos VPN/Proxy Layer ### Layer 1: OS X Client --- ![](https://res.cloudinary.com/devpost/image/fetch/s--surFkHR6--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://www.burnamtech.com/chaosViews.png) The Chaos OS X Client scrambles outgoing internet traffic. This crowds IP data collection and hides browsing habits beneath layers of organic, randomized traffic. ###### OS X Client implementation * Chaos OS X is a light-weight Swift menubar application * Chaos OS X is built on top of **chaos.js**, a custom WebKit-driven headless-browser that revolutionizes the way that code interacts with the internet. chaos.js allows for outgoing traffic to appear **completely organic** to any external observer. * Chaos OS X scrambles traffic and provides high-quality proxies. This is a result of our development of **chaos.js** headless browser and the **Chaos VPN/Proxy layer**. * Chaos OS X has two primary modes: + **chaos**: Scrambles traffic on average of 50 sites per minute. + **frenzy**: Scrambles traffic on average of 500 sites per minute, stops at 9000 sites. ### Layer 2: Headless browser engine (chaos.js) --- Chaos is built on top of the chaos.js engine that we've built, a new approach to WebKit-driven headless browsing. Chaos is **completely** indiscernible from a human user. All traffic coming from Chaos will appear as if it is actually coming from a human-user. This was, by far, the most technically challenging aspect of this hack. 
Here are a few of the changes we made: ##### Step 1: Modify header ordering in the QTNetwork layer ##### Chrome headers ![](https://res.cloudinary.com/devpost/image/fetch/s--c5WyccU---/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://www.burnamtech.com/chromeHeaders.png) ##### PhantomJS headers ![](https://res.cloudinary.com/devpost/image/fetch/s--tSLNCBdo--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://www.burnamtech.com/phantomHeaders.png) Header ordering in other **WebKit** browsers is static. PhantomJS accesses **WebKit** through the **Qt networking layer**. ``` Modified: qhttpnetworkrequest.cpp ``` --- ###### Step 2: Hide exposed footprints ``` Modified: examples/pagecallback.js src/ghostdriver/request_handlers/session_request_handler.js src/webpage.cpp test/lib/www/* ``` --- ###### Step 3: Client API implementation * User agent randomization * Pseudo-random bezier mouse path generation * Speed trap reactive DOM interactions * Dynamic view-port * Other changes... ### Layer 3: Chaos VPN/Proxy Layer --- The Chaos VPN back-end is made up of **two cloud systems** hosted on Linode: an OpenVPN server and a proxy-testing server. The proxy-testing server runs an Ubuntu 16.10 distro and functions as a dynamic proxy-tester that continuously checks the Chaos Proxies against performance and security standards. It then automatically removes inadequate proxies and replaces them with new ones, as well as maintaining the minimum number of proxies necessary. This ensures the Chaos Proxy database is only populated with efficient nodes. The purpose of the OpenVPN layer is to route HTTPS traffic from the host through our VPN encryption layer, then through one of the proxies mentioned above, and finally to the destination. The VPN serves as a very safe and ethical layer that adds extra privacy for HTTPS traffic. This way, the ISP only sees traffic from the host to the VPN, not from the VPN to the proxy, from the proxy to the destination, or all the way back. There is no connection between host and destination. Moving forward we will implement further ways of checking and gathering safe proxies. Moreover, we've begun development on a machine learning layer which will run on the server. This will help determine which sites to scramble internet history with based on general site sentiment. This will be accomplished by running natural-language processing, sentiment analysis, and entity analytics on the sites. ## Challenges we ran into This project was **huge**. As we peeled back layer after layer, we realized that the tools we needed simply didn't exist or weren't adequate. This required us to spend a lot of time in several different programming languages/environments in order to build the diverse elements of the platform. We also had a few blocks in terms of architecture cohesion. We wrote the platform in 6 different languages in 5 different environments, and all of the pieces had to work together *exceedingly well*. We spent a lot of time at the data layer of the respective modules, and it slowed us down considerably at times. ![](https://res.cloudinary.com/devpost/image/fetch/s--C6b56a0j--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://www.burnamtech.com/boards.png) ## Accomplishments that we're proud of * We began by contributing to the open-source project **pak**, which allowed us to build complex build-scripts with ease. This was an early decision that helped us tremendously when dealing with `netstat`, network diagnostics, and complex Python/Node scrape scripts.
* We're most proud of the work we did with **chaos.js**. We found that **every** headless browser that is publicly available is easily detectable. We tried PhantomJS, Selenium, Nightmare, and Casper (just to name a few), and we could expose many of them in a matter of minutes. As such, we set out to build our own layer on top of PhantomJS in order to create the first, truly undetectable headless browser. * This was massively complex, with programming done in C++ and Javascript and nested Makefile dependencies, we found ourselves facing a giant. However, we could not afford for ISPs to be able to distinguish a pattern in the browsing data, so this technology really sits at the core of our system, alongside some other cool elements. ## What we learned In terms of code, we learned a ton about HTTP/HTTPS and the TCP/IP protocols. We also learned first how to detect "bot" traffic on a webpage and then how to manipulate WebKit behavior to expose key behaviors that mask the code behind the IP. Neither of us had ever used Linode, and standing up two instances (a proper server and a VPN server) was an interesting experience. Fitting all of the parts together was really cool and exposed us to technology stacks on the front-end, back-end, and system level. ## What's next for chaos More code! We're planning on deploying this as an open-source solution, which most immediately requires a build script to handle the many disparate elements of the system. Further, we plan on continued research into the deep layers of web interaction in order to find other ways of preserving anonymity and the essence of the internet for all users!
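The Chaos proxy layer described above continuously tests and prunes its proxy pool; a rough Python sketch of that kind of health check is below. The probe URL, latency cutoff, and placeholder proxy addresses are assumptions, not the actual Linode implementation:

```python
# Periodically test each proxy and keep only the ones that respond quickly.
import time
import requests

TEST_URL = "https://example.com"   # assumed probe target
MAX_LATENCY = 3.0                  # seconds; assumed performance threshold

def proxy_is_healthy(proxy):
    try:
        start = time.time()
        r = requests.get(TEST_URL, proxies={"https": proxy}, timeout=MAX_LATENCY)
        return r.status_code == 200 and (time.time() - start) <= MAX_LATENCY
    except requests.RequestException:
        return False

def prune(proxies):
    """Return only the proxies that pass the health check."""
    return [p for p in proxies if proxy_is_healthy(p)]

if __name__ == "__main__":
    candidates = ["http://203.0.113.10:3128", "http://203.0.113.11:3128"]  # placeholders
    print(prune(candidates))
```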
## Inspiration Every project aims to solve a problem and address people's concerns. When we walk into a restaurant, we are often disappointed that not many photos are printed on the menu, yet we are always eager to find out what a dish looks like. Surprisingly, including a nice-looking picture alongside a food item increases sales by 30%, according to Rapp. So it's a big inconvenience for customers if they don't recognize the name of a dish. This is the problem we are aiming to solve! We want to create a better impression on every customer and create a more customer-friendly restaurant experience. We want every person to immediately know what they would like to eat and to get a real first impression of a specific dish in a restaurant. ## How we built it We mainly used ARKit, MVC, and various APIs to build this iOS app. We first start by entering an AR session, and then we crop the image programmatically to feed it to OCR from Microsoft Azure Cognitive Services. It recognizes the text from the image, though not perfectly. We then feed the recognized text to Azure's Spell Check to further improve the quality of the text. Next, we used the Azure Image Search service to look up the dish image from Bing, and then we used Alamofire and SwiftyJSON for getting the image. We created a virtual card using SceneKit and placed it above the menu in the AR view. We used Firebase as the backend database and for authentication. We built some interactions between the virtual card and users so that users could see more information about the ordered dishes. ## Challenges we ran into We ran into various unexpected challenges when developing Augmented Reality and using APIs. First, there is very little documentation about how to use Microsoft APIs in iOS apps. We learned how to use third-party libraries for building HTTP requests and parsing JSON files. Second, we had a really hard time understanding how Augmented Reality works in general, and how to place a virtual card within SceneKit. Last, we were challenged to develop the same project as a team! It was the first time each of us was pushed to use Git and GitHub, and we learned so much about branches and version control. ## Accomplishments that we're proud of Having only learned Swift and iOS development for one month, we created our very first AR app. This was a big challenge for us, and we still chose a difficult, high-tech field, which is what we are most proud of. In addition, we implemented lots of APIs and created a lot of "objects" in AR, and they work well together. We also encountered a few bugs during development, but we worked to fix them all. We're proud of combining some of the most advanced technologies in software, such as AR, cognitive services, and computer vision. ## What we learned During development, we learned how to create our own AR model, what the structure of an ARScene is, and how to combine different APIs to achieve our main goal. First of all, we improved our ability to code in Swift, especially for AR. Creating objects in the AR world taught us the tree structure in AR and the relationships between parent nodes and their child nodes. What's more, we got to learn Swift more deeply, specifically its MVC model. Last but not least, the bugs taught us how to solve problems as a team and how to minimize the probability of buggy code next time. Most importantly, this hackathon showed us the strength of teamwork.
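The app itself calls these services from Swift with Alamofire, but the request flow (crop, then OCR, then spell check and image search) can be illustrated with a small Python sketch of the Azure Computer Vision OCR call; the region, API version, key, and file name are placeholders:

```python
# Illustrative Azure OCR request: send a cropped menu image, collect the recognized words.
import requests

OCR_URL = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"  # placeholder region/version
KEY = "YOUR_AZURE_KEY"

def read_menu_text(image_bytes):
    resp = requests.post(
        OCR_URL,
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        params={"language": "en", "detectOrientation": "true"},
        data=image_bytes,
        timeout=10,
    )
    resp.raise_for_status()
    words = []
    for region in resp.json().get("regions", []):
        for line in region.get("lines", []):
            words.extend(w["text"] for w in line.get("words", []))
    return " ".join(words)

with open("menu_crop.jpg", "rb") as f:
    # The recognized dish name would then go to spell check and Bing image search.
    print(read_menu_text(f.read()))
```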
## What's next for DishPlay We want to build more interactions with ARKit, including displaying a collection of dishes on a 3D shelf, or cool animations so people can see how their favorite dishes were made. We also want to build a large-scale database for comments, ratings, and any other related information about dishes! We are happy that Yelp and OpenTable bring us closer to the restaurants. We are excited about our project because it will bring us closer to our favorite food!
winning
## Inspiration What happens if a voice recognition engine is integrated into a website? ## What it does Talk with it. It will give you back intelligent results. ## How I built it Knockout.js as the front-end JavaScript view library, with the Houndify voice API integrated. Express.js on Node.js as the web server. Bootstrap for styling. Deployed on Heroku. ## Challenges I ran into Pay attention to closing your HTML tags! ## Accomplishments that I'm proud of It works well! ## What I learned Walked through the entire process of full-stack web development. ## What's next for OverVoice Do more, go higher.
## Inspiration Our inspiration was Saqib Shaikh, a software engineer at Microsoft who also happens to be visually impaired. In order to write code, he used a text-to-speech engine that would read back what he wrote. That got us thinking: how do other visually impaired individuals learn how to code and develop software? What if there were not only text-to-speech plugins, but also speech-to-text plugins? That became the basis of our project, Shaikh. ## What it does Shaikh takes in a speech input, either through your device's built-in microphone or an Amazon Echo. Sample inputs include: "declare a variable named x and initialize it to 0," or "declare a for loop that iterates from 1 to 50." Alexa will then translate these requests into the appropriate syntax (in this case, Python) and send it to the server. From the server, we make a GET request from our extension in VS Code and insert the code snippet into the text editor. ## How I built it We used the Amazon Echo to take in speech input. Additionally, we used Google Cloud services to host our server and StdLib to connect the endpoints of our HTTP requests. Our backend was mostly done in JavaScript (Node.js, Typed.js, AJAX, jQuery, as well as TypeScript). We used HTML/CSS/JavaScript for our website demo. ## Challenges I ran into Since there were so many components to our project, we had issues integrating the backend portion together cohesively. In addition, we had some difficulties with the Amazon Echo, since it was our first time using it. ## Accomplishments that I'm proud of We built something super cool! This was our first time working with Alexa and we're excited to integrate her technology into future hacks. ## What I learned How to use Alexa, the difference between a JAR and a runnable JAR file, language models, integrating all the backend together, Kern's juice, and how long we could stay awake without sleeping. ## What's next for Shaikh Using Stack Overflow's API to catch errors and suggest/implement changes, and expanding to other code editors.
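A toy sketch of the phrase-to-syntax translation the skill performs for the two sample inputs above, done here with plain regular expressions in Python (the real project uses Alexa's language model; these patterns are illustrative only):

```python
# Map a couple of spoken phrases to Python snippets, e.g.
#   "declare a variable named x and initialize it to 0"   -> "x = 0"
#   "declare a for loop that iterates from 1 to 50"       -> "for i in range(1, 51):"
import re

RULES = [
    (re.compile(r"declare a variable named (\w+) and initialize it to (\w+)"),
     lambda m: f"{m.group(1)} = {m.group(2)}"),
    (re.compile(r"declare a for loop that iterates from (\d+) to (\d+)"),
     lambda m: f"for i in range({m.group(1)}, {int(m.group(2)) + 1}):"),
]

def to_code(utterance):
    text = utterance.lower().strip()
    for pattern, build in RULES:
        match = pattern.search(text)
        if match:
            return build(match)
    return f"# could not translate: {utterance}"

print(to_code("declare a variable named x and initialize it to 0"))
print(to_code("declare a for loop that iterates from 1 to 50"))
```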
## Inspiration We were inspired by hard-working teachers and students. Although everyone was working hard, there was still a disconnect, with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem. ## What it does The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality. The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material. ## How we built it We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student. ## Challenges we ran into One major challenge we ran into was capturing and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a Python script that bridges the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site. ## Accomplishments that we’re proud of Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the ## What we learned We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone will be on the same page about what is going on, and all that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API. We were able to work around the constraints of the APIs to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python. ## What's next for Gradian The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform, so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well, so that people will never miss a beat thanks to the live transcription.
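A rough sketch of the bridge described above: one background thread consumes the audio stream and appends transcript lines, while a small Flask endpoint lets the student page poll for new text. The audio capture and speech-to-text calls are stubbed, since the team's exact module isn't shown, and the route and port are assumptions:

```python
# Background transcription worker + polling endpoint, so the site stays responsive
# while audio is being processed.
import threading
import time
from flask import Flask, jsonify

app = Flask(__name__)
transcript = []          # lines of recognized text, in order
lock = threading.Lock()

def transcribe_next_chunk():
    # Placeholder for reading the mic stream and calling Google Speech-to-Text.
    time.sleep(2)
    return "next few seconds of the lecture"

def worker():
    while True:
        line = transcribe_next_chunk()
        with lock:
            transcript.append(line)

@app.route("/transcript/<int:since>")
def get_transcript(since):
    # The page polls with the index it has already seen and receives only new lines.
    with lock:
        return jsonify({"lines": transcript[since:], "next": len(transcript)})

if __name__ == "__main__":
    threading.Thread(target=worker, daemon=True).start()
    app.run(port=5000)
```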
losing
## Inspiration Our project was inspired by geocaching. Geocaching is the idea of crowdsourcing scavenger hunts, where users can hide caches throughout a city and other users can go on adventures to discover them. Team: Dee Lucic, Matt Mitchell, Keshav Chawla, Alexei dela Pena For this project we wanted to use the following concepts: 1. The idea of scavenger hunts / finding things in the environment 2. Staying active and "taking care of yourself" to support Wealthsimple's motto 3. To go on an adventure, and to "push your limits" with Mountain Dew's motto 4. To use the Indico image recognition API to recognize these scavenger hunt objects 5. Use Twitter to share photos of scavenger hunt objects ## What it does We have developed a marketing tool: a Pepsi/Mountain Dew app that gives users a list of tasks (scavenger hunt items) to find in the environment. The tasks suit an active lifestyle, like taking a picture of a ping pong ball. When the user finds the item, they take a picture of it and the app verifies whether the item is correct through image recognition. If enough items are found, the app unlocks a DewTheDew fridge, which vends free Mountain Dew to the lucky winner. All photos are posted on Twitter under the hashtag #DoForDew ## How I built it Our app is an iOS app built in Swift. It communicates with our local PHP webservice hosted on a laptop. The iOS app also tweets our photos. The PHP webservice calls Python scripts that call the Indico image recognition API. The webservice also sends serial commands over the USB port to the fridge, which contains an Arduino. The Arduino is programmed to process serial commands and control an analog servo motor that locks and unlocks the door of the fridge. ## Challenges I ran into We initially tried to communicate directly with the Arduino from the iOS app using Bluetooth, but the Arduino and iOS Bluetooth libraries are complicated, and we would not have had enough time to implement the code. ## Accomplishments that I'm proud of The app developer is new to iOS and Swift and we are happy with what we have created so far. The webservice is very powerful and it can handle different expansions if required. To make the fridge we had to go to a local hardware store and find the parts to create a working model in time. The challenge was difficult but we are happy with the progress so far. ## What I learned All the developers on our team are inexperienced and we are happy that we were able to integrate all of this in just a day's worth of development. We learned how to make UIs in iOS, how to pass information over PHP, serial communications, and REST APIs. ## What's next for DoTheDew HackTheFridge The next thing to do is to integrate Bluetooth into the iOS app and set up a Bluetooth shield on the Arduino. Also, the Indico image recognition APIs can be called from the iOS app too. Once these two changes are made, we will no longer require the webservice server and we can get rid of the laptop. That's the first step to creating a standalone product. After that we'll be able to create a real fridge.
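A simplified Python sketch of the script the PHP webservice shells out to: check whether the submitted photo matches the current scavenger-hunt item (stubbed here, since the exact Indico call isn't shown) and, if enough items have been found, send the unlock command over the USB serial port. The serial port path, baud rate, command byte, and threshold are assumptions:

```python
# verify_and_unlock.py - called by the PHP webservice with the photo path,
# the task name, and how many items the user has already found.
import sys
import serial  # pyserial

def matches_task(photo_path, task):
    # Placeholder for the Indico image-recognition call that scores the photo
    # against the expected scavenger-hunt object (e.g. "ping pong ball").
    return True

def unlock_fridge(port="/dev/ttyUSB0", baud=9600):
    with serial.Serial(port, baud, timeout=2) as arduino:
        arduino.write(b"U")  # assumed unlock command understood by the Arduino sketch

if __name__ == "__main__":
    photo_path, task, found_so_far = sys.argv[1], sys.argv[2], int(sys.argv[3])
    if matches_task(photo_path, task) and found_so_far + 1 >= 3:  # assumed threshold
        unlock_fridge()
```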
## Inspiration Save the World is a mobile app meant to promote sustainable practices, one task at a time. ## What it does Users begin with a colorless Earth prominently displayed on their screens, along with a list of possible tasks. After completing a sustainable task, such as saying no to a straw at a restaurant, users obtain points towards their goal of saving this empty world. As points are earned and users level up, they receive lively stickers to add to their world. Suggestions for activities are given based on the time of day. They can also connect with their friends to compete for the best scores and sustainability. Both the fun stickers and friendly competition encourage heightened sustainability practices from all users! ## How I built it Our team created an iOS app with Swift. For the backend of tasks and users, we utilized a Firebase database. To connect these two, we utilized CocoaPods. ## Challenges I ran into Half of our team had not used iOS before this hackathon. We worked together to get past this learning curve and all contribute to the app. Additionally, we created a setup in Xcode for the wrong type of database at first. At that point, we made a decision to change the Xcode setup instead of creating a different database. Finally, we found that it is difficult to use CocoaPods in conjunction with GitHub, because every computer needs to do the pod init anyway. We carefully worked through this issue along with several other merge conflicts. ## Accomplishments that I'm proud of We are proud of our ability to work as a team even with the majority of our members having limited Xcode experience. We are also excited that we delivered a functional app with almost all of the features we had hoped to complete. We had some other project ideas at the beginning but decided they did not have a high enough challenge factor; the ambition worked out and we are excited about what we produced. ## What I learned We learned that it is important to triage which tasks should be attempted first. We attempted to prioritize the most important app functions and leave some of the fun features for the end. It was often tempting to try to work on exciting UI or other finishing touches, but having a strong project foundation was important. We also learned to continue to work hard even when the due date seemed far away. The first several hours were just as important as the final minutes of development. ## What's next for Save the World Save the World has some wonderful features that could be implemented after this hackathon. For instance, the social aspect could be extended to give users more points if they meet up to do a task together. There could also be forums for sustainability blog posts from users and chat areas. Additionally, the app could recommend personal tasks for users and start to “learn” their schedule and most-completed tasks.
## 💡 Inspiration 💯 Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that took all of the tedious decision-making out of your hands. Wouldn't it be nice to be able to mindlessly throw your trash in one place, and let AI handle the sorting for you? ## ♻️ What it does 🌱 IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website. ## 🛠️ How we built it 💬 Arduino/C++ Portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the servo library allows us to access functions that control the motor and turn on the LED colours. We also used the Serial library in Python to take input from the main program and send it to the Arduino. The Arduino then sent binary data to the servo motor, correctly categorizing garbage items. Website Portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords. Google Vision API: In tandem with computer vision, we were able to take the camera input and feed it through the Vision API to interpret what was in front of us. Using this output, we could tell the servo motor which direction to turn based on if it was recyclable or not, helping us sort which bin the object would be pushed into. 
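A condensed sketch of the main Python loop described above: grab a camera frame, ask the Google Cloud Vision API for labels, decide recyclable vs. not, and tell the Arduino which way to push. The label list, serial port, and command characters are assumptions, not the exact production values:

```python
# Classify the item in front of the camera and send 'R' (recycle) or 'T' (trash)
# to the Arduino, which turns the servo arm toward the matching bin.
import cv2
import serial  # pyserial
from google.cloud import vision

RECYCLABLE_HINTS = {"bottle", "tin can", "aluminum can", "paper", "cardboard"}  # assumed

client = vision.ImageAnnotatorClient()
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)
camera = cv2.VideoCapture(0)

def classify(frame):
    ok, encoded = cv2.imencode(".jpg", frame)
    response = client.label_detection(image=vision.Image(content=encoded.tobytes()))
    labels = {label.description.lower() for label in response.label_annotations}
    return "R" if labels & RECYCLABLE_HINTS else "T"

ok, frame = camera.read()
if ok:
    arduino.write(classify(frame).encode())
```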
## 🚧 Challenges we ran into ⛔ * Connecting the Arduino to the arms * Determining the optimal way to manipulate the servo arm, as it could not rotate 360 degrees * Using global variables on our website * Configuring MongoDB to store user data * Figuring out how and when to detect the type of trash on the screen ## 🎉 Accomplishments that we're proud of 🏆 In a short span of 24 hours, we are proud to: * Successfully engineer and program a servo arm to sort trash into two separate bins * Connect and program LED lights that change colors depending on recyclable or non-recyclable trash * Utilize the Google Cloud Vision API to identify and detect different types of trash and decide if it is recyclable or not * Develop an intuitive website with React.js that includes login, user profile, and informative capabilities * Drink a total of 9 cans of Monster combined (the cans were recycled) ## 🧠 What we learned 🤓 * How to program in C++ * How to control servo arms at certain degrees with an Arduino * How to parse and understand Google Cloud Vision API outputs * How to connect a MongoDB database to create user authentication * How to use global state variables in Node.js and React.js * What types of items are recyclable ## 🌳 Importance of Recycling 🍀 * Conserves natural resources by reusing materials * Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions * Reduces the amount of waste sent to landfills * Decreases disruption to ecosystems and habitats ## 👍How Intellibin helps 👌 **Efficient Sorting:** Intellibin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams. **Increased Recycling Rates:** With Intellibin making recycling more user-friendly and efficient, it has the potential to increase recycling rates. **User Convenience:** By automating the sorting process, Intellibin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts. **In summary:** Recycling is crucial for environmental sustainability, and Intellibin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology. ## 🔮 What's next for Intellibin⏭️ The next steps for Intellibin include refining the current functionalities of our hack, along with exploring new features. First, we wish to expand the trash detection database, improving capabilities to accurately identify various items being tossed out. Next, we want to add more features such as detecting and warning the user of "unrecyclable" objects. For instance, Intellibin could notice whether the cap is still on a recyclable bottle and remind the user to remove the cap. In addition, the sensors could notice when there is still liquid or food in a recyclable item, and send a warning. Lastly, we would like to deploy our website so more users can use Intellibin and track their recycling statistics!
partial
## Not All Backs are Packed: An Origin Story (Inspiration) A backpack is an extremely simple, and yet ubiquitous item. We want to take the backpack into the future without sacrificing the simplicity and functionality. ## The Got Your Back, Pack: **U N P A C K E D** (What's it made of) GPS location services, 9,000 mAh battery, solar charging, USB connectivity, keypad security lock, customizable RGB LED, Android/iOS application integration. ## From Backed Up to Back Pack (How we built it) ## The Empire Strikes **Back**(packs) (Challenges we ran into) We ran into challenges with getting wood to laser cut and bend properly. We found a unique pattern that allowed us to keep our 1/8" wood durable when needed and flexible when not. Also, connecting the hardware and the app through the API was tricky. ## Something to Write **Back** Home To (Accomplishments that we're proud of) ## Packing for Next Time (Lessons Learned) ## To **Pack**-finity, and Beyond! (What's next for "Got Your Back, Pack!") The next step would be revising the design to be more ergonomic for the user: the backpack is a very clunky, easy-to-make shape with few curves to hug the user when put on. This, along with streamlining the circuitry and code, would be something to consider.
## Inspiration The opioid crisis is a widespread danger, affecting millions of Americans every year. In 2016 alone, 2.1 million people had an opioid use disorder, resulting in over 40,000 deaths. After researching what had been done to tackle this problem, we came upon many pill dispensers currently on the market. However, we failed to see how they addressed the core of the problem - most were simply reminder systems with no way to regulate the quantity of medication being taken, making them ineffective at preventing drug overdose. As for the secure solutions, they cost somewhere between $200 and $600, well out of most people’s price ranges. Thus, we set out to prototype our own secure, simple, affordable, and end-to-end pipeline to address this problem, developing a robust medication reminder and dispensing system that not only makes it easy to follow the doctor’s orders, but also difficult to disobey. ## What it does This product has three components: the web app, the mobile app, and the physical device. The web end is built for doctors to register patients, easily schedule dates and timing for their medications, and specify the medication name and dosage. Any changes the doctor makes are automatically synced with the patient’s mobile app. Through the app, patients can view their prescriptions and contact their doctor with the touch of one button, and they are instantly notified when they are due for prescriptions. Once they click on an unlocked medication, the app communicates with LocPill to dispense the precise dosage. LocPill uses a system of gears and motors to do so, and it remains locked to prevent the patient from attempting to open the box to gain access to more medication than in the dosage; however, doctors and pharmacists will be able to open the box. ## How we built it The LocPill prototype was designed in Rhino and 3-D printed. Each of the gears in the system was laser cut, and the gears were connected to a servo that was controlled by an Adafruit Bluefruit BLE Arduino programmed in C. The web end was coded in HTML, CSS, JavaScript, and PHP. The iOS app was coded in Swift using Xcode, mainly with the UIKit framework and the help of the LBTA CocoaPod. Both front ends were supported by a Firebase backend database and email/password authentication. ## Challenges we ran into Nothing is gained without a challenge; many of the skills this project required were things we had little to no experience with. From the modeling in the RP lab to the back end communication between our website and app, everything was a new challenge with a lesson to be gained from it. During the final hours of the last day, while assembling our final product, we mistakenly positioned a gear in the incorrect area. Unfortunately, by the time we realized this, the super glue holding the gear in place had dried. Hence began our 4am trip to Fresh Grocer Sunday morning to acquire acetone, an active ingredient in nail polish remover. Although we returned drenched and shivering after running back in shorts and flip-flops during a storm, the satisfaction we felt upon seeing our final project correctly assembled was unmatched. ## Accomplishments that we're proud of Our team is most proud of successfully creating and prototyping an object with the potential for positive social impact. Within a very short time, we accomplished much of our ambitious goal: to build a project that spanned 4 platforms over the course of two days: two front ends (mobile and web), a backend, and a physical mechanism.
In terms of just codebase, the iOS app has over 2600 lines of code, and in total, we assembled around 5k lines of code. We completed and printed a prototype of our design and tested it with actual motors, confirming that our design’s specs were accurate as per the initial model. ## What we learned Working on LocPill at PennApps gave us a unique chance to learn by doing. Laser cutting, Solidworks Design, 3D printing, setting up Arduino/iOS bluetooth connections, Arduino coding, database matching between front ends: these are just the tip of the iceberg in terms of the skills we picked up during the last 36 hours by diving into challenges rather than relying on a textbook or being formally taught concepts. While the skills we picked up were extremely valuable, our ultimate takeaway from this project is the confidence that we could pave the path in front of us even if we couldn’t always see the light ahead. ## What's next for LocPill While we built a successful prototype during PennApps, we hope to formalize our design further before taking the idea to the Rothberg Catalyzer in October, where we plan to launch this product. During the first half of 2019, we plan to submit this product at more entrepreneurship competitions and reach out to healthcare organizations. During the second half of 2019, we plan to raise VC funding and acquire our first deals with healthcare providers. In short, this idea only begins at PennApps; it has a long future ahead of it.
## Inspiration Nikola Tesla, green energy advancements, fighting climate change, promoting sustainability, the NASA x BYU collab (compliant mechanisms) ## What it does Tracks the sun to make the solar panels more efficient ## How we built it Using the Arduino IDE with an Uno, photoresistors to send signals to the Arduino, recycled materials from other hackers and students at the University of Ottawa's Makerspace, and various cables and resistors from other projects. ## Challenges we ran into We originally wanted to use a bending mechanism (compliant mechanism) for the movement, but the print did not turn out well (the support was too hard to remove without damaging the needed structure) and we had a very strict timeline. We switched to a simpler axle design made from a chopstick that would have otherwise been thrown out. ## Accomplishments that we're proud of Using reused and recycled materials almost exclusively and creating a device that actually works and is complete (and our code basically working on the first try). ## What we learned We gained lots of experience with Arduino, soldering, using photoresistors, using solar panels as a power source, and working as a team under pressure. ## What's next for Recycle Everything Under the Sun Working on more sophisticated solar or green energy projects in future hackathons or other engineering competitions or events. Expanding the range of motion from 180 degrees to a half-sphere.
winning
## Inspiration Tired of letting your food go to waste? Want some free food after a long day? You're at the right place ## What it does Introducing MunchMap, your very own virtual community fridge! Say goodbye to food waste and hunger with our innovative platform designed to transform your community into a hub of sharing and caring. ## How we built it React, Node, PostgreSQL, Tailwind ## Challenges we ran into * Tracking geographic location ## Accomplishments that we're proud of * Full stack, connected and completed app ## What's next for MunchMap * Adding scheduling for customizability
## Inspiration I was cooking at home one day and I kept noticing we had half a carrot, half an onion, and like a quarter of a pound of ground pork lying around all the time. More often than not it was from me cooking a fun dish that my mother has to somehow clean up over the week. So I wanted to create an app that would help me use the ingredients I have neglected, so that even if both my mother and I forget about them, we would not contribute to food waste. ## What it does Our app uses a database to store our user's fridge and keeps track of the food in it. When the user wants a recipe recommendation, our app will help them finish off those ingredients before they become food waste. Using the power of ChatGPT, our app is super flexible: all the unknown food, and food that you are too lazy to weigh, can be quickly put into a flexible and delicious recipe. ## How we built it Using Figma for design, React.js with Bootstrap for the frontend, a Flask backend, a MongoDB database, and OpenAI APIs, we were able to create this stunning-looking demo. ## Challenges we ran into We messed up our database schema and made poor design choices in our APIs, resulting in a complete refactor. Our group also ran into problems with React, as we were relearning it. The OpenAI API gave us inconsistency problems too. We pushed past these challenges together by dropping our immediate work and thinking of a solution together. ## Accomplishments that we're proud of We finished our demo and it looks good. Our dev-ops practices were professional and efficient, and our kanban board saved us a lot of time when planning and implementing tasks. We also wrote plenty of documentation; after our first bout of failure, we planned out everything with our group. ## What we learned We learned the importance of good API design and planning to save headaches when implementing our API endpoints. We also learned much about the nuances and intricacies of CORS. Another interesting thing we learned is how to write detailed prompts to retrieve formatted data from LLMs. ## What's next for Food ResQ : AI Recommended Recipes To Reduce Food Waste We are planning to add a receipt-scanning feature so that our users would not have to manually add each ingredient into their fridge. We are also working on a feature where we would prioritize ingredients that are closer to expiry. Another feature we are looking at is notifications to remind our users that their ingredients should be used soon, to drive up our engagement more. We are looking for payment processing vendors to allow our users to operate the most advanced LLMs at a slight premium, for less than a coffee a month. ## Challenges, themes, prizes we are submitting for Sponsor Challenges: None Themes: Artificial Intelligence & Sustainability Prizes: Best AI Hack, Best Sustainability Hack, Best Use of MongoDB Atlas, Most Creative Use of Github, Top 3 Prize
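A minimal sketch of the recommendation step described above: send the stored fridge contents to the OpenAI API and ask for strict JSON back. The model name, prompt wording, and fridge schema are assumptions; the real Flask routes and MongoDB queries are omitted.

```python
# Hypothetical sketch: turn neglected fridge items into a recipe suggestion via an LLM.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_recipe(fridge_items):
    """fridge_items like [{"name": "carrot", "amount": "half"}, ...]"""
    prompt = (
        "Suggest one recipe that uses up as many of these leftover ingredients as possible. "
        "Reply with JSON only, using the keys 'title', 'ingredients', and 'steps'.\n"
        f"Ingredients: {json.dumps(fridge_items)}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # helps with the formatting inconsistencies noted above
    )
    return json.loads(resp.choices[0].message.content)
```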
Team channel #43 Team Discord users - Sarim Zia #0673, Elly #2476, (ASK), rusticolus #4817, Names - Vamiq, Elly, Sarim, Shahbaaz ## Inspiration When brainstorming an idea, we concentrated on problems that affected a large population and that mattered to us. Topics such as homelessness, food waste, and a clean environment came up in discussion. FULLER was able to incorporate all our ideas and ended up being a multifaceted solution that helps support the community. ## What it does FULLER connects charities and shelters to local restaurants with uneaten food and unused groceries. As food prices begin to increase along with homelessness and unemployment, we decided to create FULLER. Our website serves as a communication platform between both parties. A scheduled pick-up time is inputted by restaurants, and charities are able to easily access a listing of restaurants with available food or groceries for contactless pick-up later in the week. ## How we built it We used React.js to create our website, coding in HTML, CSS, and JS, and built the rest of the MERN stack with Node.js, Express.js, and a MongoDB backend database, with bcrypt for authentication. ## Challenges we ran into A challenge that we ran into was communication about how the code was organized. This led to setbacks, as we had to fix up the code, which sometimes required us to rewrite lines. ## Accomplishments that we're proud of We are proud that we were able to finish the website. Half our team had no prior experience with HTML, CSS, or React; despite this, we were able to create a fair outline of our website. We are also proud that we were able to come up with a viable solution to help out our community that is potentially implementable. ## What we learned We learned that when collaborating on a project it is important to communicate, more specifically about how the code is organized. As previously mentioned, we had trouble editing and running the code, which caused major setbacks. In addition to this, two team members were able to learn HTML, CSS, and JS over the weekend. ## What's next for us We would want to create more pages on the website to have it fully functional, as well as clean up the front end of our project. Moreover, we would also like to look into how to implement the project to help out those in need in our community.
losing
## Inspiration Garbage bins around cities are constantly overflowing. Our goal was to create a system that better allocates time and resources to help prevent this problem, while also positively impacting the environment. ## What it does Urbins provides a live monitoring web application that displays the live capacity of both garbage and recycling compartments using ultrasonic sensors. This functionality can be seen inside the prototype garbage bin. The bin uses a cell phone camera to send an image to the custom learning model built with IBM Watson. The results from the Watson model are used to classify each object placed in the bin so that it can be sorted into either garbage or recycling. Based on the classification, the Android application controls the V-shaped platform using a servo motor to tilt the platform and drop the item into its correct bin. Once a garbage/recycling bin nears full capacity, StdLib is used to notify city workers via SMS that bins at a given address are full. Machine learning is applied when an object cannot be classified. When this happens, the image of the object is sent via StdLib to Slack. Along with the image, response buttons are displayed in Slack, which allows a city worker to manually classify the item. Once a selection is made, the new classification is used to further train the Watson model. This updated model is then used by all the connected smart garbage bins, allowing all the bins to learn. ## Challenges we ran into * Integrating all the components * Learning to use IBM Watson * Providing the set of images for IBM Watson (it needed to be a zip file containing at least 10 photos to update the model) ## Accomplishments that we're proud of * Integrating all the components * Getting IBM Watson working * Getting StdLib working * Training IBM Watson using StdLib ## What we learned * How to use IBM Watson * How to effectively plan a project * Designing an effective architecture * How to use StdLib ## What's next for Urbins * Accounts * An algorithm for the optimal route for a shift * A dashboard with map areas, floor plans, housing plans, and event maps * A heat map on Google Maps * A bar chart of stats over the past 6 months (which bin was the most frequently filled?) * Product information and brand data
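For the live-capacity display, the ultrasonic reading has to be turned into a fill percentage and compared against a notification threshold. The bin depth, threshold, and notify stub below are assumptions sketched in Python; the real build reports over StdLib SMS.

```python
# Illustrative fill-level math for one compartment of the bin.
BIN_DEPTH_CM = 80.0        # sensor-to-bottom distance when the compartment is empty (assumed)
FULL_THRESHOLD = 0.85      # notify once a compartment is 85% full (assumed)

def fill_fraction(distance_cm):
    """The ultrasonic sensor reports the distance down to the top of the pile."""
    fraction = 1.0 - (distance_cm / BIN_DEPTH_CM)
    return min(max(fraction, 0.0), 1.0)

def check_bin(distance_cm, address, compartment):
    level = fill_fraction(distance_cm)
    if level >= FULL_THRESHOLD:
        notify(f"{compartment} bin at {address} is {level:.0%} full")

def notify(message):
    print("SMS:", message)   # placeholder for the StdLib SMS integration
```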
## Inspiration With caffeine being a staple in almost every student's lifestyle, many are unaware of the amount of caffeine in their drinks. Although a small dose of caffeine increases one's ability to concentrate, higher doses may be detrimental to physical and mental health. This inspired us to create The Perfect Blend, a platform that allows users to manage their daily caffeine intake, with the aim of preventing students from spiralling into a coffee addiction. ## What it does The Perfect Blend tracks caffeine intake and calculates how long it takes to leave the body, ensuring that users do not consume more than the daily recommended amount of caffeine. Users can add drinks from the given options and the tracker will update. Moreover, The Perfect Blend educates users on how the quantity of caffeine affects their bodies with verified data and informative tier lists. ## How we built it We used Figma to lay out the design of our website, then implemented it in Velo by Wix. The back end of the website is coded using JavaScript. Our domain name was registered with domain.com. ## Challenges we ran into This was our team's first hackathon, so we decided to use Velo by Wix as a way to speed up the website-building process; however, Wix only allows one person to edit at a time. This significantly decreased the efficiency of developing the website. In addition, Wix has building blocks and set templates, making customization more difficult. Our team had no previous experience with JavaScript, which made the process more challenging. ## Accomplishments that we're proud of This hackathon allowed us to improve our web design abilities and further develop our coding skills. As first-time hackers, we are extremely proud of our final product. We developed a functioning website from scratch in 36 hours! ## What we learned We learned how to lay out and use colours and shapes in Figma. This helped us a lot while designing our website. We discovered several convenient functionalities that Velo by Wix provides, which strengthened the final product. We learned how to customize the back-end development with a new coding language, JavaScript. ## What's next for The Perfect Blend Our team plans to add many more coffee types and caffeinated drinks, ranging from teas to energy drinks. We would also like to implement more features, such as saving tracker progress to compare days and producing weekly charts.
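The "how long it takes to leave the body" calculation comes down to first-order elimination. The sketch below uses a roughly five-hour half-life, a commonly cited average and an assumption of this example; the real site implements this in JavaScript on Velo by Wix.

```python
# Illustrative caffeine-decay math behind the tracker.
import math

HALF_LIFE_HOURS = 5.0  # assumed average adult half-life

def caffeine_remaining(initial_mg, hours_elapsed):
    """Exponential decay: milligrams still circulating after a given number of hours."""
    return initial_mg * 0.5 ** (hours_elapsed / HALF_LIFE_HOURS)

def hours_until_below(initial_mg, target_mg=30.0):
    """How long until the dose drops under a chosen threshold (default 30 mg)."""
    if initial_mg <= target_mg:
        return 0.0
    return HALF_LIFE_HOURS * math.log2(initial_mg / target_mg)

# Example: a 150 mg coffee takes about 11.6 hours to fall below 30 mg.
```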
## Inspiration ## What it does Alfredo is a tool to keep track of the inventory in one's fridge. Unlike many existing smart fridges that scan your receipts in order to register recently added items, it uses the speech recognition feature of a smart home assistant, e.g. an Amazon Echo, to add and remove items. Additionally, it keeps track of the amount of time a certain product has been stored in the fridge. The inventory is accessible even when you are not at home, making your trip to the grocery store easier. Additionally, Alfredo's inventory can keep a record of your eating or food waste habits. ## How we built it 1. Set up the custom skill with Alexa's developer console, using JSON to create the Alfredo intents 2. Pass intents to the Lambda function (written in Python) 3. Set up and manipulate the MySQL database through the Lambda function (cry a bit) 4. Wrap everything up in a beautiful website platform for nutrition analytics 5. Learn about your eating habits. 6. PROFIT ## Challenges we ran into None of us knew about databases, but we are working on that. We also had very little experience in web development. ## Accomplishments that we are proud of Being able to stitch our parts together. ## What we learned That AWS is less straightforward than we thought. ## What's next for Alfredo We would like to have a built-in camera that would keep track of the items entering and exiting the fridge. Using a large database to get a good estimate of products' expiration dates, and having the home assistant give meal suggestions using the available food in the fridge.
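Steps 2 and 3 of the pipeline above (Alexa intent to Lambda to MySQL) might look roughly like the sketch below. The slot name, table layout, and use of PyMySQL are assumptions; the real handler also has to build Alfredo's spoken replies for other intents.

```python
# Hypothetical Lambda handler for an "add item" intent.
import datetime
import pymysql

def lambda_handler(event, context):
    intent = event["request"]["intent"]
    item = intent["slots"]["Item"]["value"]          # e.g. "milk" (slot name assumed)

    conn = pymysql.connect(host="db-host", user="alfredo", password="***", database="fridge")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO inventory (item, added_on) VALUES (%s, %s)",
                (item, datetime.date.today()),
            )
        conn.commit()
    finally:
        conn.close()

    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": f"Added {item} to your fridge."},
            "shouldEndSession": True,
        },
    }
```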
winning
## Inspiration When it comes to finding solutions to global issues, we often feel helpless, as if our small impact will not help the bigger picture. Climate change is a critical concern of our age; however, the extent of this matter often reaches beyond what one person can do... or so we think! Inspired by the feeling of "not much we can do", we created *eatco*. *Eatco* allows the user to gain live updates and learn how their usage of the platform helps fight climate change. This allows us to not only present users with a medium to make an impact but also helps spread information about how mother nature can heal. ## What it does While *eatco* is centered around providing an eco-friendly alternative lifestyle, we narrowed our approach to something everyone loves and can adapt to: food! Besides the many health benefits of adopting a vegetarian diet, such as lowering cholesterol intake and protecting against cardiovascular diseases, a meatless diet also allows you to reduce greenhouse gas emissions, which contribute to 60% of our climate crisis. Providing users with a vegetarian (or vegan!) alternative to their favourite foods, *eatco* aims to use small wins to create a big impact on the issue of global warming. Moreover, with an option to connect their *eatco* account with Spotify, we engage our users and make them love the cooking process even more by using their personal song choices, mixed with the flavours of our recipe, to create a personalized playlist for every recipe. ## How we built it For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. There are three main pages the site routes to: the main app, and the login and register pages. The login pages utilized a minimalist aesthetic with a CSS style sheet integrated into an HTML file, while the recipe pages used React to work with the database. Because we wanted to keep the user experience cohesive and reduce the delay of rendering different pages through the backend, the main app (recipe searching and viewing) occurs on one page. We also wanted to reduce the wait time for fetching search results, so rather than rendering a new page and searching again for the same query, we use React to hide and render the appropriate components. We built the backend using the Flask framework. The required functionalities were implemented using specific libraries in Python as well as certain APIs. For example, our web search API utilized the googlesearch and beautifulsoup4 libraries to access search results for vegetarian alternatives and return relevant data using web scraping. We also made use of the Spotify Web API to access metadata about the user's favourite artists and tracks to generate a personalized playlist based on the recipe being made. Lastly, we used a MongoDB database to store and access user-specific information such as their username, trees saved, recipes viewed, etc. We made multiple GET and POST requests to update the user's info, i.e. saved recipes and recipes viewed, as well as making use of our web scraping API that retrieves recipe search results using the recipe query users submit. ## Challenges we ran into In terms of the front end, we should have considered implementing routing earlier, because when it came to doing so afterward, it would have been too complicated to split up the main app page into different routes; this, however, ended up working out alright as we decided to keep the main page in one main component.
Moreover, integrating animation transitions with React was something we hadn't done, and if we had more time we would've liked to add them in. Finally, only one of us working on the front end was familiar with React, so balancing what was familiar (HTML) and being able to integrate it into the React workflow took some time. Implementing the backend, particularly the Spotify playlist feature, was quite tedious, since some aspects of the Spotify Web API were not well explained in online resources and hence we had to rely solely on the documentation. Furthermore, having web scraping and APIs in our project meant that we had to parse a lot of dictionaries and lists, making sure that all our keys were exactly correct. Additionally, since Python dictionaries print with single quotes, we had many issues getting proper double quotes when converting them to JSON. The JSON for the recipes also often had quotation marks in the title, so we had to carefully replace these before the recipes were themselves returned. Later, we also ran into issues with rate limiting, which made it difficult to consistently test our application as it would send too many requests in a small period of time. As a result, we had to increase the pause interval between requests when testing, which made it a slow and time-consuming process. Integrating the Spotify API calls on the backend with the frontend proved quite difficult. This involved making sure that the authentication and redirects were done properly. We first planned to do this with a popup that called back to the original recipe page, but given the complexity of this task, we switched to having the playlist open in a separate page. ## Accomplishments that we're proud of Besides our main idea of helping users improve their carbon footprint, we are proud of accomplishing our Spotify integration. The Spotify API and its metadata were something none of the team had worked with before, and we're glad we learned the new skill because it adds great character to the site. We all love music, and being able to use metadata for personalized playlists satisfied our inner musical geek; the integration turned out great, so we're really happy with the feature. Along with our vast recipe database thus far, we are also proud of our integration! Creating a full-stack database application can be tough, and putting together all of our different parts was quite hard, especially as it's something we have limited experience with; hence, we're really proud of our service layer for that. Finally, this was the first time our front-end developers used React for a hackathon; using it in a time- and resource-constrained environment for the first time and managing to do it as well as we did is also one of our greatest accomplishments. ## What we learned This hackathon was a great learning experience for all of us because everyone delved into a tool that they'd never used before! As a group, one of the main things we learned was the importance of a good git workflow, because it allows all team members a medium to collaborate efficiently by combining individual parts. Moreover, we also learned about Spotify embedding, which not only gave *eatco* a great feature but also provided us with exposure to metadata and API tools. We also learned more about creating a component hierarchy and routing on the front end.
Another new tool that we used in the back end was performing database operations on a cloud-based MongoDB Atlas database from a Python script using the pymongo API. This allowed us to complete our recipe database, which was the biggest functionality in *eatco*. ## What's next for Eatco Our team is proud of what *eatco* stands for and we want to continue this project beyond the scope of this hackathon and join the fight against climate change. We truly believe in this cause and feel *eatco* has the power to bring meaningful change; thus, we plan to improve the site further and release it as a web platform and a mobile application. Before making *eatco* available to users publicly, we want to add more functionality, further improve the database, and present the user with a more accurate update of their carbon footprint. In addition to making our recipe database bigger, we also want to focus on enhancing the front end for a better user experience. Furthermore, we also hope to include features such as connecting to maps (if the user doesn't have a certain ingredient, they will be directed to the nearest facility where that item can be found), and better use of the Spotify metadata to generate even better playlists. Lastly, we also want to add a water-saved feature to help address the global water crisis, because eating green also helps cut back on wasteful water consumption! We firmly believe that *eatco* can go beyond the range of the last 36 hours and make an impactful change on our planet; hence, we want to share with the world how global issues don't always need huge corporate or public support to be solved; one person can also make a difference.
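As a rough picture of what one of the GET/POST user-update endpoints does with pymongo, the sketch below records a viewed recipe and bumps a "trees saved" counter. The route, collection, and field names are assumptions based on the description above.

```python
# Hypothetical Flask endpoint updating a user's stats in MongoDB Atlas.
import os
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
users = MongoClient(os.environ["MONGO_URI"])["eatco"]["users"]

@app.route("/users/<username>/recipes", methods=["POST"])
def record_recipe(username):
    recipe = request.get_json()   # e.g. {"title": "...", "url": "..."}
    users.update_one(
        {"username": username},
        {"$push": {"recipes_viewed": recipe}, "$inc": {"trees_saved": 1}},
        upsert=True,
    )
    doc = users.find_one({"username": username}, {"_id": 0})
    return jsonify(doc)
```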
## Inspiration We recognized how much time meal planning can take, especially for busy young professionals and students who have little experience cooking. We wanted to provide an easy way to buy healthy, sustainable meals for the week, without compromising the budget or harming the environment. ## What it does Similar to services like "Hello Fresh", this is a web app for finding recipes and delivering the ingredients to your house. This is where the similarities end, however. Instead of shipping the ingredients to you directly, our app makes use of local grocery delivery services, such as the one provided by Loblaws. The advantages to this are two-fold: first, it helps keep the price down, as your main fee is for the groceries themselves, instead of paying large amounts in fees to a meal kit company. Second, this is more eco-friendly. Meal kit companies traditionally repackage the ingredients in-house into single-use plastic packaging before shipping it to the user, along with large coolers and ice packs which are mostly never re-used. Our app adds no additional packaging beyond what the groceries initially come in. ## How we built it We made a web app, with the client-side code written using React. The server was written in Python using Flask, and was hosted on the cloud using Google App Engine. We used MongoDB Atlas, also hosted on Google Cloud. On the server, we used the Spoonacular API to search for recipes, and Instacart for the grocery delivery. ## Challenges we ran into The Instacart API is not publicly available, and there are no public APIs for grocery delivery, so we had to reverse-engineer this API to allow us to add things to the cart. The Spoonacular API was down for about 4 hours on Saturday evening, during which time we almost entirely switched over to a less functional API, before it came back online and we switched back. ## Accomplishments that we're proud of Creating a functional prototype capable of facilitating the ordering of recipes through Instacart. Learning new skills, like Flask, Google Cloud, and, for some of the team, React. ## What we've learned How to reverse-engineer an API, using Python as a web server with Flask, Google Cloud, new APIs, MongoDB ## What's next for Fiscal Fresh Add additional functionality on the client side, such as browsing by popular recipes
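The recipe-search half of the server could look something like the sketch below against Spoonacular's complexSearch endpoint; the exact parameters the team used are assumptions, and the reverse-engineered Instacart cart calls are deliberately left out since that API is not public.

```python
# Hypothetical recipe search against the Spoonacular API.
import os
import requests

SPOONACULAR_KEY = os.environ.get("SPOONACULAR_KEY", "")

def search_recipes(query, number=10):
    resp = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params={"query": query, "number": number, "apiKey": SPOONACULAR_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return [{"id": r["id"], "title": r["title"]} for r in resp.json().get("results", [])]

# Example: search_recipes("vegetarian chili")
```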
## Inspiration We have all heard about nutrition and health issues from those who surround us. Yes, adult obesity has plateaued since 2003, but it remains at an extremely high rate: two out of three U.S. adults are overweight or obese. If we look at diabetes, it's prevalent in 25.9% of Americans over 65. That's 11.8 million people! Those are the most common instances; let's not forget about people affected by high blood pressure, allergies, and digestive or eating disorders: the list goes on. We've created a user-friendly platform that utilizes Alexa to help users create healthy recipes tailored to their dietary restrictions. The voice interaction allows a broad range of ages to learn how to use our platform. On top of that, we provide a hands-free environment to ease multitasking, and users are more inclined to follow the diet since it's simple and quick to use. ## How we built it The backend is built with Flask on Python, with the server containerized and deployed on AWS, served over Nginx and WSGI. We also built this with scale in mind, as it should be able to scale to many millions of users; by containerizing the server with Docker and hosting it on AWS, scaling it horizontally is as easy as scaling it vertically, with a few clicks on the AWS dev console. The front end is powered by Bootstrap and Jinja (a Python templating framework) and interfaces with a MySQL database on AWS through Flask's object-relational mapping. All in all, Ramsay is a product built on sweat, pressure, lack of sleep and <3 ## Challenges we ran into The deployment pipeline for Alexa is extremely cumbersome due to the fact that Alexa has a separate dev console and debugging has to be done on the page. The way Lambda handles code changes is also extremely inefficient. It took a big toll on the development cycle and caused a lot of frustrating debugging sessions. It was also very time-consuming for us to manually scrape all the recipe and ingredient data from the web, because there is no open-source recipe API that satisfies our needs. Many of them are either costly or have rate-limit restrictions on the endpoints for the free tier, which we were not content with because we wanted to provide a wide range of recipe selection for the user. Scraping different sites gave us a lot of dirty data that required a lot of work to make usable. We ended up using NLTK to employ noun and entity extraction to get meaningful data from a sea of garbage. ## Accomplishments that we're proud of We managed to build out an Alexa/Lambda deployment pipeline that utilizes AWS S3 buckets and sshfs. The local source files are mounted on a remote S3 bucket that syncs with the Lambda server, enabling the developer to skip the hassle of manually uploading the files to the Lambda console every time there is a change in the codebase. We also built up a very comprehensive recipe database with over 10,000 recipes and 3,000 ingredients that gives the user tons of selection. This is also the first Alexa app that we made that has a well-thought-out user experience, and it works surprisingly well. For once, Alexa is not super confused every time a user asks a question. ## What we learned: We learnt how to web scrape using the NLTK and BeautifulSoup Python libraries. This was essential for creating a database containing information about ingredients and recipe steps as well. We also became more proficient in using git and SQL. We are now git sergeants and SQL soldiers.
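The cleanup step described above, scraping a page and keeping only the nouns so ingredient names survive the dirty data, can be sketched like this. The URL handling is generic and the tag filter follows NLTK's Penn Treebank tagset; the team's actual scrapers and entity rules are not shown.

```python
# Sketch of noun extraction from a scraped recipe page with BeautifulSoup and NLTK.
import nltk
import requests
from bs4 import BeautifulSoup

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def extract_ingredient_nouns(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    # NN/NNS are singular/plural nouns: enough to keep "chicken", "onions", etc.
    return sorted({word.lower() for word, tag in tagged if tag in ("NN", "NNS")})
```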
## What's next for Ramsay: Make up for the sleep that we missed out on over the weekend :')
winning
## Inspiration When we joined the hackathon, we began brainstorming about problems in our lives. After discussing some constant struggles with many friends and family, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body. As a result, we decided to create a user-friendly multi-modal model that can discover skin discomfort through a simple picture. Then, through accessible communication with a dermatologist-like chatbot, users can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or finding the time to go and wait for a doctor, it is an accessible way to immediately understand the blemishes that appear on one's skin. ## What it does The app is a skin-detection model that detects skin diseases through pictures. Through a multi-modal neural network, we attempt to identify the disease by training on thousands of data entries from actual patients. Then, we provide users with information on their disease, recommendations on how to treat it (such as using specific SPF sunscreen or over-the-counter medications), and finally, their nearest pharmacies and hospitals. ## How we built it Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset covering around 2,000 patients with multiple diseases. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality. ## Challenges we ran into The first challenge we faced was finding the appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set. Thus, we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We finally chose the most suitable number of epochs by plotting the loss vs. epoch and accuracy vs. epoch graphs. Another challenge was utilizing the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text and hallucinating in response to specific prompts. We fixed this by grounding its output in the information that the user gave. ## Accomplishments that we're proud of We are all proud of the model we trained and put together, as this project had many moving parts.
This experience has had its fair share of learning moments and pivots in direction. However, through a great deal of discussion about exactly how we could adequately address our issue, and by supporting each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about thinking quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying. ## What we learned One of the things we learned from this experience was how to use prompt engineering effectively and ground an AI model in user-provided information. We also learned how to incorporate multi-modal data to be fed into a generalized convolutional and feed-forward neural network. In general, we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience by building a comprehensive model like SkinSkan, we were also able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to use our app on our own skin and that of several friends, using a simple smartphone camera, to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people. ## What's next for SkinSkan We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct patients to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
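A compact sketch of the multi-modal idea described in the build section: a ResNet branch for the photo, a small feed-forward branch for the clinical fields, and a fused classification head. It is written here in PyTorch, and the layer sizes, number of clinical features, and class count are all assumptions rather than the team's actual configuration.

```python
# Illustrative image + clinical-data fusion model.
import torch
import torch.nn as nn
from torchvision import models

class SkinClassifier(nn.Module):
    def __init__(self, num_clinical=8, num_classes=27):
        super().__init__()
        self.cnn = models.resnet18(weights=None)
        self.cnn.fc = nn.Identity()                       # keep the 512-dim image embedding
        self.tabular = nn.Sequential(nn.Linear(num_clinical, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(512 + 32, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), self.tabular(clinical)], dim=1)
        return self.head(fused)

# Example shapes: model(torch.randn(4, 3, 224, 224), torch.randn(4, 8)) -> (4, 27) logits
```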
## Inspiration With an ever-increasing rate of crime and internet deception on the rise, cyber fraud has become one of the premier methods of theft across the world. From frivolous scams like phishing attempts to the occasional Nigerian prince who wants to give you his fortune, it's all too easy for the common person to fall into the hands of an online predator. With this project, I attempted to amend this situation, beginning by focusing on the aspect of document verification and credentialization. ## What does it do? SignRecord is an advanced platform hosted on Ethereum and the Inter-Planetary File System (an advanced peer-to-peer hypermedia protocol, built with the intention of making the web faster, safer, and more open). Connected with secure DocuSign REST APIs, and the power of smart contracts to store data, SignRecord acts as an open-sourced, widespread ledger of public information and the average user's information. By allowing individuals to host their data, media, and credentials on the ledger, it gives them the safety and security of having a proven blockchain verify their identity, protecting them not only from identity fraud but also from potential wrongdoers. ## How I built it SignRecord is a responsive web app backed with the robust power of both Node.js and Hyperledger. With authentication handled by MongoDB, routing by Express, the front end through a combination of React and Pug, and asynchronous requests through Promises, it offers a fool-proof solution. Not only that, but I've also built and incorporated my own external API, so that other fellow developers can easily integrate my platform directly into their applications. ## Challenges I ran into The real question should be: what challenge didn't I run into? From simple mistakes like missing a semi-colon to significant headaches figuring out deprecated dependencies and packages, this development was nothing short of a roller coaster. ## Accomplishments that I'm proud of Of all of the things that I'm proud of, my usage of the Ethereum blockchain, the DocuSign APIs, and the collective UI/UX of my application stand out as the most significant achievements I made in this short 36-hour period. I'm especially proud that I was able to accomplish what I did alone. ## What I learned Like any good project, I learnt more than I could have imagined. From learning how to use advanced MetaMask libraries to building my very own API, this journey was nothing short of a race with hurdles at every mark. ## What's next for SignRecord With the support of fantastic mentors, a great hacking community, and the fantastic sponsors, I hope to be able to continue expanding my platform in the near future.
## Inspiration As students around 16 years old, skin conditions such as acne make us even more self-conscious than we already are. Furthermore, one of our friends is currently suffering from eczema, so we decided to make an app relating to skin care. While brainstorming ideas, we realized that the elderly are affected by more skin conditions than younger people. Some of these skin diseases can develop into skin cancer if left unchecked. ## What it does Ewmu is an app that can assist people with various skin conditions. It utilizes machine learning to provide an accurate evaluation of an individual's skin condition. After analyzing the skin, Ewmu returns topical creams or over-the-counter medications that can alleviate the user's symptoms. ## How we built it We built Ewmu by splitting the project into 3 distinct parts. The first part involved developing and creating the machine learning backend model using Swift and the CoreML framework. This model was trained on datasets from Kaggle.com, from which we procured over 16,000 images of various skin conditions ranging from atopic dermatitis to melanoma. 200 iterations were used to train the ML model, and it achieved over 99% training accuracy, 62% validation accuracy, and 54% testing accuracy. The second part involved deploying the ML model on a Flask backend, which provided an API endpoint for the frontend to call and send the image to. The Flask backend fed the image data to the ML model, which gave the classification and label for the image. The result was then returned to the frontend, where it was displayed. The frontend was built with React.js and many libraries that created a dashboard for the user. In addition, we used libraries to take a photo of the user and then encode that image to a base64 string, which was sent to the Flask backend. ## Challenges we ran into One challenge we ran into was deploying the ML model to a Flask backend because of compatibility issues between Apple and other platforms. Another challenge was managing state within React and trying to get a still image from the webcam, encoding it to base64, and finally sending it over to the backend Flask server, which then returned a classification. ## Accomplishments that we're proud of * Skin condition classifier ML model + 99% training accuracy + 62% validation accuracy + 54% testing accuracy We're really proud of creating that machine learning model, since we are all first-time hackers and hadn't used any ML or AI software tools before, which marked a huge learning experience and milestone for all of us. This includes learning how to use Swift on the day of, and also cobbling together multiple platforms and applications: backend, ML model, frontend. ## What we learned We learned that time management is all too crucial!! We're writing this within the last 5 minutes as we speak LMAO. From the technical side, we learned how to use React.js to build a working and nice UI/UX frontend, along with building a Flask backend that could host our custom-built ML model. The biggest thing we took away from this was being open to new ideas and learning all that we could under such a short time period! * TIL uoft kids love: ~~uwu~~ ## What's next for Ewmu We're planning on allowing dermatologists to connect with their patients on the website. Patients will be able to send photos of their skin condition to doctors.
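The Flask side of the pipeline above, receiving the base64 webcam frame from React and returning a label, might look roughly like this sketch. The route name, JSON shape, and classify stub are assumptions; the real classifier is the CoreML model described earlier.

```python
# Hypothetical classification endpoint for the base64 frame sent by the frontend.
import base64
import io
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def classify(image):
    return "atopic dermatitis"   # placeholder for the trained skin-condition model

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()
    b64 = data["image"].split(",")[-1]   # strip a possible "data:image/...;base64," prefix
    image = Image.open(io.BytesIO(base64.b64decode(b64))).convert("RGB")
    return jsonify({"label": classify(image)})
```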
winning
## Inspiration In emergency situations like fires, every second counts, and clear communication is crucial. We were inspired by the potential of Boston Dynamics' Spot to assist in rescue operations by bridging language barriers and navigating hazardous environments, made possible with flame-retardant polyurethane foam used to make Spot. The idea of a multilingual robotic assistant that could locate victims and aid firefighters motivated us to create Pyro Machitis. ## What We Learned Developing Pyro Machitis taught us the importance of interdisciplinary collaboration. We gained hands-on experience with robotics integration, real-time language processing, and user interface design. We also learned how to use multiple APIs together to make a functioning application. ## How We Built the Project **Hardware Integration:** Enhanced the Boston Dynamics Spot robot with high-resolution cameras to detect victims in a fire. **Software Development:** Utilized the Bosdyn API to program movement software controlled remotely and relay the video and audio to the controller. **Multilingual Communication:** Leveraged Groq's AI accelerators and Google Cloud Platform to enable real-time translation and communication in over 15 languages. **User Interface:** Created an intuitive UI using HTML, CSS, and Flask for operators to monitor and control the robot effectively. **Backend Systems:** Integrated systems using Bosdyn for robot control, Groq for AI processing, and GCP for data management. ## Challenges We Faced **New Technology:** Working with Boston Dynamics was a first for all of us, and it required us to read and understand complex documentation and conduct extensive testing to make it work as intended. Accessing Spot's hardware had complex requirements that we had to overcome. **Integration Complexity:** Combining hardware and software from multiple platforms required extensive testing and problem-solving, especially since the hardware can function remotely without being physically connected to the front end.
## Inspiration The 2016 Fort McMurray wildfires were the worst in Albertan history. It was a transformational moment for the lives of thousands of Albertans, including us. The impact that this event had on us, combined with our passion for technology, inspired us to come up with novel approaches to disaster response, a field that is relatively lacking in new technology. ## What it does OnSight is a fully integrated end-to-end solution that connects first responders, coordinators, and citizens into one centralized disaster response coordination system. As citizens make 911 calls, the Google Assistant or Amazon Alexa can extract important details, which are transmitted to a Realtime Database in the cloud. These reports are processed using the Google Places API to obtain the exact location of the issue. The details of these reports, along with a map showing their locations, are displayed on a web portal. Control Centre personnel can monitor these reports in real time and relay them to first responders. First responders are equipped with IoT smart glasses, which connect to nearby Wi-Fi networks and pull new data from the database in real time. The data is processed on the edge, where it is intelligently parsed and displayed according to priority. ## How I built it Voice recognition: The conversational functionality of this component was built using Voiceflow and integrated into the Google Assistant and Amazon Alexa. The Google Places API and Firebase database were integrated into Voiceflow using the REST protocol. Dashboard: We built the dashboard using Vue.js. It pulls data from the database in real time and dynamically populates the page with the newest and highest-priority reports. Along with these reports, an interactive map of each report's location is displayed. Smart glasses: The brain of the smart glasses is a NodeMCU ESP8266, an IoT-enabled development board that runs on the Arduino framework. The board is connected to an SSD1306 OLED screen, which is reflected and magnified using a series of mirrors and lenses such that it appears in the wearer's field of view. ## Challenges I ran into The biggest challenge we ran into with this project was getting the right magnification for the optics system. With only our high-school level of optics education, we did not calculate the required lens parameters correctly. As a result, we had to mount our system farther forward on the glasses than we anticipated. Had we chosen a more appropriate magnification, the system could be mounted much farther back, resulting in a slimmer profile. Parsing JSON objects from the Firebase database on the glasses also proved to be quite difficult, due to the low-level nature of the C programming language used on the NodeMCU. ## Accomplishments that I'm proud of Each member of our team worked to develop separate parts of the system. Not only were we able to get all three parts working, but we were also able to integrate them through various handshake protocols. The planning and coordination of this project is something that we are very proud of. ## What I learned -Various API calls -We got to do all kinds of development, ranging from web all the way to hardware -Optics is hard ## What's next for OnSight -Fix magnification -Natural language processing to streamline the speech recognition process into fewer steps -Location pinging on the glasses to prioritize reports by proximity to the wearer
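One plausible shape for the report pipeline above, resolving the caller's described location through the Google Places API and writing the result to the Realtime Database over REST, is sketched below. The database URL, report fields, and priority value are assumptions of this example.

```python
# Hypothetical report filing step: text description -> coordinates -> Realtime Database.
import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/findplacefromtext/json"
DB_URL = "https://onsight-demo.firebaseio.com/reports.json"   # placeholder project URL

def file_report(description, place_text, api_key):
    resp = requests.get(PLACES_URL, params={
        "input": place_text,
        "inputtype": "textquery",
        "fields": "geometry,formatted_address",
        "key": api_key,
    }, timeout=10)
    resp.raise_for_status()
    candidate = resp.json()["candidates"][0]
    report = {
        "details": description,
        "address": candidate["formatted_address"],
        "location": candidate["geometry"]["location"],   # {"lat": ..., "lng": ...}
        "priority": "high",
    }
    requests.post(DB_URL, json=report, timeout=10)
```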
## Inspiration We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We set up a signal that, if performed in front of the camera, a machine learning algorithm is able to detect, notifying authorities that they should check out this location, whether to catch a potentially suspicious suspect or simply to be present to keep civilians safe. ## How we built it First, we collected data from the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and go with a similar pre-trained algorithm to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API; the fact that the cameras are located very far away; the machine learning algorithms unfortunately being an older version that would not compile with our code; and finally the frame rate of the playback when running the algorithm over the footage. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project. Donya: Getting to know the basics of how machine learning works. Alok: How to deal with unexpected challenges and look at them as a positive change. Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more things, major and minor, that we were able to accomplish this hackathon with incomplete information or none at all. ## What's next for Smart City SOS Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
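The write-up doesn't say what the distress signal actually is, so the sketch below is only an illustration of how a pre-trained pose model could flag one; it treats "both wrists above the nose" as the signal and uses MediaPipe's pose solution as the off-the-shelf model, which may differ from what the team used.

```python
# Illustrative distress-signal check on a single video frame.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def distress_in_frame(frame_bgr):
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return False
    lm = results.pose_landmarks.landmark
    nose = lm[mp_pose.PoseLandmark.NOSE]
    left = lm[mp_pose.PoseLandmark.LEFT_WRIST]
    right = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
    # image y grows downward, so "above" means a smaller y value
    return left.y < nose.y and right.y < nose.y
```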
losing
### Problem and Challenge Achieving 100% financial inclusion, where everyone has access to financial services, still remains a difficult challenge. In particular, a huge percentage of unbanked adults are women [1]. There are various barriers worldwide that prevent women from accessing formal financial services, including lower levels of income, lack of financial literacy, time and mobility constraints, as well as cultural constraints and an overall lack of gender parity [1]. With this problem present, our team wanted to take on Scotiabank's challenge to build a FinTech tool/hack for women. ### Our Inspiration Inspired by LinkedIn, Ten Thousand Coffees, and the Forte Foundation, we wanted to build a platform that combines networking opportunities, mentorship programs, and learning resources on personal finance management and investment opportunities to empower women to manage their own finances, thereby increasing the financial inclusion of women. ## What it does The three main pillars of Elevate consist of a safe community, continuous learning, and mentor support, with features including personal financial tracking. ### Continuous Learning Based on the participant's interests, the platform will suggest suitable learning tracks that are available on the platform. The participant will be able to keep track of their learning progress and apply the lessons learned in real life, for example by tracking their personal financial activity. ### Safe Community The Forum will allow participants to post questions from their learning tracks, share current financial news, or discuss any relevant financial topics. Upon signing up, mentors and mentees must abide by the guidelines for respectful and appropriate interactions between parties. Accounts will be removed if violations occur. ### Mentor Support Elevate pairs the participant with a mentor who has expertise in the area that the participant wishes to learn more about. The participant can schedule sessions with the mentor to discuss financial topics that they are unsure about, or discuss questions they have about their lessons on the Elevate platform. ### Personal Financial Activity Tracking Elevate participants will be able to track their financial expenses. They will receive notifications and analytics results to help them achieve their financial goals. ## How we built it Before we started implementing, we prototyped the workflow with Webflow. We then built the platform as a web application using HTML, CSS, and JavaScript, and collaborated in real time using Git. ## Challenges we ran into * There are a lot of features to incorporate. However, we were able to demonstrate the core concept of our project: to make finance more inclusive. ## Accomplishments that we're proud of * The idea of incorporating several features into one platform. * Deploying a demo web application. * The sophisticated design of the interface and flow of navigation. ## What we learned We learned about the gender gap in finance, and how technology can remove barriers and create a strong, supportive community for all to understand the important role that finance plays in their lives. ## What's next for Elevate * Partner with financial institutions to create and curate a list of credible learning tracks/resources for mentees * Recruit financial experts as mentors to help enable the program * Add credit/debit cards onto the system to make financial tracking easier. Security issues should be addressed.
* Strengthen and implement the backend of the platform to include: Instant messaging, admin page to monitor participants ## Resources and Citation [1] 2020. REMOVING THE BARRIERS TO WOMEN’S FINANCIAL INCLUSION. [ebook] Toronto: Toronto Centre. Available at: <https://res.torontocentre.org/guidedocs/Barriers%20to%20Womens%20Financial%20Inclusion.pdf>.
## Inspiration My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor who didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals. ## What it does Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets, as well as past interactions, to create a deep and objective measure of interaction quality and maximize it through optimal matches. ## How we built it The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a classification solution or neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category. ## Challenges we ran into The group member crucial to building a front end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, which left the software unable to fully work. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as rating clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea. ## Accomplishments that we're proud of Learning a whole new machine learning framework using SageMaker, and crafting custom, objective algorithms for measuring interaction quality while fully utilizing past interaction data during training through an innovative approach to categorical model building. ## What we learned Coding might not take that long, but making it fully work takes just as much time. ## What's next for Resonance Finish building the model and possibly try to incubate it.
## Inspiration In 2020, Canada received more than 200,000 refugees and immigrants. The more immigrants and BIPOC individuals I spoke to, the more I realized they were only aiming for employment opportunities as cab drivers, cleaners, dock workers, and the like. This can be attributed to discriminatory algorithms that scrap their resumes, and a lack of a formal network to engage and collaborate in. Corporate Mentors connects immigrants and BIPOC individuals, as mentees, with industry professionals who overcame similar barriers, as mentors. This promotion of inclusive and sustainable economic growth has the potential to create decent jobs and significantly improve living standards, and can also aid in their seamless transition into Canadian society, thereby ensuring that no one gets left behind. ## What it does It tackles the global rise of unemployment and the increasing barriers to mobility for marginalized BIPOC communities and immigrants, caused by racist and discriminatory machine learning algorithms and a lack of networking opportunities, by providing an innovative web platform that enables people to receive professional mentorship and access job opportunities that are available through networking. ## How we built it The software architecture model being used is the three-tiered architecture, where we are specifically using the MERN stack. MERN stands for MongoDB, Express, React, and Node, the four key technologies that make up the stack: React.js makes up the top (client-side/frontend) tier, Express and Node make up the middle (application/server) tier, and MongoDB makes up the bottom (database) tier. A system decomposition and software architecture diagram detail the interaction of the various components in the system. ## Challenges we ran into The mere fact that we didn't have a UX/UI designer on the team made us realize how difficult it was to create an easy-to-navigate user interface. ## Accomplishments that we're proud of We are proud of the matching algorithm we created to match mentors with mentees based on their educational qualifications, corporate experience, and desired industry (a simplified sketch follows below). Additionally, we would also be able to monetize the website utilizing the freemium subscription model we developed if we stream webinar videos using Accedo. ## What's next for Corporate Mentors 1) The creation of a real mentor pool with experienced corporate professionals is the definite next step. 2) Furthermore, the development of the freemium model (4 hrs of mentoring every month) @ $60 per 6 months or $100 per 12 months. 3) Paid webinars (price determined by the mentor, with 80% going to them and 20% taken as a platform maintenance fee). 4) Create chat functionality between mentor and mentee using Socket.io and add authorization to the website to limit access to the chats from external parties. 5) Create an area for the mentor and mentee to store and share files.
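The simplified matching sketch referenced above scores each mentor against a mentee on the three stated criteria; the weights and field names are assumptions made for illustration, not the team's actual formula.

```python
# Toy mentor-mentee matching score over industry, education, and experience.
def match_score(mentee, mentor):
    score = 0.0
    if mentee["desired_industry"] == mentor["industry"]:
        score += 3.0                                      # industry fit weighted highest
    if mentee["education_level"] == mentor["education_level"]:
        score += 1.0
    score += 0.5 * len(set(mentee["skills"]) & set(mentor["skills"]))
    score += min(mentor["years_experience"], 10) / 10     # capped seniority bonus
    return score

def best_mentors(mentee, mentors, top_n=3):
    return sorted(mentors, key=lambda m: match_score(mentee, m), reverse=True)[:top_n]
```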
winning
## Inspiration Throughout history, we've invented better and better ways of interfacing with machines. The **mouse**, the **VirtualBoy**, the **nub mouse thing** on ThinkPads. But we weren't satisfied with the current state of input devices. As computers become more and more personal and more and more a part of our daily lives, we need better, **more efficient** ways of interacting with them. One of the biggest inefficiencies in modern desktop computing is moving your hands from the keyboard to the track pad. So we got rid of that. Yup. Really. Introducing **asdfghjkl** , a revolution in computer-human interface design patterns, by team "Keyboard as Trackpad". [http://asdfghjkl.co](http://www.asdfghjkl.co) ## What it does Hold down the *Control* key and run your finger across the keyboard. Watch as the mouse follows your commands. Marvel at the time you saved. Hold down the *Shift* key and run your finger across the keyboard. Watch as the page scrolls under your command. Marvel at the time you saved. ## Challenges I ran into Using my computer. ## Accomplishments that I'm proud of Using my computer. ## What I learned How to better use my computer. ## How I built it After getting the MVP working, I exclusively used **asdfghjkl** for navigation and input while developing the app. It's built in Swift 2.0 (for the easy C interoperability) and partially Obj-C for certain functions. ## What's next for **asdfghjkl** Apple partnership is in the works. NASA partnership is going smoothly; soon the inhabitants of the ISS will be able to get more done, easier, thanks to mandatory **asdfghjkl** usage. ## Info The correct way to pronounce **asdfghjkl** is "asdfghjkl". Please don't get it wrong. Additionally, the only way to type **asdfghjkl** is by sliding your finger across the entire home row. Just don't hold *Control*, or your mouse will fly to the right!
## Inspiration We wanted to create an interactive desktop with the concept of space involved just as in the days before computers became a common workplace tool. We went for the futuristic approach where icons and files can be grabbed and interacted with in virtual reality. ## What it does It places the user in a 3D virtual reality environment, and provides icons that can be interacted with via hand gestures. Different gestures manipulate the icons in different ways and gesture to action control is covered by scripts inserted into Unity ## How I built it We attached a Leap-Motion sensor to the front of an Oculus Rift, and ran the program through Unity and scripts with C#. The sensor is responsible for input from the arms and the rift itself creates the environment. ## Challenges I ran into We ran into major hardware compatibility issues that were compounded by sdk issues between the hardware. Furthermore, on our rented Alienware laptop we couldn't install all the sdks until later on in the project because we didn't have administrative rights. Furthermore, the documentation pages and tutorials were at times different, with small updates to names of functions that we had to figure out. ## Accomplishments that I'm proud of -Linking the hardware -Figuring out gesture controls -Designing the landscape -Getting icons to interact ## What I learned -Never give up on an idea -VR is cool ## What's next for INvironment -Fleshing out further ideas -Adding more features -Smoothing and improving interaction
## Inspiration As a mobile developer, I was sick of coding user interfaces. These days, if you're creating a chat app, for example, you'll probably find by far the longest part is designing and coding the user interface. I wanted to find a way to speed this process up. ## What it does UICode is an iPad app that has a mobile screen as a canvas. You can drag and drop mobile elements such as Views (UIView), buttons (UIButton) and images, then quickly align and adjust their properties, such as color, corner radius, and shadows. Next, you can resize the mobile canvas to see how your UI will respond in real time to different phone sizes. This saves developers a lot of time, as previously, one would have to essentially "guess" the coordinates of an element they were placing, then wait for their code to compile to see the result. ## How I built it UICode is built in pure Swift using Xcode. I made use of a few notable iOS libraries and frameworks, such as SnapKit, and a few smaller open source projects for managing minor aspects such as gradients. ## Challenges I ran into I ran into quite a few challenges. Originally, UICode was intended to be a Mac app, similar to the UI design program Sketch; however, I found developing for iOS to be far easier for me, so I went with an iPad app instead. Another challenge was coding the logic for aligning elements with each other. It became fairly complicated: as a user drags an element, UICode has to determine when and where it is appropriate to snap a view. ## Accomplishments that I'm proud of I'm proud that I've taken an idea that I wanted to exist, and brought it into existence as a tangible product. This is a product I am going to be happy to use for my app development. ## What I learned I learned a lot about constraints and handling views with gestures. UICode has a UIScrollView as its canvas, and inside that it has multiple elements that can be dragged around and manipulated. This created a level of complexity in terms of which gesture should be triggered when (e.g. scrolling the view or dragging an element), which I had to learn to navigate to ensure a smooth user experience. ## What's next for UICode I plan to keep adding features to UICode and to release it on the App Store. It will be free to use; however, if you want to export your UI designs to code, you will have to purchase a subscription.
losing
## Inspiration Philadelphia, like many urban cities, is grappling with rising temperatures due to climate change, industrialization, and the urban heat island effect. We noticed that extreme heat is making it unsafe for many communities, especially during summer months. Chilladelphia was inspired by the need to provide residents with real-time resources and actionable insights to help them stay cool and safe. ## What it does Help cool down Philly! The main page features a heat map that visually highlights the hottest and coolest areas around Philadelphia. By entering your address, you can instantly see how “chill” your neighborhood is. Using our computer vision algorithm, we analyze the ratio of greenery in your area, giving you a personalized chill rating. This rating helps you understand the immediate state of your environment. Chilladelphia goes beyond just information: it provides actionable suggestions like planting trees, painting rooftops lighter, and other eco-friendly tips to actively cool down your community. Plus, you can easily find nearby cooling centers, water stations, and shaded areas to help you beat the heat on the go. ## How we built it We built Chilladelphia with a strong focus on user experience and seamless access to location-based data. For user authentication, we integrated **Propel Auth**, which provided a quick and scalable solution for user sign-ups and logins. This allowed us to securely manage user sessions, ensuring that personal data, like location preferences, is handled safely. On the frontend, we used **React** to create a dynamic and responsive user interface. This enabled smooth interactions, from entering an address to viewing real-time temperature and air quality updates. To style the app, we utilized **Tailwind CSS**, which allowed us to rapidly prototype and design components with minimal code. **Axios** was implemented for handling API requests, efficiently fetching environmental data and user-specific suggestions. The frontend also leverages **React Router** to manage navigation, making it easy for users to explore different parts of the app. For the backend, we set up a **Node.js** server with **Express** to handle API requests and data routing. The core of our data storage is **MongoDB**, where we store geospatial information like cooling center locations and tree-planting sites. MongoDB’s flexibility allowed us to efficiently store and query data based on the user’s location. We also integrated external APIs to get coordinates and map data. To manage authentication securely across both the backend and frontend, we utilized **Propel Auth** to handle user session tokens and login states. For the data generation, we used Python to compile images of University City by downloading sections of it from satellite imagery. We then used DetecTrees, a Python library that uses a pre-trained model to identify tree pixels from aerial images. We were then able to calculate what percentage of the image was green space to give users an idea of how green the area around them is. ## Challenges we ran into One of the biggest challenges was getting high-resolution satellite imagery that would work well for our purposes. After testing out over 5 different APIs, we ended up having to wrap a Google Maps scraper, which worked best for our needs. ## Accomplishments that we're proud of We’re proud of creating a solution that can have a real impact in our neighboring Philly communities.
The recent heat waves in the northeast have been dangerous and put our peers and community at risk, and we are excited to take steps in the right direction to mitigate the issue. ## What we learned We've expanded our tech stack -- several of us used MongoDB, Express.js, PropelAuth, and many other tools for the first time this weekend. ## What's next for Chilladelphia Next, we plan to scale Chilladelphia by integrating more data - we had limited storage in our database and weren't able to cover as much of Philly as we wanted to, but we hope to do more in the future! We also want to partner with local governments and environmental organizations to further expand the app's resource database and promote city-wide efforts in cooling down Philadelphia.
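The greenery analysis described above comes down to measuring what fraction of a satellite tile is tree/green pixels. Below is a minimal sketch of that ratio calculation; a plain HSV green threshold stands in for the tree mask that the pre-trained DetecTrees model would produce, so treat it as an illustration rather than the team's pipeline.

```python
# Minimal sketch: estimate how "green" a satellite tile is.
# A simple HSV threshold stands in for the tree mask that a pre-trained
# tree-detection model (as described above) would produce.
import numpy as np
from PIL import Image

def green_ratio(tile_path: str) -> float:
    hsv = np.asarray(Image.open(tile_path).convert("HSV"), dtype=np.float32)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Hue roughly in the green band (PIL scales hue to 0-255), with enough
    # saturation and brightness to exclude grey rooftops and shadows.
    green = (h > 60) & (h < 130) & (s > 40) & (v > 40)
    return float(green.mean())

# e.g. a "chill rating" could scale this ratio to 0-100
print(round(green_ratio("university_city_tile.png") * 100, 1))  # hypothetical tile file
```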
## Inspiration 🌱 Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness. ## What it does 🌎 Inspired by BeReal, the most popular app in 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Everytime you make a sustainable choice, snap a photo, upload it, and you’ll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability and even claim prizes once you have enough points 😍. ## How we built it 🧑‍💻 We used React with Javascript to create the app, coupled with firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo. ## Challenges we ran into 🥊 One of our biggest obstacles was settling on an idea as there were so many great challenges for us to be inspired from. ## Accomplishments that we're proud of 🏆 We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team and we’re all leaving uOttahack with new capabilities. ## What we learned 💚 Everyone was able to work with new technologies that they’ve never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, firebase, and working with routers in react. ## What's next for BeGreen ✨ It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We’d also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
## Inspiration The inspiration for our project stemmed from a mild, first-world frustration with modern weather websites. Sure, one glance at the temperature can tell you a lot about the weather, but there are so many other factors that we commonly ignore which can also affect our day. What if it's unbearably humid? Or, what if there's an on-and-off chance of rain throughout the day and you don't know how to prepare? These questions and more led us to develop a web application that not only provides you with your typical hourly forecast, but also processes that data into more meaningful and digestible information about your day. ## What it does Our application first retrieves an hourly forecast from OpenWeatherMap's One-Call API. It displays the forecast for the next 12 hours on the left side of the webpage, and it also processes the raw weather data into simpler chunks. Each chunk contains highlights about the weather for a three-hour interval, and dynamically provides advice on how to best prepare for the day. ## How we built it We used ReactJS and the React-Bootstrap library for the frontend of our application. For the backend, we used Node.js, Axios for API requests, and OpenWeatherMap's APIs. ## Challenges we ran into The most difficult portion of our project was processing the data from each API call. This was the first time that any of us have worked with APIs, so finding the right data, refining it, using it efficiently, and displaying it was a hurdle in our workload. In addition to this, we had to complete lots of research on how to prepare for different types of severe weather conditions which also occupied a large portion of our time. ## Accomplishments that we're proud of and what we learned We are very proud that we started this project with little knowledge about APIs, and in the end, we were able to manipulate the API data however we liked. We are also proud of the sleek, single-page design of our application and its overall aesthetic. ## What's next for Smart Day Given the time constraints, we were not able to fulfill some of our goals. The next steps for our project includes reverse geocoding the user's location to display their city/region, allowing the user to select their location from a database, and displaying a larger variety of tips for different weather conditions.
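The forecast-processing step described above (collapsing hourly data into three-hour highlight chunks with advice) can be sketched roughly as follows. The field names mirror OpenWeatherMap's hourly entries, but the thresholds and advice strings are illustrative assumptions.

```python
# Rough sketch: collapse an hourly forecast into 3-hour "highlight" chunks.
# `hourly` is a list of dicts shaped like OpenWeatherMap's hourly entries
# (temp in Celsius, pop = probability of precipitation, humidity in %).
def summarize(hourly: list[dict], hours: int = 12, step: int = 3) -> list[dict]:
    chunks = []
    for start in range(0, hours, step):
        window = hourly[start:start + step]
        temp = sum(h["temp"] for h in window) / len(window)
        rain = max(h["pop"] for h in window)
        humid = max(h["humidity"] for h in window)
        advice = []
        if rain > 0.4:
            advice.append("bring an umbrella")
        if humid > 80:
            advice.append("expect it to feel muggy")
        if temp > 30:
            advice.append("stay hydrated")
        chunks.append({"hours": f"+{start}h to +{start + step}h",
                       "avg_temp": round(temp, 1),
                       "advice": advice or ["no special prep needed"]})
    return chunks
```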
winning
## Inspiration We were inspired by issues we had ourselves with COVID-19 - we learned a ton about tech and coding and wanted to share our passion with others in software development, but hit a wall in finding others like ourselves. We were all too familiar with recruiting someone for a hackathon, only to have them ghost the team and leave us scrambling last minute, or finding others to work on an open-source project with, only to end up making all of the contributions ourselves. Simply finding someone with similar interests was difficult - you could go to hackathons, but what if you want to work more long-term? Enter Open4Collab - our solution to this problem. COVID-19 has taught many a new skill - but at the same time, made it apparent that finding dedicated collaborators on projects is difficult. Current social media is wonderful for meeting new people, but so dependent on first impressions - are they from your hometown, and do they have a pretty face? Open4Collab takes the first impression out of meeting a new person and focuses on what really counts - their projects. Whether it’s learning together, building the next big startup, or looking for developers to start an open-source project, Open4Collab uses machine learning to cluster similar projects with you. ## What it does Open4Collab is a platform that, unlike current social media, allows for more active collaboration and in turn creates a lot more engagement. It does this by using the skills and interests you've listed to cluster you with projects that you'd want to work with. It also works the other way: you can create a project where you require people with certain skills and get those people! Our model is easy to understand; it's not a black box like the ones seen on so many sites today, and you know what data you're giving. We want you to be aware that your skills are used with a k-means model in order to find projects that fit your specific interests and skill set. To understand which technologies are related, we downloaded all of StackOverflow's tags to find the correlation between them. If two technologies had questions asked together, they were more similar and should be matched together. Based on this, we generate a list of suitable projects for you, and you can then contact each project's owner. We believe social media should engage with people's lives in a positive way, and it can do that by being a simple and transparent tool that encourages people to collaborate and connect. ## How we built it In order for a chance at the @ Company prize, Open4Collab was built using the Flutter UI framework, the @ platform, and Firebase. The @ platform was used to give everyone a unique @sign. Cloud Firestore was used to store project data and handle real-time updates. This ensured that our application would scale and stay responsive even with massive amounts of project data. The model used correlations between StackOverflow tags: if a question contained tags for two technologies, they were deemed similar. Each set of technologies specified by the user was given a similarity score, which was then minimized to give a more relevant suggestions page. This was deployed as a Flask API through App Engine. ## Challenges we ran into Using the @ protocol and a service like Firebase together while ensuring that the @ Company's beliefs are still respected was a challenge. We planned out the app so that user data is not stored in the cloud and is instead managed using the @ platform; however, relevant data used for the clustering on GCP still has to be stored there.
We would have preferred a solution that utilized GCP Cloud Functions tied to the @ platform in a permission-based manner, but could not find support for this in the limited time. ## Accomplishments that we're proud of Integrating the @ platform with our project was difficult, but we kept working on it even after recording our demo and we eventually got it working (see project media). Also, being able to successfully set up a Flutter UI and connect Firebase is something we're proud of. Most of us were not familiar with Flutter but now have a better understanding of its purpose and why it's growing in popularity. ## What we learned We learned about Flutter and gained a better understanding of the Google Cloud Platform. ## What's next for Open4Collab - Social Media for Developers We'll improve the UI and add more features based on user feedback. When we started, we set out with the task of making a platform that makes it possible to find other dedicated users. We would love to add a feedback/rating system to facilitate this.
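The clustering step can be illustrated with a small sketch: represent each project as a vector over technology tags, run k-means, and suggest projects that fall in the same cluster as the user's skills. The tags and projects below are placeholders, and the Stack Overflow co-occurrence weighting is omitted for brevity.

```python
# Minimal sketch of matching a user to projects via k-means over technology tags.
# The tag list and example projects are placeholders; in the project the
# similarity weighting came from Stack Overflow's tag data.
import numpy as np
from sklearn.cluster import KMeans

tags = ["python", "flask", "react", "firebase", "flutter"]

def to_vector(selected: set[str]) -> np.ndarray:
    # One-hot encode a set of technologies over the known tag vocabulary.
    return np.array([1.0 if t in selected else 0.0 for t in tags])

projects = {
    "ml-api": {"python", "flask"},
    "mobile": {"flutter", "firebase"},
    "webapp": {"react", "firebase"},
}
X = np.stack([to_vector(s) for s in projects.values()])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

user = to_vector({"python", "flask"})
cluster = km.predict(user.reshape(1, -1))[0]
suggestions = [name for name, label in zip(projects, km.labels_) if label == cluster]
print(suggestions)  # projects in the same cluster as the user's skills
```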
## Intro Have an upcoming exam? Planning a wedding? Or have a tight deadline? "I'm really stressed right now! Help!" Well, with LavÜ, we've gotchyou! ## Inspiration After experiencing stressors in everyday life and speaking with members of the community, we found that many people experience stress physically—particularly, through tightness in the neck and shoulders. Research backs this up. A study by Jacqueline Wijsman et al. published in Wireless Health found "significantly higher amplitudes of the EMG signals [from the Trapezius muscles] during stress compared to rest and fewer gaps (periods of relaxation) during stress," making it a useful indicator of stress in real time. ## What it does **LavÜ Device**: A **wearable electromyogram (EMG) sensor** that monitors muscle tension and sends **haptic feedback**. By analyzing trends in muscle tension over time, LavÜ detects changes in stress levels. Using this data, we can notify users through a gentle tap if it's time to do haptic-assisted breathing exercises or take a break. **LavÜ App**: In addition, the LavÜ App, displays a chart of your stress levels over time throughout the day. The app provides features to take care of your mental health reducing your stress levels such as providing journal entries, nutritional values, breathing exercises, and more. ## How we built it LavÜ is powered by the Nicla Sense ME microcontroller and a LiPo battery, while a Gravity EMG Sensor takes measurements of muscle tension. A DRV2605 Haptic Driver assists in generating gentle vibrations. The device itself is enclosed in a flexible 3D printed chassis, which is sewn into clothes for comfort. ## Challenges we ran into Creating a hardware project results in many practical challenges. * Powering the device (working with multiple power sources) * Mounting the device - ensuring that proper contact is made between the sensor and the body * Writing code to interpret data from the EMG * Developing haptic breathing sequences ## Accomplishments that we're proud of We are extremely proud of the ability to record data that can be converted into stress levels in real time. Additionally, we were able to design an interactive prototype of our envisioned app that goes hand-in-hand with the device. ## What we learned Working on a project that requires both technical aspects of both hardware and software requires a strong understanding of how we can integrate the data together. Coming from different backgrounds, we learnt how to collaborate cohesively and efficiently to build this project. We also have a better understanding of the users we design for and the various forms of stress relieving exercises. We have thoroughly enjoyed this project as it has given us multiple perspectives of technology. ## What's next for LavÜ * Create greater awareness of points system that can provide positive reinforcement * ML model that can learn stress patterns from each individual for personalized detection and feedback * Include better accessibility software on different mobile applications Try out LavÜ for a better you!
## Inspiration A great mentor can be instrumental for us to grow as a person in life. A mentor can be anyone, someone giving us career advice, or someone helping us learn to cook the most delicious meal we have ever eaten. We often interact with people and talk about our skills, or how they could help us learn new skills. Although we are living in a digitized world, a certain knack can only be discovered via personal interactions. Most of us learn more from real mentors as compared to online videos. Especially for students, stay at home parents and elderly people, we believe that a real-life mentor would be far more impactful. Many times it is difficult to register for online or group courses because they are extremely costly and require long term commitment, while we might just need a few hours of mentoring in order to acquire a new skill. With this in mind, we have created an application to help people connect and learn from each other and thus grow as a society, without the financial and logistic barriers. Google democratized information, we wish to democratize skills. ## What it does A centralized platform where users connect and network with talented people & skill teachers in the local community. The application enables a user to indicate interests, connect with the right people in the locality and receive personalized training from talented people as well as share their skills with others. It is different from other platforms as most of them lack the personal connection and specificity, are mostly career focused, or oriented towards learning technology and fail to establish personal connections. Our platform also rewards the mentors in credits, which can then be used to schedule a session as a mentee. Thus, user retention is maintained via a continuous sharing of knowledge and skills. The user is always informed of their progress and activity on the platform with the help of lucid data visualizations. ## How we built it We have built our application using React and Firebase. We have used Nivo for data visualization. Firebase has also been used for hosting the application and user authentication. ## Challenges we ran into Being newbies in full-stack development, initially, designing and stitching things together was the challenge. As we progressed we faced gradual impediments and questions that led to some more iterations than expected. ## Accomplishments that we're proud of We are proud of the impact the application can have on society. With SkillEd, we present before you a platform, which brings a personal touch to knowledge sharing in this world where everyone is desperately looking into a black mirror. We believe that easier access to education and knowledge sharing is important for any society to thrive. With our application, we strive to bring technology to education and mentorship for a better future. ## What we learned We learned a lot of things while building this app: 1. Web app development 2. Hosting web apps 3. Front and Back end development 4. And definitely Karate! ## What's next for SkillEd Our goal is to break down obstacles to non-traditional skills-based education, reinvigorate traditional educational platforms by promoting skill diversification and support mentorship and networking between community members. The journey doesn't end here. We aim to take SkillEd to another level and make it running in production. There are a bunch of things that we would like to focus on. 
* Develop a system in which users can transact with SkillEd credits * Integration with GeoSpatial API to discover local mentors/mentees easily * Intelligent mentor discovery with machine learning * Recommendation system for skills, venues, and people * Android and iOS applications * Integration with Google calendar * Recommend resources (Amazon marketplace)
partial
## Inspiration The world of crypto is very flashy. As blockchain innovation becomes more popular, being creative requires more complicated and nuanced features. But what about those who experience the web differently? For the visually impaired, flashy and complicated crypto solutions hardly matter. What matters is an accessible, easy-to-use wallet management solution. ## What it does SightChain allows users to manage Bitcoin wallets with nothing but their voice. By taking a phone call, users can send Bitcoin to an address of their choosing and manage their wallet balance. ## How we built it SightChain was built using Dasha AI and Blockchain.com's Wallet API in JavaScript. ## What we learned Taking a moment to consider everybody can bring forth ideas that can have a huge impact on people's lives. We learned that innovation doesn't have to mean never-before-seen features or gimmicks. Innovation can be as simple as using your skills to help people. ## What's next for SightChain SightChain's next steps include more cohesiveness between the wallet hosting and running Dasha and more accessible features over the phone. ## Trying SightChain Instructions to run SightChain are available on the readme page on GitHub.
## Inspiration As we sat down to brainstorm ideas for our next project, we were struck by a common thread that connected all of us. Each one of us had a family member who suffered from some form of visual impairment. It was a heart-wrenching reminder of the challenges that these individuals face on a daily basis. We shared stories of our loved ones struggling to read books, watch movies, or even navigate through everyday tasks. It was a deeply emotional conversation that left us feeling both empathetic and determined to make a difference. According to the World Health Organization, approximately 2.2 billion people worldwide have a vision impairment or blindness, with the majority of cases occurring in low and middle-income countries. The impact of visual impairment is far-reaching and significantly affects various daily activities such as reading, recognizing faces, navigating unfamiliar environments, and accessing information on digital platforms. This problem is valid, and it needs to be addressed to enhance the quality of life of those affected. We are passionate about developing a solution that will make a meaningful difference in the lives of those affected by visual impairment. Our project is inspired by personal experiences and fueled by a desire to make a real-world impact. We believe that everyone deserves equal access to information and the ability to participate fully in daily life. By addressing the challenges of visual impairment, we hope to create a more inclusive world for all. ## What it does The product aims to bridge the gap for individuals with limited vision to experience the world around them. It helps individuals with visual impairments to perform various daily activities that are otherwise challenging, such as reading, recognizing faces, and navigating unfamiliar environments. It also assists in accessing information on digital platforms. The product can be particularly helpful for those who face barriers in accessing healthcare services due to their visual impairments. It can aid in reading prescription labels, understanding medical instructions, and navigating healthcare facilities, especially for older individuals who are aging. ## How we built it In our project, we leverage cutting-edge computer vision techniques to interpret the surrounding environment of individuals with visual impairments. By utilizing advanced algorithms and neural networks, we process real-time visual data captured by a camera, enabling us to identify and analyze objects, obstacles, and spatial cues in the user's surroundings. We integrate state-of-the-art language models and natural language generation powered by Wisp AI software to bridge the gap between the interpreted world and the user. This allows us to generate detailed and contextually relevant descriptions of the environment in real time, providing visually impaired individuals with comprehensive auditory feedback about their surroundings. Additionally, our solution extends beyond descriptive capabilities to enhance accessibility in public transportation. By leveraging the interpreted environmental data, we develop guidance systems that assist users in navigating through streets and accessing transportation hubs safely and independently. For efficient and scalable deployment of our model, we utilize Intel's AI environment, leveraging its robust infrastructure and resources to host and optimize our machine learning algorithms. 
Our system architecture is implemented on a Raspberry Pi embedded platform, equipped with a high-resolution camera for real-time visualization and data capture. This combination of hardware and software components enables seamless integration and efficient visual information processing, empowering visually impaired individuals with enhanced mobility and independence in their daily lives. ## Challenges we ran into As beginners in machine learning, we faced the tough challenge of setting up a machine learning model on a Raspberry Pi and connecting it to a camera, which was quite difficult to learn. Moreover, we had to figure out a way to train our model not only to understand text but also to recognize public transportation and calculate the distance to a bus entrance, which was quite a task. Adding our Intel-AI environment to the project made things even more complicated. Additionally, finding an affordable solution that could be easily accessible to people all around the world was a significant obstacle that we had to overcome. ## Accomplishments that we're proud of Through this process of building a hardware product from scratch and learning how to use raspberry pi with computer vision, we not only gained technical knowledge but also learned how to work as a team. There were challenges and obstacles along the way, but we figured it out by collaborating, communicating, and leveraging each other's strengths. It was a great learning experience, and we are proud of what we have achieved together. ## What we learned We learned about LLM, real-time text analysis, real-time text comprehension, and implementation of text-to-speech. ## What's next for True-Sight With the growing potential of Artificial Intelligence, our idea of True-Sight is expanding to include not only text recognition but also the ability to detect surroundings, which could greatly benefit public transportation users who rely on finding stops and navigating their way onto the correct buses/trains. After further development, True-Sight could potentially allow users to locate their desired stop and use environment detection to guide them towards the door with specific step-by-step instructions. In addition, we aim to make True-Sight accessible to children who are visually impaired, so they can have an immersive learning experience. Adding sensors and custom software will also allow for a more personalized and relatable experience with the AI assistant.
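A stripped-down version of the read-aloud loop described above (grab a frame, extract text, speak it) could look like the sketch below. It uses pytesseract and pyttsx3 as stand-ins for the full pipeline; the camera index and loop structure are assumptions rather than the team's actual code.

```python
# Minimal sketch of a "read what the camera sees" loop.
# Requires: opencv-python, pytesseract (plus the Tesseract binary), pyttsx3.
import cv2
import pytesseract
import pyttsx3

engine = pyttsx3.init()
cam = cv2.VideoCapture(0)  # on a Raspberry Pi this would be the attached camera

try:
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()
        if text:
            engine.say(text)        # speak the recognized text aloud
            engine.runAndWait()
finally:
    cam.release()
```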
## Inspiration Cryptocurrency is the new hype of our age. We wanted to explore the possibilities of managing Cryptocurrency transactions at the tips of our fingers through social media outlets. At the same time, we wanted to tackle the problem of splitting bills when we eat out with friends, through sending Ethereum to settle payments. ## What it does Our bot has 6 main commands that can be used after setting up with your Facebook account & the public key of your EtherWallet via cryptpay.tech and installing the application to your local computer: * /send - sends a set amount to designated user. * /confirm - accepts payment on receiver's end. * /split - splits bill to number of people in chat. * /dist - distributes amount per person. * /receipt - takes picture of receipt and splits bill based on user's prompts. * /sell - sells amount to market. We use these commands on the FB chat to facilitate real time transactions. ## How We built it With security in mind and developing around the spirit of decentralization - a user's wallet/private key never leaves their computer. The architecture of the entire project, as such, was more difficult than your average chatbot. There are 3 main components to this project: ##### Local Chatbot/Wallet If we hosted a central chatbot that managed everyone's funds, that would have destroyed the purpose of using cryptocurrency as our medium. As such, we developed a chatbot/wallet hybrid that allows users to have the full functionality of a server-sided bot, right in their hands and in control. We had the user input their wallet details, and by using offline transaction signing, users are not required to run a full Ethereum node but still interact with the blockchain network using Messenger. ##### CryptPay.tech Lets say `Person A` wants to send a payment of $10 to `Person B` using CryptPay. `Person A` will have to send a transaction to `Person B`'s public key (which can be thought of as their house address). CryptPay.tech allows friends to find each other's public keys, without even asking for them beyond the one-time setup. This means, you don't have to ask for their email address nor their long hexadecimal public key. We do it for you. ##### Receipt Scanning + Other Features Any user can use the /receipt command to prompt the receipt bill splitting function. CryptPay will ask the user to take a photo of their recent receipt transaction and analyze the purchases. Using the Google Vision API and Tesseract OCR API, we are able to instantaneously identify the total amount of the purchase. The user can then use /split to equally distribute the bill to each member in the chat. ## Challenges We Ran Into Originally, we contemplated creating a messenger bot for transactions with real money. However, this elicits substantial security issues, since it is not secure for third parties to hold people's private banking information. We spoke with representatives from Scotiabank about our concerns and asked for other possible issues to tackle. After discussion, we decided to use Cryptocurrency transactions because they bypass the Interac debit system and everything is fluid. 
## Accomplishments that We're Proud of * Learning how to use the Facebook Messenger API * Creating and packaging a full node application for end-users * Learning to architect the project in an unconventional way * Exploring REST * Setting up fluid transactions with Ethereum * Having a fully functional prototype within 24 hours * Creating something that is easy to use and that everyone can use ## What's next for CryptPay * Adding more crypto coins * Adding the ability to cancel a payment you've sent * Adding a command for market research
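Offline transaction signing, as described in the architecture above, keeps the private key on the user's machine: the transaction is signed locally and only the signed bytes are broadcast. A minimal web3.py sketch is below; the RPC URL, addresses, key handling, and gas values are placeholders, and attribute names differ slightly between web3.py versions.

```python
# Minimal sketch of offline transaction signing with web3.py.
# The private key never leaves this machine; only the signed bytes go out.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-eth-node.invalid"))  # placeholder RPC endpoint

private_key = "0x..."  # loaded locally from the user's wallet, never sent anywhere
sender = w3.eth.account.from_key(private_key).address

tx = {
    "to": Web3.to_checksum_address("0x" + "11" * 20),  # recipient address (placeholder)
    "value": w3.to_wei(0.01, "ether"),
    "gas": 21000,
    "gasPrice": w3.to_wei(20, "gwei"),
    "nonce": w3.eth.get_transaction_count(sender),
    "chainId": 1,
}

signed = w3.eth.account.sign_transaction(tx, private_key)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # raw_transaction in newer web3.py
print(tx_hash.hex())
```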
losing
## Inspiration Our inspiration for this project was the vast number of individuals in the world who struggle with, or are physically incapable of, cleaning various surfaces and hard-to-reach areas in their house, such as under cabinets or on tables. Our project was meant to be a way to make cleaning easier and more accessible to everyone. ## What it does Our project acts as an intuitive and simple way to clean hard-to-reach areas in one's household. To operate, the user simply puts on the Smart Glove Controller, which is equipped with a finger-based flex sensor and a 3D gyroscopic sensor. Together, these sensors detect the angle of the user's hand, to determine the motion of the cleaner, and the bend of the user's index finger, to toggle the spinning sponge. The Cleaning Robot itself is equipped with a spinning sponge, which can be used to dust and clean hard-to-reach places. ## How we built it The project consists of 3 main integrated builds: the Smart Controller Glove, the Remote Rover, and the Wireless Transceiver Communication Modules. **The Smart Controller Glove** The glove controller centers primarily around two Arduino components: a flex sensor and an MPU6050 gyroscope/accelerometer module. The MPU6050 takes in raw data about its orientation in space, such as angular velocity and directional acceleration. Using these values, an Arduino Nano microcontroller performs calculations involving trigonometry and integration to determine the angle of the gyroscope. This angle has corrections applied to it before being transmitted out. The flex sensor determines when the user's index finger is bent, and sends a signal accordingly. **The Remote Rover** There are 3 motors on the rover: 2 hobby driving motors and a single sponge motor. The two hobby motors are connected through different pins on an L298N motor driver to an Arduino Uno, and are fed information based on whether the desired direction is forward, backward, right, left, or stop. The motors are fed an analog signal which determines the desired speed. The motor for the sponge is controlled through a transistor and a diode to limit the feedback voltage into the same Arduino Uno as the hobby driving motors. The motor is then fed a value which controls the speed at which it rotates. **Wireless Transceiver Communication Modules** The wireless transceivers established a connection between a one-way transmitter on the glove and a one-way receiver on the rover. After determining the pieces of data that need to be transmitted to control the rover (namely, the gyroscope roll and pitch, and the flex sensor data), we send the sensor data through the channel. After the receiver obtains the sensor data, the rover does calculations to determine what signals to send to the motor controllers. ## Challenges we ran into The mechanical features of our rover posed a substantial challenge throughout the hackathon. At first, we needed male-to-female jumper wires to connect a large number of components (such as the radio and the motor controller). This resulted in a lot of wiring that had to be managed on the relatively small chassis. In addition, implementing the motor controller in a way that keeps the wheels balanced was also difficult, as we did not have an axle to ensure the wheels moved together. This meant we had to determine the correct ratio to ensure our rover could drive forward in a straight line.
Additionally, finding a proper correction ratio for the gyroscope angle proved challenging, and required numerous iterations and adjustments. ## Accomplishments that we're proud of We are proud to have completed our full plan, without having to cut aspects of the design. Additionally, we are proud of how well-coordinated our team was throughout the process. As opposed to having trouble trying to integrate our components, our team coordinated and communicated throughout the 24-hour design process, which allowed for seamless integration: it took less than 20 minutes to connect the 3 major components together. Additionally, this was our first makeathon, and we are very proud of our success in designing something challenging, yet very rewarding. ## What we learned As ECE students, it was definitely a challenge taking the mechanical components into account when creating our physical models. Working with the rover, there were many issues getting the wheels to move on the proper axis. With limited equipment, we had to connect our components in a way that distributed the correct amount of voltage to each part. There was a lot of documentation, open-source code, and datasheets available to us that we tested with. Using physics to calculate data in 3D space was an unexpected aspect that we took on during the process. We learned end-to-end hardware-to-software integration using Arduino. ## What's next for HandiClean If we continue this project, there are a few major changes we would make: We would change the chassis of the rover to allow the connections to be made in a much smoother and more balanced manner. This way, we would not need to worry about weight distribution as much, and the wheels and motors could be placed in more convenient locations. Additionally, we would use a larger glove for our controller, as it would allow more room to place our components. A minor issue with the glove was that it was very cramped, which made it difficult to debug any wiring issues and a bit difficult to work with.
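The glove's angle calculation (combining the MPU6050's gyroscope and accelerometer readings with trigonometry and integration) is typically done with a complementary filter. The sketch below shows the idea in Python; the sensor-reading function, filter coefficient, and axis conventions are assumptions, not the team's embedded C code.

```python
# Sketch of a complementary filter for pitch/roll from MPU6050-style readings.
# read_imu() is a hypothetical stand-in for reading the sensor over I2C; axis
# conventions depend on how the module is mounted on the glove.
import math
import time

def complementary_filter(read_imu, alpha=0.98):
    pitch = roll = 0.0
    last = time.time()
    while True:
        ax, ay, az, gx, gy, gz = read_imu()   # accel in g, gyro in deg/s
        now = time.time()
        dt, last = now - last, now
        # Angle from the accelerometer alone (noisy but drift-free).
        acc_pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
        acc_roll = math.degrees(math.atan2(ay, az))
        # Integrate the gyro (smooth but drifts), then blend with the accel estimate.
        pitch = alpha * (pitch + gx * dt) + (1 - alpha) * acc_pitch
        roll = alpha * (roll + gy * dt) + (1 - alpha) * acc_roll
        yield pitch, roll
```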
## Inspiration Both of us have an interest in the mechatronics field and thought it would be fun to do something together over the reading week. The idea for the hand came about as we thought it would be a good challenge for us to try and work through while still being attainable. ## What it does The project is in 2 main parts. One part is a glove that is worn with flex sensors attached along the fingers. The Arduino Nano's analog pins read the change in voltage as the flex sensor bends and determine where the finger is. The second part, a 3D-printed hand controlled by 5 servos, then takes over. The servos pull each 3D-printed finger individually to mimic the motion of the gloved hand in real time. ## How we built it Each of us played to our strengths. The mechanical engineering student worked on designing the hand in SolidWorks, while the ECE student worked on the circuitry that would control the setup. Once the 3D model of the hand was complete, it was printed out and assembled along with the rest of the parts. ## Challenges we ran into Turning the idea of a hand into an actual working model was harder than expected. Who knew the hand had so many joints. Additionally, printing bendy things is not a strong suit of 3D printers, and getting the hand to go and then stay together was a struggle. We were able to get it to work, however, but would change the design if we were to do it again. ## Accomplishments that we're proud of We are simply proud that we are able to demo the project. Neither of us has a ton of experience and it was just nice to go through a project in a couple of days and get it done. A great experience and one we hope to use in the future. ## What we learned We learned a couple of key things. 1. We learned that describing some things virtually is extremely hard even with video chats available. 2. We learned how to better design 3D models to move smoothly when printed in plastic. ## What's next for Grab Hand If this project were to get redone, it would need a couple of things. 1. It would need a new model of the hand to better mimic the real motion of a human hand. 2. Possibly improved measurement technology beyond flex sensors; while they work, they have low precision and are hard to mount to the hand (possibly use hall sensors??). 3. The glove is currently quite cumbersome with all of the wires attached. Adding a second Arduino to the glove and making the whole thing wireless would add to ease of use and improve the design.
## Inspiration Minecraft has an interesting map mechanic where your character holds a map which "draws itself" while exploring the world. I am also very interested in building a plotter, which is a printer that uses a pen and (XY) gantry to produce images. These ideas seemed to fit together quite well. ## What it does Press a button, copy GPS coordinates and run the custom "gcode" compiler to generate machine/motor driving code for the arduino. Wait around 15 minutes for a 48 x 48 output. ## How we built it Mechanical assembly - Tore apart 3 dvd drives and extracted a multitude of components, including sled motors (linear rails). Unfortunately, they used limit switch + DC motor rather than stepper, so I had to saw apart the enclosure and **glue** in my own steppers with a gear which (you guessed it) was also glued to the motor shaft. Electronics - I designed a simple algorithm to walk through an image matrix and translate it into motor code, that looks a lot like a video game control. Indeed, the stepperboi/autostepperboi main source code has utilities to manually control all three axes like a tiny claw machine :) U - Pen Up D - Pen Down L - Pen Left R - Pen Right Y/T - Pen Forward (top) B - Pen Backwards (bottom) Z - zero the calibration O - returned to previous zeroed position ## Challenges we ran into * I have no idea about basic mechanics / manufacturing so it's pretty slipshod, the fractional resolution I managed to extract is impressive in its own right * Designing my own 'gcode' simplification was a little complicated, and produces strange, pointillist results. I like it though. ## Accomplishments that we're proud of * 24 hours and a pretty small cost in parts to make a functioning plotter! * Connected to mapbox api and did image processing quite successfully, including machine code generation / interpretation ## What we learned * You don't need to take MIE243 to do low precision work, all you need is superglue, a glue gun and a dream * GPS modules are finnicky and need to be somewhat near to a window with built in antenna * Vectorizing an image is quite a complex problem * Mechanical engineering is difficult * Steppers are *extremely* precise, and I am quite surprised at the output quality given that it's barely held together. * Iteration for mechanical structure is possible, but difficult * How to use rotary tool and not amputate fingers * How to remove superglue from skin (lol) ## What's next for Cartoboy * Compacting the design it so it can fit in a smaller profile, and work more like a polaroid camera as intended. (Maybe I will learn solidworks one of these days) * Improving the gcode algorithm / tapping into existing gcode standard
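The custom "gcode" compiler described above essentially walks a binary image matrix and emits pen commands (U/D for pen up/down, plus the directional letters listed above). A tiny sketch of that walk follows; the serpentine scan order and the exact command stream are assumptions about how such a compiler could work, not the project's actual output format.

```python
# Tiny sketch: turn a binary image matrix into pen commands for the plotter.
# Commands follow the letters listed above (U/D pen up/down, R/L horizontal,
# T forward to the next row); the exact stream format is illustrative.
def image_to_commands(pixels: list[list[int]]) -> str:
    cmds = []
    for row_idx, row in enumerate(pixels):
        # Serpentine scan: alternate direction each row so the pen never jumps far.
        cells = row if row_idx % 2 == 0 else row[::-1]
        step = "R" if row_idx % 2 == 0 else "L"
        for pixel in cells:
            cmds.append("D" if pixel else "U")  # lower the pen only on dark pixels
            cmds.append(step)
        cmds.append("U")
        cmds.append("T")  # advance to the next row
    return "".join(cmds)

print(image_to_commands([[0, 1, 1], [1, 0, 1]]))
```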
losing
![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png) # What is gitStarted? GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules. ## Installation To install: ``` npm install ``` ## Usage To run: ``` gulp ``` ## Credits Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)! Made with :heart: in Kingston, Ontario for QHacks 2016
## Inspiration Our idea was inspired by our group's shared interest in musical composition, as well as our interests in AI models and their capabilities. The concept that inspired our project was: "*What if life had a soundtrack?*" ## What it does AutOST generates and produces a constant stream of original live music designed to automatically adjust to and accompany any real-life scenario. ## How we built it We built our project in Python, using the Mido library to send note signals directly to FL Studio, allowing us to play constant audio without the need to export to a file. The whole program is linked up to a live video feed that uses Groq AI's computer vision API to determine the mood of an image and adjust the audio accordingly. ## Challenges we ran into The main challenge we faced in this project was the struggle that came with making the generated music not only sound coherent and good, but also have the capability to adjust according to parameters. It turns out that generating music mathematically is more difficult than it seems. ## Accomplishments that we're proud of We're proud of the fact that our program's music sounds somewhat decent, and also that we were able to brainstorm a concept that (to our knowledge) has not really seen much experimentation. ## What we learned We learned that music generation is much harder than we initially thought, and that AIs aren't all that great at understanding human emotions. ## What's next for AutOST If we continue work on this project post-hackathon, the next steps would be to expand its capabilities for receiving input, allowing it to do all sorts of amazing things such as creating a dynamic soundtrack for video games, or integrating with smart headphones to create tailored background music that would allow users to feel as though they are living inside a movie.
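Streaming generated notes straight into a DAW with Mido, as described above, looks roughly like the sketch below. The output port and note choices are placeholders; the real port name depends on the virtual MIDI device FL Studio is listening on.

```python
# Minimal sketch: stream generated notes to a DAW over MIDI using mido.
# The port is a placeholder; list available ports with mido.get_output_names().
import time
import mido

port = mido.open_output(mido.get_output_names()[0])  # e.g. a virtual MIDI port FL Studio listens on

c_major = [60, 64, 67, 72]  # a detected "mood" could pick different scales and tempi
for note in c_major:
    port.send(mido.Message("note_on", note=note, velocity=80))
    time.sleep(0.4)  # note length; tempo could track the detected mood
    port.send(mido.Message("note_off", note=note))
port.close()
```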
## Inspiration The inspiration for GithubGuide came from our own experiences working with open-source projects and navigating through complex codebases on GitHub. We realized that understanding the purpose of each file and folder in a repository can be a daunting task, especially for beginners. Thus, we aimed to create a tool that simplifies this process and makes it easier for developers to explore and contribute to GitHub projects. ## What it does GithubGuide is a Google Chrome extension that takes any GitHub repository as input and explains the purpose of each file and folder in the repository. It uses the GitHub API to fetch repository contents and metadata, which are then processed and presented in an easily understandable format. This enables developers to quickly navigate and comprehend the structure of a repository, allowing them to save time and work more efficiently. ## How we built it We built GithubGuide as a team of four. Here's how we split the work among teammates 1, 2, 3, and 4: 1. Build a Chrome extension using JavaScript, which serves as the user interface for interacting with the tool. 2. Develop a comprehensive algorithm and data structures to efficiently manage and process the repository data and LLM-generated inferences. 3. Configure a workflow to read repository contents into our chosen LLM ChatGPT model using a reader built on LLaMa - a connector between LLMs and external data sources. 4. Build a server with Python Flask to communicate data between the Chrome extension and LLaMa, the LLM data connector. ## Challenges we ran into Throughout the development process, we encountered several challenges: 1. Integrating the LLM data connector with the Chrome extension and the Flask server. 2. Parsing and processing the repository data correctly. 3. Engineering our ChatGPT prompts to get optimal results. ## Accomplishments that we're proud of We are proud of: 1. Successfully developing a fully functional Chrome extension that simplifies the process of understanding GitHub repositories. 2. Overcoming the technical challenges in integrating various components and technologies. 3. Creating a tool that has the potential to assist developers, especially beginners, in their journey to contribute to open-source projects. ## What we learned Throughout this project, we learned: 1. How to work with LLMs and external data connectors. 2. The intricacies of building a Chrome extension, and how developers have very little freedom when developing browser extensions. 3. The importance of collaboration, effective communication, and making sure everyone is on the same page within our team, especially when merging critically related modules. ## What's next for GithubGuide We envision the following improvements and features for GithubGuide: 1. Expanding support for other browsers and platforms. 2. Enhancing the accuracy and quality of the explanations provided by ChatGPT. 3. Speeding up the pipeline. 4. Collaborating with the open-source community to further refine and expand the project.
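A bare-bones version of the server piece described above, a Flask endpoint that fetches a repository's top-level contents from the GitHub REST API before handing them to the LLM step, might look like this. The route name and response shape are assumptions, not the project's actual interface.

```python
# Minimal sketch: Flask endpoint that lists a repo's top-level files/folders
# via the GitHub REST API. Route name and response shape are illustrative.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/repo/<owner>/<name>")
def repo_contents(owner: str, name: str):
    resp = requests.get(f"https://api.github.com/repos/{owner}/{name}/contents/")
    resp.raise_for_status()
    items = [{"path": it["path"], "type": it["type"]} for it in resp.json()]
    # In the real pipeline these items would be passed to the LLM connector
    # to generate a purpose description for each file and folder.
    return jsonify(items)

if __name__ == "__main__":
    app.run(port=5000)
```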
winning
## Inspiration In recent times, we have witnessed indescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens. ## What it does Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient. ## How we built it On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a Python script that used computer vision to detect humans and capture an image. Furthermore, for the rover, we 3D-printed the top and bottom chassis specifically for this design. After 3D printing, we integrated the Arduino and attached the sensors and motors. We then calibrated the sensors for accurate values. To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C. While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream, and then stores the result in the local directory. If a face was detected, the image is stored on IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using the Google Maps API. ## Challenges we ran into As the team attempted to use the CID from the Estuary database to retrieve the file by using the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed the CID string into the component attributes. By doing this, we were able to retrieve the file. Moreover, our rover was initially designed to work with three 9V batteries (one to power the Arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-polymer battery, which supplied 12V. Since we only had one of those available, we connected both the motor drivers in parallel. ## Accomplishments that we're proud of We are very proud of the integration of hardware and software in our hackathon project.
We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project in a short period of time and to have met all the milestones we set for ourselves at the beginning. ## What we learned The main technical learning we took from this experience was implementing the Estuary API, considering that none of our team members had used it before. This was our first experience using blockchain technology to develop an app that could benefit from the use of public, decentralized data. ## What's next for Rescue Ranger Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives. The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but that were not possible due to the time constraint and the limited access to specialized equipment. * Satellite Mapping -> This would be more accurate than GPS. * LIDAR Sensors -> Can create a 3D render of the area where the person was found. * Heat Sensors -> We could detect people stuck under debris. * Better Cameras -> Would enhance our usage of computer vision technology. * Drones -> Would navigate debris more efficiently than rovers.
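The detection loop on the rover (scan the video stream for a face, save a snapshot, then hand it off with the GPS fix) can be sketched with OpenCV's bundled Haar cascade as below. The upload and GPS helpers are left as placeholders, since the Estuary and Firestore wiring is project-specific.

```python
# Minimal sketch of the rover's detection loop: find a face, save a snapshot,
# then hand it off for upload. upload_to_ipfs()/current_gps() are placeholders
# for the Estuary + Firestore integration described above.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        cv2.imwrite("detection.jpg", frame)
        # cid = upload_to_ipfs("detection.jpg")   # e.g. via Estuary's API
        # save_marker(cid, current_gps())         # e.g. to Firestore
        break

cam.release()
```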
## Inspiration When visiting a clinic, two big complaints that we have are the long wait times and the necessity to use a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless. ## What it does * Touchless is an accessible and contact-free solution for gathering form information. * It allows users to interact with forms using their voice and touchless gestures. * Users use different gestures to answer different questions. * Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no. * Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated. * Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices. ## How we built it * Gesture and voice components are written in Python. * The gesture component uses OpenCV and MediaPipe to map out hand joint positions, from which calculations determine hand symbols (a rough sketch of the finger-counting logic follows this writeup). * SpeechRecognition recognizes user speech. * The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises. * We use AWS API Gateway to open a connection to a custom Lambda function, which has been assigned roles using AWS IAM to restrict access. The Lambda generates a secure key, which it sends along with the data from our form (routed using Flask) to our NoSQL DynamoDB database. ## Challenges we ran into * Tried to set up a Cerner API for FHIR data, but had difficulty setting it up. * As a result, we had to pivot towards using a NoSQL database in AWS as our secure backend database for storing our patient data. ## Accomplishments we’re proud of This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective. ## What we learned We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects. ## What’s next for Touchless In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components.
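Counting raised fingers from MediaPipe's hand landmarks, as used for the 1-5 inputs above, roughly amounts to comparing each fingertip's position against the joint below it. The sketch below illustrates the idea; the thresholds and the thumb heuristic are assumptions, and the project's actual gesture mapping may differ.

```python
# Minimal sketch: count raised fingers from a webcam using MediaPipe Hands.
# Fingertip landmark indices: thumb=4, index=8, middle=12, ring=16, pinky=20.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cam = cv2.VideoCapture(0)

FINGERTIPS = [8, 12, 16, 20]  # the thumb is handled separately (it bends sideways)

while cam.isOpened():
    ok, frame = cam.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # A finger counts as "up" if its tip sits above the joint two below it
        # (image y grows downward, so "above" means a smaller y value).
        count = sum(1 for tip in FINGERTIPS if lm[tip].y < lm[tip - 2].y)
        count += 1 if abs(lm[4].x - lm[2].x) > 0.1 else 0  # rough thumb check
        print("fingers up:", count)

cam.release()
```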
## Inspiration

The cryptocurrency market is an industry which is expanding at an exponential rate. Every day, thousands of new investors of all kinds are getting into this volatile market. With more than 1,500 coins to choose from, it is extremely difficult for those new investors to choose the wisest investment. Our goal is to make it easier for them to select the pearl amongst the sea of cryptocurrencies.

## What it does

To directly tackle the challenge of selecting which cryptocurrency to choose, our website has a compare function which can add up to 4 different cryptos. All of the information from the chosen cryptocurrencies is pertinent and displayed in an organized way. We also have a news feature for investors to follow the trendiest news concerning their precious investments. Finally, we have an awesome bot which will answer any questions the user has about cryptocurrency. Our website is simple and elegant to provide a hassle-free user experience.

## How we built it

We started by building a design prototype of our website using Figma. As a result, we had a good idea of our design pattern, and Figma provided us with some CSS code from the prototype. Our front-end is built with React.js and our back-end with Node.js. We used Firebase to host our website. We fetched cryptocurrency data from multiple APIs (CoinMarketCap.com, CryptoCompare.com, and NewsApi.org) using Axios. Our website is composed of three components: the coin comparison tool, the news feed page, and the chatbot.

## Challenges we ran into

Throughout the hackathon, we ran into many challenges. First, since we had a huge amount of data at our disposal, we had to manipulate it very efficiently to keep the website fast and performant. Then, there were many bugs we had to solve when integrating Cisco's widget into our code.

## Accomplishments that we're proud of

We are proud that we built a web app with three fully functional features. We worked well as a team and had fun while coding.

## What we learned

We learned to use many new APIs, including Cisco Spark and Nuance Nina. Furthermore, we learned to always keep a backup plan for when APIs are not working in our favor. The distribution of the work was good; overall, a great team experience.

## What's next for AwsomeHack

* New stats for the crypto compare tool, such as the number of Twitter and Reddit followers, and keeping track of GitHub commits to provide a level of development activity.
* Sign in, register, portfolio, and watchlist.
* Support for desktop applications (Mac/Windows) with Electron.js.
winning
## Inspiration

The inspiration behind LeafHack stems from a shared passion for sustainability and a desire to empower individuals to take control of their food sources. Witnessing rising grocery costs and the environmental impact of conventional agriculture, we were motivated to create a solution that not only addresses these issues but also lowers the barriers to home gardening, making it accessible to everyone.

## What it does

Our team introduces "LeafHack", an application that leverages computer vision to detect the health of vegetables and plants. The application provides real-time feedback on plant health, allowing homeowners to intervene promptly and nurture a thriving garden. Additionally, the uploaded images can be stored within a database custom to the user. Beyond disease detection, LeafHack is designed to be a user-friendly companion, offering personalized tips and fostering a community of like-minded individuals passionate about sustainable living.

## How we built it

LeafHack was built using a combination of cutting-edge technologies. The core of our solution lies in the custom computer vision model, ResNet9, that analyzes images of plants to identify diseases accurately. We utilized machine learning to train the model on an extensive dataset of plant diseases, ensuring robust and reliable detection (a minimal inference sketch appears at the end of this write-up). The database and backend were built using Django and SQLite. The user interface was developed with a focus on simplicity and accessibility, using Next.js, making it easy for users with varying levels of gardening expertise.

## Challenges we ran into

We encountered several challenges that tested our skills and determination. Fine-tuning the machine learning model to achieve high accuracy in disease detection posed a significant hurdle given the huge time constraint. Additionally, integrating the backend and front end required careful consideration. The image upload was a major hurdle, as there were multiple issues with downloading and opening the image to predict with. Overcoming these challenges involved collaboration, creative problem-solving, and continuous iteration to refine our solution.

## Accomplishments that we're proud of

We are proud to have created a solution that not only addresses the immediate concerns of rising grocery costs and environmental impact but also significantly reduces the barriers to home gardening. Achieving a high level of accuracy in disease detection, creating an intuitive user interface, and fostering a sense of community around sustainable living are accomplishments that resonate deeply with our mission.

## What we learned

Throughout the development of LeafHack, we learned the importance of interdisciplinary collaboration. Bringing together our skills, we expanded our knowledge in computer vision, machine learning, and user experience design to create a holistic solution. We also gained insights into the challenges individuals face when starting their gardens, shaping our approach towards inclusivity and education in the gardening process.

## What's next for LeafHack

We plan to expand LeafHack's capabilities by incorporating more plant species and diseases into our database. Collaborating with agricultural experts and organizations, we aim to enhance the application's recommendations for personalized gardening care.
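To make the pipeline concrete, here is a minimal, hypothetical inference sketch. It assumes a trained ResNet-9-style classifier saved as a full model in `plant_disease.pt` and uses illustrative class labels; it is not LeafHack's actual code.

```python
import torch
from PIL import Image
from torchvision import transforms

CLASSES = ["healthy", "early_blight", "late_blight"]  # hypothetical labels

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

# Assumes the whole model object was saved with torch.save(model, ...)
model = torch.load("plant_disease.pt", map_location="cpu")
model.eval()

def predict(path: str) -> str:
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)        # shape: (1, 3, 256, 256)
    with torch.no_grad():
        logits = model(batch)
    return CLASSES[int(logits.argmax(dim=1))]

print(predict("tomato_leaf.jpg"))
```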
## Inspiration

After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants.

## What it does

Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.

## How we built it

### Back-end:

The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for a websocket connection so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working (a minimal sketch of the idea follows at the end of this write-up). Aside from this, we have an Arduino reading the moistness of the soil and the brightness of the surroundings, as well as capturing a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to handle passing all of the Arduino info to the front end and managing the states, and for storage we used MongoDB to hold all of the data needed.

### Front-end:

The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players.

## Challenges we ran into

We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database.

## Accomplishments that we're proud of

We are incredibly proud of integrating our web sockets between frontend and backend and using Arduino data from the sensors.

## What's next for Poképlants

* Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
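Here is a minimal sketch of the websocket idea described above, assuming Flask-SocketIO and illustrative event names; it is not Poképlants' actual code, just the shape of broadcasting sensor updates to every connected player.

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

plants = {}  # plant_id -> latest sensor readings

@socketio.on("connect")
def handle_connect():
    # A newly connected player immediately receives the current state.
    emit("full_state", plants)

@socketio.on("sensor_update")
def handle_sensor_update(data):
    # e.g. {"plant_id": "blahaj", "moisture": 0.42, "light": 0.8}
    plants[data["plant_id"]] = data
    emit("plant_state", data, broadcast=True)  # push the update to every client

if __name__ == "__main__":
    socketio.run(app, host="0.0.0.0", port=5000)
```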
## Inspiration

Environmental consciousness and a desire to increase the urgency to act.

## What it does

Allows users to create their own plans for countries to "save the world" by changing and reducing their energy consumption.

## How we built it

Using Google Data Studio and Google Cloud to display extensively researched data, plus an embedded, ground-up website.

## Challenges we ran into

Navigating the Data Studio API; collecting and synthesizing large amounts of data; typical web development issues.

## Accomplishments that we're proud of

Building an interactive site, finding good sources, analyzing data, and creating our own website.

## What we learned

We designed first and then implemented, and we learned how much time that saved. We also learned more about how long CSS takes! And we learned to plan around API constraints.

## What's next for Save the World

Create a manual and an action plan for users before and after using the app. Ask them: How did this raise your consciousness of the state of our world?
winning
## Inspiration

Two of our teammates have personal experiences with wildfires: one has lived all her life in California, and one was exposed to a fire in his uncle's backyard in the same state. We found the recent wildfires especially troubling and thus decided to focus our efforts on doing what we could with technology.

## What it does

CacheTheHeat uses different computer vision algorithms to classify fires from cameras and videos, in particular those mounted on households for surveillance purposes. It calculates the relative size and rate of growth of the fire in order to alert nearby residents if the wildfire may potentially pose a threat. It hosts a database with multiple video sources so that warnings can be far-reaching and effective.

## How we built it

This software detects the sizes of possible wildfires and the rate at which those fires are growing using computer vision/OpenCV (a rough sketch of the detection loop appears at the end of this write-up). The web application gives a pre-emptive warning (phone alerts) to nearby individuals using Twilio. It has a MongoDB Stitch database of both surveillance-type videos (as in campgrounds, drones, etc.) and neighborhood cameras that can be continually added to, depending on which neighbors/individuals sign the agreement form using DocuSign. We hope this will help creatively deal with wildfires in the future.

## Challenges we ran into

Among the difficulties we faced, we had the most trouble understanding how to apply the multiple relevant DocuSign solutions within our project, as per our individual specifications. For example, our team wasn't sure how we could use something like the text tab to enhance the features within our client's agreement. One other thing we were not fond of was that DocuSign logged us out of the sandbox every few minutes, which was sometimes a pain. Moreover, the development environment sometimes seemed a bit cluttered at a glance, which discouraged us from using their API. There was a bug in Google Chrome where Authorize.Net (DocuSign's affiliate) could not process payments due to browser-specific misbehavior. This was brought to the attention of DocuSign staff. One more unfortunate thing was that DocuSign's GitHub examples included certain required fields for initializing, but the description of these fields would differ between code examples and documentation. For example, "ACCOUNT\_ID" might be a synonym for "USERNAME" (not exactly, but the same idea).

## Why we love DocuSign

Apart from the fact that the mentorship team was amazing and super helpful, our team noted a few things about their API. Helpful documentation existed on GitHub with up-to-date code examples clearly outlining the dependencies required, as well as offering helpful comments. Most importantly, DocuSign contains everything from A-Z for all enterprise signature/contractual document processing needs. We hope to continue hacking with DocuSign in the future.

## Accomplishments that we're proud of

We are very happy to have experimented with the power of enterprise solutions in making a difference while hacking for resilience. Wildfires, among the most devastating natural disasters in the US, have had a huge impact on residents of states such as California. Our team has been working hard to leverage existing residential video footage systems for high-risk wildfire neighborhoods.

## What we learned

Our team members learned concepts of various technical and fundamental utility. To list a few such concepts: MongoDB, Flask, Django, OpenCV, DocuSign, and fire safety.
## What's next for CacheTheHeat.com

Cache the Heat is excited to commercialize this solution, with the support of the Wharton Risk Center if possible.
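As a rough sketch of the detection loop mentioned above (our own assumptions, not CacheTheHeat's exact pipeline): flag fire-coloured regions with an HSV threshold, track their area over time to estimate growth, and trigger an alert when both exceed tuned thresholds. The Twilio call is left commented out.

```python
import cv2
import numpy as np
# from twilio.rest import Client  # alerting omitted for brevity

LOWER_FIRE = np.array([0, 120, 200])    # hypothetical HSV bounds for flames
UPPER_FIRE = np.array([35, 255, 255])

cap = cv2.VideoCapture("surveillance.mp4")
prev_area = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_FIRE, UPPER_FIRE)
    area = int(cv2.countNonZero(mask))     # proxy for fire size
    growth = area - prev_area              # proxy for rate of growth
    if area > 5000 and growth > 500:       # thresholds would need tuning per camera
        print("Potential wildfire detected, notifying nearby residents...")
        # Client(SID, TOKEN).messages.create(to=..., from_=..., body=...)
    prev_area = area
cap.release()
```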
## Inspiration

Living in the big city, we're often conflicted between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters and, after specifying their preferences, select the events that are predicted to be of highest relevance to them.

## What it does

Our application uses computer vision and natural language processing to filter notice board information, delivering pertinent and relevant information to our users based on selected preferences. This mobile application lets users first choose the different categories that they are interested in knowing about; they can then either take or upload photos, which are processed using Google Cloud APIs. The labels generated from the APIs are compared with the chosen user preferences to display only applicable postings.

## How we built it

The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision text detection to obtain blocks of text, which are then labelled appropriately with the Google Natural Language Processing API (a hedged sketch of this two-step pipeline appears at the end of this write-up). The categories this returns are compared to user preferences, and matches are returned to the user.

## Challenges we ran into

One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly. We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, finding the proper method of first making the comparison of categories to labels prior to the final component being rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation.

## Accomplishments that we're proud of

We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user.

## What we learned

We were at most familiar with ReactJS; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances between them as we passed user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols.

## What's next for notethisboard

Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity.
The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board and return more reliable results. The app could also be extended to identify logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
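Here is a hedged sketch of the two-step pipeline described above (OCR the noticeboard, then label each text block and compare against user preferences). The function names and the preference-matching rule are our own assumptions, and the team called the REST APIs from React Native rather than using the Python client libraries shown here.

```python
from google.cloud import vision, language_v1

def relevant_posters(image_path, preferences):
    """Return OCR'd text blocks whose top-level NL category matches a preference."""
    vision_client = vision.ImageAnnotatorClient()
    lang_client = language_v1.LanguageServiceClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = vision_client.document_text_detection(image=image)

    matches = []
    for page in response.full_text_annotation.pages:
        for block in page.blocks:
            # Reassemble the block's text from its words and symbols.
            text = " ".join(
                "".join(sym.text for sym in word.symbols)
                for para in block.paragraphs for word in para.words
            )
            doc = language_v1.Document(
                content=text, type_=language_v1.Document.Type.PLAIN_TEXT
            )
            categories = lang_client.classify_text(document=doc).categories
            if any(c.name.split("/")[1].lower() in preferences
                   for c in categories if "/" in c.name):
                matches.append(text)
    return matches

# print(relevant_posters("noticeboard.jpg", {"music & audio", "jobs & education"}))
```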
## Inspiration

I wanted to create a game that had a unique premise and controls.

## What it does

A puzzle game that has the player control four characters at once.

## How I built it

I used Python along with Pygame to complete this project.

## Challenges I ran into

At first I attempted to use Unity to complete this project; however, I was not proficient enough in it to implement the complicated logic I had planned for the game.

## Accomplishments that I'm proud of

Designing the levels and creating a unique concept for a game.

## What I learned

Re-learned the intricacies of Python and Pygame.

## What's next for Splitter

More levels and features to challenge the player.
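As a small illustration of the core mechanic (one input moving four characters at once), here is a minimal Pygame sketch; it is our own toy example, not the actual Splitter code.

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

# Four characters that all respond to the same input.
characters = [pygame.Rect(100 + i * 120, 200, 32, 32) for i in range(4)]
MOVES = {pygame.K_LEFT: (-32, 0), pygame.K_RIGHT: (32, 0),
         pygame.K_UP: (0, -32), pygame.K_DOWN: (0, 32)}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key in MOVES:
            dx, dy = MOVES[event.key]
            for c in characters:          # every character shares the same move
                c.move_ip(dx, dy)
    screen.fill((30, 30, 30))
    for c in characters:
        pygame.draw.rect(screen, (200, 120, 60), c)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```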
winning
## Why VBN?

There are primarily two groups whom VBN benefits:

1. Buyers - before purchasing a vehicle, you can be confident in your understanding of its current state.
2. Owners - have access to all of **your** data without having to pay a third-party service.

In general, benefits include:

**Avoid Fees**

As opposed to competing reporting products, VBN will provide all necessary details at a glance, for free and at any time.

**Track Ownership Changes**

The MTO/DMV will be able to generate the encrypted VIN, and then use that to access the records and signify that the ownership has changed during the registration process.

**Track Accident Reports**

Police/collision centres will be able to generate the encrypted VIN, and then use that to access the records and add information about the crash.

**Monitor Suspicious Activity**

When police are notified that the car was stolen, they can generate the encrypted VIN and add a warning message to the records. Furthermore, if your VIN has been used in connection with a scam, you would be able to flag suspicion.

**Track Repairs/Services**

When the vehicle is serviced, the mechanic shop can update the car's records with details of what was looked at. This is necessary in understanding whether the changes were cosmetic or linked with damage. Odometer readings will also be reported.

## What it does

Taking advantage of the immutable nature of blockchain, the Vehicle Blockchain Network (VBN) provides a vehicle's history through records which include dates and descriptions of manufacturing, insurance claims, ownership changes, maintenance, etc.

## Blockchain Integration

**Wallet Usage**

The purpose of integrating blockchain into our concept is to emphasize the importance of an immutable ledger. This public ledger contains information that is permanently attached to a user key, which in this concept is a hashed version of a vehicle's VIN (illustrated in the sketch below). When a user inputs an encrypted VIN into the interface, the ledger and transaction history are populated into the UI. We were able to achieve this functionality through Hedera, where we distributed accounts with special privileges that allowed the writing of information to a VIN's ledger. Each transaction that is made contains encrypted information about the type of report and relevant details.

**Next Steps**

Being restricted to a specific stream of blockchain limited our options to expand. With further development and research, we will look at moving our system to *Smart Contracts* or *File Transfer* services.

### User Flow

**Regular User**

If a user does not log in, they are able to view the records for any vehicle through an encryption of its VIN. Vehicle owners will possess a QR code which can be scanned to make the process of inputting the string easier.

**Authorized User**

If a user logs in with an authorized email, then they have the ability to add new records based on their role. For example, an MTO employee will be able to create a record signifying a change in ownership, and an insurance agent will be able to create a record representing an insurance claim.

## How We Built It

VBN is built on a Django backend, with a frontend using Bootstrap components and styling. We used Hedera to handle the transactions which are used for creating records.
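The sketch below is an illustration only (our assumptions, not VBN's actual Hedera code): the record key is a hash of the VIN, and each report is an append-only entry tied to that key. The Hedera submission itself is stubbed out.

```python
import hashlib
import json
import time

def vin_key(vin: str) -> str:
    """Hash the VIN so the public ledger never exposes the raw VIN."""
    return hashlib.sha256(vin.strip().upper().encode()).hexdigest()

def build_record(vin: str, report_type: str, details: str, reporter: str) -> str:
    record = {
        "key": vin_key(vin),
        "type": report_type,      # e.g. "ownership_change", "accident", "service"
        "details": details,
        "reporter": reporter,     # authorized account (MTO, police, mechanic shop)
        "timestamp": int(time.time()),
    }
    return json.dumps(record)

# submit_to_ledger(build_record("1HGCM82633A004352", "service",
#                               "Brake pads replaced, odometer 88120 km",
#                               "mechanic@shop.example"))
```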
## Challenges We Ran Into

* Learning about blockchain and implementing a solution in a weekend
* A DigitalOcean outage at 2 AM on Sunday
* Not reading documentation and implementing functions that already existed
* One of our four laptops bricked in the process
* One teammate, using a Queen's computer because their laptop had died, was booted off by a scheduled Windows update at 3 AM

## What We Learned

* Blockchain, its various use cases outside of crypto, and how to implement it
* Perseverance
## Inspiration

Our inspiration comes from telemarketing surveys. We wanted to create a sort of "prank call", especially for our friends, where the call would be a super-realistic voice presenting a survey to them. In the end, we decided to program a chatbot that conducts a survey via phone and asks people how they feel about AI.

## What it does

Our project is a chatbot that conducts a survey of the population of London, Ontario about their own thoughts and beliefs towards Artificial Intelligence. The chatbot presents a series of multiple-choice questions as well as open-ended questions about the perception and knowledge of AI. The answers are recorded and analyzed before being sent to our website, where the data is presented. The purpose is to give a score on how well our target population knows AI, and how well they would survive an AI apocalypse.

## How we built it

We built it using Dasha AI.

## Challenges we ran into

The first challenge we ran into is that the application (AI) hangs up when there is a longer delay between the question and the user's response. The second challenge is that the AI skipped the last questions and automatically exited and hung up during the first test of our application.

## Accomplishments that we're proud of

This is the first hackathon that most of our members have participated in. Therefore, being able to challenge ourselves and build a complex project in a span of 36 hours is the greatest achievement we have accomplished.

## What we learned

* The basics of Dasha AI and how to use it to develop software.
* Fostered our skills in web design.

## What's next for Boom or Doom: The Future of AI

**Target a larger population**

## If you want to try it out for yourself:

Clone the GitHub repo and download NodeJS and Dasha! <https://dasha.ai/en-us> More instructions on setting up Dasha are available here.
## Inspiration

Our goal was to implement a social feature that would attract students to Radish's services. We were inspired by the ice cream store in McGill's Engineering building, which gives out free ice cream when you fail an exam.

## What it does

It asks the person to submit proof that they failed, checks it, and gives out a discount code for any restaurant affiliated with Radish.

## How we built it

Front-end: React
Backend: Python

## Challenges we ran into

Animations, and connecting the front-end and backend to make the feature functional.

## Accomplishments that we're proud of

Used React for the first time, and general resilience.

## What we learned

Image processing, React, and how to search for resources online efficiently.

## What's next for Radishes & Failures

Connecting the front-end with the back-end, connecting to Radish's own platform, and bringing comfort in failure :)
partial
## Inspiration

Grip strength has been shown to be a powerful biomarker for numerous physiological processes. Two particularly compelling examples are Central Nervous System (CNS) fatigue and overall propensity for Cardiovascular Disease (CVD). The core idea is not about building a hand grip strengthening tool, as this need is already largely satisfied within the market by traditional hand grip devices. Rather, it is about building a product that leverages the insights behind one’s hand grip to help users make more informed decisions about their physical activities and overall well-being.

## What it does

Gripp is a physical device that users can squeeze to measure their hand grip strength in a low-cost, easy-to-use manner. The resulting measurements can be benchmarked against previous values taken by oneself, as well as against comparable peers. These are used to provide intelligent recommendations on optimal fitness/training protocols by providing deeper, quantifiable insights into recovery.

## How we built it

Gripp was built using a mixture of hardware and software. On the hardware front, the project began with a Computer-Aided Design (CAD) model of the device. With the requirement to build around the required force sensors and accompanying electronics, the resulting model was customized exclusively for this product and subsequently 3-D printed. Other considerations included the ergonomics of holding the device and adaptability depending on the hand size of the user. Exerting force on the Wheatstone bridge sensor causes it to measure the voltage difference produced by minute changes in resistance. These changes in resistance are amplified by the HX711 amplifier and converted using an ESP32 into a force measurement. From there, the data flows into a MySQL database served via Apache for the corresponding user, before finally going to the front-end interface dashboard.

## Challenges we ran into

There were several challenges that we ran into. On the hardware side, getting the hardware to consistently output a force value was challenging. Further, listening in on the COM port, interpreting the serial data flowing in from the ESP32, and getting it to interact with Python (where it needed to be in order to flow through the Flask endpoint to the front end) was challenging (a rough sketch of this step appears at the end of this write-up). On the software side, our team was challenged by the complexities of the operations required, most notably the front-end components, with minimal experience in React across the board.

## Accomplishments that we're proud of

Connecting the hardware to the back-end database to the front-end display, and facilitating communication both ways, is what we are most proud of, as it required navigating several complex issues to reach a sound connection.

## What we learned

The value of having another pair of eyes on code rather than trying to individually solve everything. While the latter is often possible, it is a far less efficient methodology (especially when around others).

## What's next for Gripp

Next for Gripp on the hardware side is continuing to test other prototypes of the hardware design, as well as materials (e.g., a silicone mould as opposed to plastic), and facilitating the hardware/software connection via Bluetooth. From a user-interface perspective, it would be optimal to move from a web-based application to a mobile one. On the front-end side, continuing to build out other pages will be critical (trends, community), as well as additional features (e.g., a readiness score).
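Here is a hedged sketch of the serial-to-backend step described above. The port name, baud rate, and endpoint are our own assumptions, not Gripp's actual code; it simply reads one force value per line from the ESP32 over the COM port with pyserial and forwards it to a Flask endpoint.

```python
import serial
import requests

PORT = "COM3"            # on Linux/macOS this would be e.g. /dev/ttyUSB0
BAUD = 115200
API_URL = "http://localhost:5000/api/grip"   # hypothetical Flask endpoint

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    while True:
        line = ser.readline().decode("utf-8", errors="ignore").strip()
        if not line:
            continue
        try:
            force = float(line)          # assume the ESP32 prints one value per line
        except ValueError:
            continue                     # skip partial or garbled lines
        requests.post(API_URL, json={"user_id": "demo", "force_n": force})
```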
## Inspiration

Kevin, one of our team members, is an enthusiastic basketball player who frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happened away from the doctor's office - he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgery. Likewise, they need to do at-home exercises individually, without supervision. For these patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback to patients and help them improve their rehab exercise form. At the same time, reports are generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.

## What it does

Through a mobile app, patients are able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, where the doctor receives a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm also provides suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise.

## How we built it

At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the timescale body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the two types of clients we support: an iOS app and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync layer. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.

## Challenges we ran into

One of the major challenges we ran into was interfacing each technology with the others. Overall, the data pipeline involves many steps that, while each critical in itself, also involve too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
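As an illustration of the kind of measurement such a pipeline can produce, here is a small hedged sketch that computes a joint angle from three (x, y) pose keypoints. The keypoint values and the rep-flagging idea are our own assumptions, not phys.io's actual code.

```python
import numpy as np

def joint_angle(hip, knee, ankle) -> float:
    """Angle at the knee, in degrees, from (x, y) keypoints."""
    a, b, c = map(np.asarray, (hip, knee, ankle))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# e.g. flag a squat rep where the knee angle never drops below ~100 degrees
print(joint_angle((0.42, 0.50), (0.45, 0.65), (0.44, 0.82)))
```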
# AuthDaddy: 2FA Using Biometric Data

[![GitHub Repository](https://img.shields.io/badge/GitHub-Explore%20the%20Code-blue?logo=github)](https://github.com/Mahajanet/AuthDaddy)

AuthDaddy is a groundbreaking Two-Factor Authentication (2FA) solution developed during PennApps XXIV (2023). Unlike traditional 2FA systems that rely on multiple electronic devices and complex network infrastructure, AuthDaddy simplifies the authentication process by harnessing a user's unique biometric typing patterns. This innovative approach not only enhances security but also addresses critical environmental concerns associated with conventional 2FA methods.

## Table of Contents

1. [Team](#team)
2. [Abstract](#abstract)
3. [How It Works](#how-it-works)
4. [Technologies Used](#technologies-used)
5. [Advantages](#advantages)
6. [Acknowledgments](#acknowledgments)

## Team

Meet the talented undergraduate students from Rice University and the University of Pennsylvania who brought AuthDaddy to life:

* **Michael Khalfin**
  + School: Rice University
  + Email: [mlk15@rice.edu](mailto:mlk15@rice.edu)
  + LinkedIn: [Profile](https://www.linkedin.com/in/michael-khalfin-87551b20b/)
* **Marko Tanevski**
  + School: Rice University
  + Email: [mt102@rice.edu](mailto:mt102@rice.edu)
  + LinkedIn: [Profile](https://www.linkedin.com/in/marko-tanevski/)
* **Jahnavi Mahajan**
  + School: Rice University
  + Email: [jm139@rice.edu](mailto:jm139@rice.edu)
  + LinkedIn: [Profile](https://www.linkedin.com/in/jahnavi-mahajan-b97892251/)
* **Leo Lungu**
  + School: University of Pennsylvania
  + Email: [leolungu@sas.upenn.edu](mailto:leolungu@sas.upenn.edu)
  + LinkedIn: [Profile](https://www.linkedin.com/in/leonardlungu/)

## Abstract

AuthDaddy revolutionizes the concept of 2FA by eliminating the need for multiple server requests, additional electronic devices, and complex infrastructure. Traditional 2FA methods often result in excessive energy consumption due to server GET/POST requests, waste users' time and energy, and leave a substantial carbon footprint. Moreover, they frequently involve physical tokens or smart cards, contributing to transportation costs and environmental waste.

### How It Works

AuthDaddy's innovative approach to 2FA relies on the following key principles:

1. **Biometric Typing Patterns**: AuthDaddy generates a highly secure and custom biometric profile based on a user's unique typing patterns. This profile serves as a reliable identifier (a small illustrative sketch appears at the end of this README).
2. **Stats-Based API**: The system provides a statistics-based API that seamlessly integrates with various web platforms. This API enables other web services to verify a user's identity without the need for traditional 2FA methods.

### Technologies Used

AuthDaddy was developed using a powerful combination of technologies, including:

* **MATLAB**: MATLAB was instrumental in processing and analyzing the biometric typing patterns, ensuring the accuracy and security of user identification.
* **JavaScript (JS)**: JavaScript played a crucial role in developing the interactive web platform, allowing users to seamlessly interact with AuthDaddy.
* **HTML and CSS**: HTML and CSS were used to create the user interface, providing an intuitive and visually appealing experience.

These technologies were vital in creating a robust and user-friendly 2FA solution.
### Advantages

AuthDaddy offers numerous advantages:

* **Sustainability**: By reducing the reliance on additional hardware and minimizing server requests, AuthDaddy significantly decreases energy consumption and carbon emissions, making it an eco-friendly solution.
* **User-Friendly**: Users are not burdened with the need for external devices or mobile apps, enhancing the user experience and saving time and energy.
* **Security**: Biometric typing patterns offer a robust and personalized authentication method, enhancing security.

## Acknowledgments

We extend our heartfelt gratitude to the entire PennApps team and our generous sponsors for their unwavering support and encouragement during this exciting journey.

[Explore the Code](https://github.com/Mahajanet/AuthDaddy)
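For illustration, here is a minimal Python sketch of the keystroke-timing idea described above. The feature names, tolerance, and profile values are our own assumptions, and the team's real implementation used MATLAB and JavaScript rather than Python.

```python
import statistics

def timing_features(events):
    """events: list of (key, press_time, release_time) tuples, times in seconds."""
    holds = [release - press for _, press, release in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return {"hold_mean": statistics.mean(holds),
            "flight_mean": statistics.mean(flights)}

def matches_profile(sample, profile, tolerance=0.25):
    """True if every feature is within a relative tolerance of the stored profile."""
    return all(abs(sample[k] - profile[k]) <= tolerance * profile[k] for k in profile)

profile = {"hold_mean": 0.11, "flight_mean": 0.18}   # stored enrollment profile
sample = {"hold_mean": 0.12, "flight_mean": 0.20}    # features from a login attempt
print(matches_profile(sample, profile))              # accept or reject the typist
```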
winning
## Inspiration

We were going to build a themed application to time-portal you back to various points in the internet's history that we loved, but we found out prototyping with retro-looking components is tough. Building each component takes a long time, and even longer to code. We started by automating parts of this process, kept going, and ended up focusing all our efforts on automating component construction from simple Figma prototypes.

## What it does

Give the plugin a Figma frame that has a component roughly sketched out in it. Our code will parse the frame and output JSX that matches the input frame. We use semantic detection with Cohere Classify on the button labels, combined with deterministic algorithms on the width, height, etc., to determine whether a box is a button, input field, and so on. It's like magic! Try it!

## How we built it

Under the hood, the plugin is a transpiler for high-level Figma designs. Similar to a C compiler compiling C code to binary, our plugin uses an abstract syntax tree-like approach to parse Figma designs into HTML code. Figma stores all its components (buttons, text, frames, input fields, etc.) in nodes. Nodes store properties about the component or type of element, such as height, width, absolute position, and fills, as well as its children nodes - other components that live within the parent component. Consequently, these nodes form a tree. Our algorithm starts at the root node (root of the tree) and traverses downwards, pushing the generated HTML up from the leaf nodes to the root. The base case is when the component is 'basic', one that can be represented with two or fewer HTML tags. These are our leaf nodes. Examples include buttons, body text, headings, and input fields. To recognize whether a node is a basic component, we leveraged the power of an LLM. We parsed the information stored in the node given to us by Figma into English sentences, then used them to train/fine-tune our classification model provided by co:here. We decided to use ML to do this since it is more flexible to unique and new designs. For example, we were easily able to create 8 different designs of a destructive button, and it would have been time-consuming relative to the length of this hackathon to come up with a deterministic algorithm. We also opted to parse the information into English sentences instead of just feeding the model raw Figma node information, since the LLM would have a hard time understanding data that didn't resemble a human language. At each node level in the tree, we grouped the children nodes based on a visual hierarchy. Humans do this all the time: if things are closer together, they're probably related, and we naturally group them. We achieved a similar effect by calculating the spacing between each component, then greedily grouping them based on spacing size. Components whose spacings were within a tolerance percentage of each other were grouped under one HTML `<div>` (a toy version of this grouping is sketched at the end of this write-up). We also determined the alignments (cross-axis, main-axis) of these grouped children to handle designs with different combinations of orientations. Finally, the function recurses on their children, and their converted code is pushed back up to the parent to be composited, until the root contains the code for the design. Our recursive algorithm made our plugin flexible to the countless designs possible in Figma.

## Challenges we ran into

We ran into three main challenges. One was calculating the spacing.
While it was easy to just apply an algorithm that merges two components at a time (similar to mergesort), that would produce too many nested divs and wouldn't really be useful for developers using the created component. So we came up with our greedy algorithm. However, due to our perhaps mistaken focus on efficiency, we decided to implement a more difficult O(n) algorithm to determine spacing, where n is the number of children. This sapped a lot of time away which could have been used for other tasks and supporting more elements. The second main challenge was with ML. We were actually using Cohere Classify wrongly, not taking semantics into account and trying to feed it raw numerical data. We eventually settled on using ML for what it was good at - semantic analysis of the label - while using deterministic algorithms to take other factors into account. Huge thanks to the Cohere team for helping us during the hackathon! Especially Sylvie - you were super helpful! We also ran into issues with theming on our demo website. To show how extensible and flexible theming could be on our components, we offered three themes - Windows XP, Windows 7, and a modern web layout. We were originally only planning to write out the code for Windows XP, but extending the component system to take themes into account was a refactor that took quite a while and detracted from our plugin algorithm refinement.

## Accomplishments that we're proud of

We honestly didn't think this would work as well as it does. We've never built a compiler before, and from learning off blog posts about parsing abstract syntax trees to implementing and debugging highly asynchronous tree algorithms, we're proud of ourselves for learning so much and building something that is genuinely useful for us on a daily basis.

## What we learned

Leetcode tree problems actually are useful, huh.

## What's next for wayback

More elements! We can currently only detect buttons, text form inputs, text elements, and pictures. We want to support forms too, and automatically insert the controlling components (e.g. useState) where necessary.
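Here is a toy Python version of the greedy spacing-based grouping described above. The real plugin runs inside Figma in TypeScript, so the data shapes and tolerance here are our own assumptions, not the actual implementation.

```python
def group_by_spacing(children, tolerance=0.2):
    """children: list of dicts with 'y' and 'height', already sorted by y.
    Neighbours whose gaps stay within a tolerance of the group's first gap
    are merged into one group (which would become one <div> in the JSX)."""
    groups, current = [], [children[0]]
    base_gap = None
    for prev, node in zip(children, children[1:]):
        gap = node["y"] - (prev["y"] + prev["height"])
        if base_gap is None:
            base_gap = gap
        if abs(gap - base_gap) <= tolerance * max(base_gap, 1):
            current.append(node)
        else:
            groups.append(current)          # start a new visual group
            current, base_gap = [node], None
    groups.append(current)
    return groups

boxes = [{"y": 0, "height": 20}, {"y": 24, "height": 20},
         {"y": 48, "height": 20}, {"y": 120, "height": 40}]
print([len(g) for g in group_by_spacing(boxes)])   # e.g. [3, 1]
```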
## What it does

"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles and summarizing your words into bullet points.

## How We built it

Our project is comprised of many interconnected components, which we detail below:

#### Formatting Engine

To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom JavaScript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.

#### Voice-to-text

We use Google’s speech recognition to process audio from the microphone of the laptop. Mobile phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed.

#### Topic Analysis

Fundamentally we needed a way to determine whether a given sentence included a request for an image or not. So we gathered a repository of sample sentences from BBC news articles for “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s deep learning text classification library, FastText, to train a custom NN that could perform text classification.

#### Image Scraping

Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part-of-speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image URLs are then sent over websockets to be rendered on screen.

#### Graph Generation

Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real time.

#### Sentence Segmentation

When we receive text back from the speech recognition API, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis) some trouble because the text is grammatically incorrect. We used a sequence-to-sequence transformer architecture, *seq2seq*, and transfer-learned a new head that was capable of classifying the borders between sentences. This was then able to add punctuation back into the text before the rest of the processing pipeline.

#### Text Title-ification

Using part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title for a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs (a simplified sketch appears at the end of this write-up). If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
#### Text Summarization When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous. #### Mobile Clicker Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets. #### Internal Socket Communication In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides. ## Challenges We ran into * Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis. * The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop. ## Accomplishments that we're proud of * Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques. * Working on an unsolved machine learning problem (sentence simplification) * Connecting a mobile device to the laptop browser’s mic using WebSockets * Real-time text analysis to determine new elements ## What's next for ImpromPPTX * Predict what the user intends to say next * Scraping Primary sources to automatically add citations and definitions. * Improving text summarization with word reordering and synonym analysis.
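As a rough, simplified illustration of the title-extraction idea (not the team's actual pipeline, and skipping their seq2seq punctuation step), a spaCy-based sketch might look like this: pick the verb with the most dependents and use a short span around it as a slide title.

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def make_title(text: str, max_words: int = 5) -> str:
    doc = nlp(text)
    verbs = [t for t in doc if t.pos_ == "VERB"]
    if not verbs:
        return " ".join(w.text for w in list(doc)[:max_words])
    # The verb with the most direct dependents anchors the title.
    root = max(verbs, key=lambda t: len(list(t.children)))
    span = sorted(list(root.children) + [root], key=lambda t: t.i)
    return " ".join(t.text for t in span[:max_words])

print(make_title("Today we are announcing a brand new product for dog owners"))
```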
## **Inspiration:**

Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.

## **What it does:**

Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.

## **How we built it**:

We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders. (A stubbed sketch of the context-combination step appears at the end of this write-up.)

## **Challenges we ran into**:

Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.

## **Accomplishments that we're proud of**:

Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One of the aspects of our implementation that separates us from other approaches is that we allow the user to perform ICL (In-Context Learning), a feature that not many large language models allow the user to do seamlessly.

## **What we learned**:

We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
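The sketch below illustrates the context-combination step only; the route name and payload fields are our own assumptions, and the Mathpix and GPT-4 calls are stubbed out since their exact usage isn't shown in the write-up.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def call_gpt4(prompt: str) -> str:
    # Placeholder for the actual LLM call used by the platform.
    return "(model response would go here)"

@app.route("/explain", methods=["POST"])
def explain():
    payload = request.get_json()
    screen_text = payload.get("screen_text", "")   # e.g. text extracted from the screen
    spoken_doubt = payload.get("transcript", "")   # speech-to-text output
    prompt = (
        "The learner is looking at the following material:\n"
        f"{screen_text}\n\n"
        f"They asked, in their own words: {spoken_doubt}\n"
        "Explain the specific point of confusion in simple terms."
    )
    return jsonify({"answer": call_gpt4(prompt)})

if __name__ == "__main__":
    app.run(port=5000)
```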
winning
# yhack JuxtaFeeling is a Flask web application that visualizes the varying emotions between two different people having a conversation through our interactive graphs and probability data. By using the Vokaturi, IBM Watson, and Indicoio APIs, we were able to analyze both written text and audio clips to detect the emotions of two speakers in real-time. Acceptable file formats are .txt and .wav. Note: To differentiate between different speakers in written form, please include two new lines between different speakers in the .txt file. Here is a quick rundown of JuxtaFeeling through our slideshow: <https://docs.google.com/presentation/d/1O_7CY1buPsd4_-QvMMSnkMQa9cbhAgCDZ8kVNx8aKWs/edit?usp=sharing>
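A tiny sketch of the speaker-splitting convention mentioned above (our own illustration; the actual sentiment scoring via Vokaturi, IBM Watson, and Indico is omitted): turns separated by blank lines alternate between the two speakers.

```python
def split_speakers(path: str):
    """Split a .txt transcript into two speakers, alternating on blank lines."""
    with open(path, encoding="utf-8") as f:
        turns = [t.strip() for t in f.read().split("\n\n") if t.strip()]
    speaker_a = turns[0::2]   # 1st, 3rd, 5th turns...
    speaker_b = turns[1::2]   # 2nd, 4th, 6th turns...
    return speaker_a, speaker_b

# a_turns, b_turns = split_speakers("conversation.txt")
# Each list can then be scored per turn to plot emotion over the conversation.
```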
## Overview

We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would on a set of Smart Glasses.

## Inspiration

Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others’ emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let’s see if we could use our weekend to build something that can help out!

## What it does

SmartEQ determines the mood of the person in frame, based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and sentiment analysis of the conversation, and provides a percentage of confidence in its answer (a toy sketch of this fusion step appears at the end of this write-up). SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with.

## How we built it

The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis based on this text, and the Face API to predict the emotion of the person in frame.

## Challenges we ran into

Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon and they both came solo. Together, we ended up forming a pretty awesome team :D. But that’s not to say everything went as perfectly as our new-found friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests that our free accounts permitted, encountered very weird Socket.IO bugs that made our entire hack break, and had to make sure Max didn’t drink less than 5 Red Bulls per hour.

## Accomplishments that we're proud of

We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just from their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment analysis to develop a holistic approach to interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency.

## What we learned

We learnt about web sockets, how to develop a web app using web sockets, and how to debug web socket errors.
We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral score, which makes no sense whatsoever because dogs are definitely way cuter than cats. (Zarif strongly disagrees)

## What's next for SmartEQ

We would deploy our hack onto real Smart Glasses :D. This would allow us to deploy our tech in real life, first with small sample groups to figure out what works and what doesn't, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma and can leave people unable to process facial expressions. In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that employees can integrate into their online meeting tools. This is especially valuable for HR managers, who could monitor their employees’ emotional well-being at work and implement initiatives to help if their employees are not happy.
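A toy sketch of the fusion step described above (the weights, labels, and mapping from sentiment to emotions are our own assumptions, not SmartEQ's actual code): face-emotion probabilities and the text sentiment score are combined into one most-probable emotion with a confidence percentage.

```python
def fuse_emotions(face_scores: dict, sentiment: float, w_face: float = 0.7):
    # face_scores: e.g. {"happiness": 0.55, "sadness": 0.2, ...} from the Face API
    # sentiment: 0..1 score from text analytics (0 = negative, 1 = positive)
    text_scores = {"happiness": sentiment, "sadness": 1.0 - sentiment}
    combined = {}
    for emotion, p in face_scores.items():
        combined[emotion] = w_face * p + (1 - w_face) * text_scores.get(emotion, 0.0)
    best = max(combined, key=combined.get)
    confidence = combined[best] / sum(combined.values())
    return best, round(100 * confidence, 1)

print(fuse_emotions({"happiness": 0.55, "sadness": 0.2, "anger": 0.25}, 0.8))
# -> ('happiness', 62.5)
```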
## Inspiration

Each year in China alone, 20,000 to 200,000 children go missing, are kidnapped, or are trafficked. They are usually forced into labor, auctioned illegally for adoption, or even entrapped in gangs. People witness them on the street, but the reporting and identification process is frustratingly time-consuming. Convenient and timely identification checks are essential for further action. We aspire to seek solutions in light of face recognition technology.

## What it does

Our web-based app allows users to match their cloud photo repositories with a missing person database. Upon confident matches, the app will prompt identifications to the user for further investigation. In detail, the app does:

* Get photo access authorization from repositories (Google Photos for the submission)
* Identify distinct faces in each photo
* Match each face with identities in the databases
* Return photo pairs of 2 similar faces (and mark them)
* Render paired results on the website and allow users to decide if we have a match

## How we built it

The app was written in Node.js with the Express framework. Face recognition is achieved with the Microsoft Face API for face detection and similarity matching. We feed source user photos from the Google Picasa API. Since there is not yet a publicly available missing person database, we created a sample MySQL database on AWS. The front-end is written in HTML, CSS, and JavaScript with Bootstrap.

## Challenges we ran into

* Comparing different Face APIs and working around their constraints, of which there have been a lot
* Node.js - a new web tool for the entire team

## Accomplishments that we're proud of

We are proud to have come up with this powerful idea and made a preliminary product within such a short time with new tools.

## What we learned

All components of a web application on Node; in particular, we learned to write async programs. App development as a team.

## What's next for Find Missing People

* More repositories: Facebook, iCloud, or video searching/cameras?
* Not as an app but as an extension? In fact, we see the app 'hidden' behind the scenes, only messaging the user or relevant authorities when a match happens. It might be a recursive program on the photo storage platforms.
* A better people database. The application is not limited to missing people, but could also cover criminals at large.
* Distributed systems and map-reduce for faster calculations
winning
## Inspiration In this program we aim to simulate different models of opinion formation within an interacting population. ## What it does We consider each individual, or agent, in the population to have some opinion between 0 and 1 which they update based on the opinions of all other agents in the population. To normalize this updating, each agent has a weight associated with every other agent such that agent i's weights sum to 1, and agent i's updated opinion is the weighted average of the current opinions (a weighted-average update of this kind is sketched below). ## How we built it We mainly used Python to visualize the data, using different libraries to graph multiple complex plots in a simple and visually appealing manner. ## Challenges we ran into Figuring out how to visualize the data using different Python libraries. ## Accomplishments that we're proud of Understanding complex mathematical models and knowing how to display them in a visually appealing manner. Implementing the models within their limitations and being aware of the parameters and how they affect the data. ## What we learned A lot of numerical analysis and its limitations. ## What's next for Data Visualization of Opinion Polarization Implementing an improved user interface in order to make it more user-friendly. Using other models so that we can generate simulations that are easy to compare with each other.
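To make the weighted-averaging update in "What it does" concrete, here is a minimal sketch of a DeGroot-style simulation in Python/NumPy; the uniform random weights, population size, and step count are illustrative assumptions rather than the exact models we implemented:

```python
# Minimal sketch of a DeGroot-style opinion update: each agent's new opinion
# is a weighted average of all current opinions, with weights normalized to sum to 1.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps = 50, 100

opinions = rng.random(n_agents)                 # initial opinions in [0, 1]
weights = rng.random((n_agents, n_agents))      # weight agent i places on agent j
weights /= weights.sum(axis=1, keepdims=True)   # normalize each agent's weights to sum to 1

history = [opinions.copy()]
for _ in range(n_steps):
    opinions = weights @ opinions               # x_i(t+1) = sum_j w_ij * x_j(t)
    history.append(opinions.copy())

# `history` can then be plotted (e.g. with matplotlib) to visualize convergence or polarization.
```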
## Inspiration For Canada, 2023 was statistically the most destructive wildfire season, with 15 million hectares of land having been burnt. And not only in Canada: around the world, forest fires have been a rising issue, causing countless damage not only to homes and people in high fire-risk areas but also taking a massive dent out of the world's forests. Current methods of wildfire detection are insufficient for fast response in remote areas. Around the world, forest fire detection relies on civilian detection and reporting. The downfall of this is that uninhabited areas around the world lack the necessary monitoring to detect and put out forest fires before they reach a less-manageable state. Part of sustainability is to reduce or even stop the harm done to the environment to maintain a livable state for not only us but all other animals on earth. Project Firefly aims to tackle the problem of easy, early, and most importantly accessible detection of forest fires in remote areas. Common forest fire detection methods currently include civilian reporting, lookout stations, and ground and air patrols, and more recently the implementation of computer database predictions and custom fire detection sensors placed throughout forests. These are all great strategies; however, when it comes to remote areas, detection by humans isn't a common occurrence during the early stages of a fire, computer prediction is only a stepping stone, and sensor implementation can be costly and tedious to set up. ## What it does Project Firefly aims to be a solution for early forest fire detection by combining all common forest fire detection methods into one easy platform. Not only is it a resource for civilians to report forest fires through image and location submission, but it is also an interface for drone detection systems. All this data helps authorities understand the spread and location of fires and also provides an early warning system for civilians within approaching wildfire zones. Firefly drone detection utilises basic camera-integrated drones to identify and report fires, not only solving the issue of needing constant personnel patrols over large remote areas, but also letting civilians contribute more easily and more impactfully to fire detection to help the areas around them. Firefly's drone capabilities work with any camera-integrated drone able to stream over RTMP, allowing connection to our custom servers and fire detection models. ## Challenges we ran into Since Project Firefly was a product produced within the limited time frame of Hack The Valley 9, we weren't able to implement all the features we would have liked, but we were able to get working CV models running with a DJI Mini drone, allowing it to detect fire from a source as small as a lighter from 20 m away. ## What's next for FIREFL.AI Ideally we want to further the reliability of fire detection through the addition of temperature sensors to help verify the existence of a forest fire and a GPS module to output more in-depth data regarding active fires. Unfortunately we didn't have access to a GPS module; however, we were able to create a proof of concept using a temperature module, rotary encoder, and ESP32, allowing us to make custom "GPS" inputs and read and react to temperature data. Depending on the increase in average temperature over a period of time, a reading can be registered as a fire and tagged with the "exact coordinates" of where it was detected.
The idea was to be able to cross-reference this with the camera footage to make the fire detection more reliable and the data more useful. In terms of the user interface, we wanted to further expand the capabilities and usefulness of the web app, not only as a medium to report fires and find resources, but also, based on currently reported fires from user submissions and drone detection, to calculate approximately how fast and where a fire would spread over a given time, so that the web app could also serve as an alert system to warn users and give them more time to prepare for disaster. In the end we want Project Firefly to fulfil its potential of helping sustain the forests of areas at risk of wildfires. Ideally this would let forest fires be detected in their early stages, reducing the harm done to the area. ## How we built it The front end of the fire detection system was developed using Next.js, offering a responsive and intuitive user experience. This interface allows users to monitor real-time footage from the drone and view relevant data on detected fires. For the core fire detection functionality, we implemented a custom YOLOv8 model. The model was fine-tuned extensively, with additional training on a specialised dataset to increase its precision in identifying fires in various environments, including dense forests and open fields. On the backend, we utilised Flask, which was responsible for managing the data flow between the drone and the detection model. The drone continuously captures video footage and sensor data, such as temperature and humidity, which is stored in the Flask-based system. This data is processed by the YOLOv8 model hosted on the backend, allowing it to analyse the incoming footage in real time. Once a fire is detected, the system flags the footage, records critical information like the time and location of the fire, and sends alerts to the web app. The integration between the frontend and backend ensures that users are promptly notified of any potential fire hazards, while the backend efficiently handles large volumes of data from the drone, making the system scalable and robust. This project demonstrates the use of machine learning and drone technology to create an automated fire detection solution aimed at preventing large-scale forest fires. ## Accomplishments that we're proud of Some of our most significant accomplishments throughout this project include successfully implementing the computer vision system using a YOLOv8 model built with PyTorch and developing a fully functional, user-friendly front-end interface. One of the key technical achievements was fine-tuning the YOLO model to work seamlessly with the drone's hardware, enabling real-time fire detection with high accuracy. Leveraging PyTorch for the model provided us with flexibility in training and optimising the model for better performance in diverse environments, such as dense forests or open landscapes. We also integrated the model with the drone's sensors through a robust backend powered by Flask, ensuring that the incoming data and video footage from the drone were processed and analysed efficiently. The model's ability to detect fires in real time, paired with sensor data, was a crucial aspect of achieving high accuracy.
On the front end, developed using Next.js, we created a smooth and intuitive user interface that allows users to monitor live footage, view sensor readings, and receive instant alerts when a fire is detected. This interface provides an accessible way for users to interact with the detection system and manage critical fire response measures. Overall, our biggest accomplishment is delivering a solution that supports sustainability. By automating fire detection and enabling early intervention, this system has the potential to prevent large-scale forest fires, protect ecosystems, and contribute to environmental conservation. ## What we learned By using OpenCV with a custom-trained YOLO algorithm for fire detection, we gained valuable experience in real-time video processing and object recognition. We learned how to set up RTMP servers using Nginx to stream video feeds from DJI drones and process them with our fire detection model. This project deepened our understanding of various object recognition algorithms and their performance, particularly how YOLO utilises metrics like Intersection over Union (IoU) to calculate confidence scores. Additionally, we explored the practical applications of computer vision (CV), grasping how it interprets and processes visual data in real time, including challenges related to accuracy and optimization for dynamic environments. This experience provided us with hands-on knowledge of real-time data handling, model deployment, and the intricacies of fine-tuning computer vision systems for specific tasks such as fire detection.
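For reference, here is a minimal sketch of the detection loop described above, reading an RTMP stream with OpenCV and running a fine-tuned YOLOv8 model via the ultralytics package; the stream URL, weights file, class name, and confidence threshold are illustrative assumptions, not our exact configuration:

```python
# Minimal sketch: pull frames from an RTMP stream and flag frames containing fire.
# Assumes ultralytics and opencv-python are installed and a fine-tuned weights file exists.
import cv2
from ultralytics import YOLO

model = YOLO("fire_best.pt")                            # placeholder: fine-tuned YOLOv8 weights
cap = cv2.VideoCapture("rtmp://localhost/live/drone")   # placeholder stream URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        if results.names[int(box.cls)] == "fire" and float(box.conf) > 0.5:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            # here the backend would log the detection and alert the web app
            print("fire detected at", (x1, y1, x2, y2), "conf", float(box.conf))

cap.release()
```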
## Inspiration For this hackathon, we wanted to build something that could have a positive impact on its users. We've all been to university ourselves, and we understand the toll stress takes on our minds. Demand for mental health services among youth across universities has increased dramatically in recent years. A Ryerson study of 15 universities across Canada shows that all but one university increased their budget for mental health services. The average increase has been 35 per cent. A major survey of over 25,000 Ontario university students done by the American College Health Association found that there was a 50% increase in anxiety, a 47% increase in depression, and an 86% increase in substance abuse since 2009. This can be attributed to the increasingly competitive job market that doesn't guarantee you a job if you have a degree, increasing student debt and housing costs, and a weakening Canadian middle class and economy. It can also be attributed to social media, where youth are becoming increasingly digitally connected to environments like Instagram. People on Instagram only share the best, the funniest, and most charming aspects of their lives, while leaving the boring beige stuff like the daily grind out of it. This indirectly perpetuates the false narrative that everything you experience in life should be easy, when in fact, life has its ups and downs. ## What it does One good way of dealing with overwhelming emotion is to express yourself. Journaling is an often overlooked but very helpful tool because it can help you manage your anxiety by helping you prioritize your problems, fears, and concerns. It can also help you recognize your triggers and learn better ways to control them. This brings us to our application, which firstly lets users privately journal online. We implemented the IBM Watson API to automatically analyze the journal entries. Users can receive automated tonal and personality data which can indicate whether they're feeling depressed or anxious. It is also key to note that medical practitioners only have access to the results, and not the journal entries themselves. This is powerful because it takes away a common anxiety felt by patients, who are reluctant to take the first step in healing themselves because they may not feel comfortable sharing personal and intimate details up front. MyndJournal allows users to log on to our site and express themselves freely, exactly as if they were writing a journal. The difference is that every entry in a person's journal is sent to IBM Watson's natural language processing tone-analysis APIs, which generate a data-driven picture of the person's mindset. The results of the API are then rendered into a chart to be displayed to medical practitioners. This way, all the user's personal details/secrets remain completely confidential while still providing enough data for counsellors to take action if needed. ## How we built it On the back end, all user information is stored in a PostgreSQL users table. Additionally, all journal entry information is stored in a results table. This aggregate data can later be used to detect trends across university lifecycles. An EJS templating engine is used to render the front end. After user authentication, a submitted journal entry is sent to the back end and fed asynchronously into the IBM Watson language processing APIs. The results are then stored in the results table, associated with a user_id (one-to-many relationship).
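A minimal sketch of that Watson step, shown here with the ibm-watson Python SDK's Tone Analyzer client purely for illustration (our actual implementation lives in the Node back end, the service has since been retired by IBM, and the API key, URL, and version date are placeholders):

```python
# Minimal sketch: send a journal entry to Watson Tone Analyzer and keep only the scores.
# Assumes the ibm-watson package; credentials and version date are placeholders.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("<api-key>"),   # placeholder
)
tone_analyzer.set_service_url("<service-url>")      # placeholder

entry = "Midterms are crushing me and I can't sleep."
result = tone_analyzer.tone(
    tone_input={"text": entry}, content_type="application/json"
).get_result()

# Store only the aggregate tone scores, never the raw entry, alongside the user_id.
scores = {t["tone_id"]: t["score"] for t in result["document_tone"]["tones"]}
print(scores)  # e.g. {'sadness': 0.62, 'tentative': 0.58}
```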
Data is pulled from the database to be serialized and displayed intuitively on the front end. All data is persisted. ## Challenges we ran into Rendering the data into a chart that was both visually appealing and provided clear insights. Storing all API results in the database and creating join tables to pull data out. ## Accomplishments that we're proud of Building an entire web application within 24 hours. Data is persisted in the database! ## What we learned IBM Watson APIs ChartJS The different parts of a full tech stack and how everything works together ## What's next for MyndJournal A key feature we wanted to add is for the web app to automatically book appointments with appropriate medical practitioners (like nutritionists or therapists) if the tonal and personality results come back negative. This would streamline the appointment-making process and make it easier for people to get access and gain referrals. Another feature we would have liked to add is for universities to be able to access insight into which courses or programs are causing the most problems for the most students, so that policymakers, counsellors, and people in positions of authority could make proper decisions and allocate resources accordingly. Funding please
losing
## Inspiration We find ourselves pulling up to Weldon, looking for a seat for roughly 20 minutes, and then leaving in despair. We built this tool to help other students forget about this struggle in their studious endeavours. ## What it does Asks users to input times they are available throughout the day, then uses data from an API that tracks when locations are the busiest, and recommends where to go based on the results. ## How we built it HTML, CSS, and JS for the site; Python for the web scraper ## Challenges we ran into Implementing pretty much everything; not much was functional, but we are proud of our idea ## Accomplishments that we're proud of Working together as a team and having fun! We got through like 5 episodes of Rick and Morty ## What we learned 1. have an idea before starting 2. know what tools you are going to use beforehand and how you are going to implement your idea 3. software is always changing, we learned this from listening to the sponsors ## What's next for Western Schedule Optimizer (WSO) chapter 11
## Inspiration I hate getting out of bed, but once I get rolling, I want to finish everything I have to for the day in one fell swoop. I wanted to make a web app that finds the most compact way to go about your day so you don't have awkward breaks. ## What it does Users enter three types of tasks: **fixed** which are at a certain place and time (a lecture), **anytime** which are at a location but can be done whenever (going to the store), and **filler** which are tasks that can be done anywhere, anytime, like emails, readings or eating the lunch you brought to campus. ## How I built it This was the first time I had used JavaScript, so I created the page in just one JS file. ## Challenges I ran into An important part of scheduling tasks is the travel time between locations. I wanted to include Google Maps' Distance Matrix API but I had trouble setting it up, so I opted to have the user enter travel times between all locations instead. I would like to finish this application properly by using the API because right now a lot of work is left to the user. ## Accomplishments that I'm proud of Getting used to JavaScript was fun and I'm glad I started on the basics. While the page doesn't do anything groundbreaking, I think the scheduling algorithm is a little tough to implement and I'm glad it works fairly well. ## What I learned Dynamically adding elements to a web page is easier than I thought and really helps out with user interaction. I should have spent more time creating objects because the different types of tasks were hard to handle together and I mixed them up a lot. ## What's next for DayPath * Print path / open in easy page to screenshot / calendar app integration?? * Adding Google Maps API for automatic distances * Sanitize inputs * Better CSS * Make use of start time field * Fix pathing errors (there are a few cases where you have to walk 15 minutes in 5, and so on) * See if it helps me!
## Inspiration We've all had that moment when you're with your friends, have the time, but don't know what to do! Well, SAJE will remedy that. ## What it does We are an easy-to-use website that will take your current location and interests, and generate a custom itinerary that will fill the time you have to kill. Based on the time interval you indicated, we will find events and other things for you to do in the local area - factoring in travel time. ## How we built it This webapp was built using a `MEN` stack. The frameworks used include: MongoDB, Express, and Node.js. Outside of the basic infrastructure, multiple APIs were used to generate content (specifically events) for users. These APIs were Amadeus, Yelp, and Google-Directions. ## Challenges we ran into Some challenges we ran into revolved around using APIs, reading documentation and getting acquainted with someone else's code. Merging the frontend and backend also proved to be tough, as members had to find ways of integrating their individual components while ensuring all functionality was maintained. ## Accomplishments that we're proud of We are proud of a final product that we legitimately think we could use! ## What we learned We learned how to write recursive asynchronous fetch calls (trust me, after 16 straight hours of code, it's really exciting)! Outside of that we learned to use APIs effectively. ## What's next for SAJE Planning In the future we can expand to include more customizable parameters, better form styling, or query more APIs to be a true event aggregator.
losing
## Inspiration We love travelling. Two of us are actually travelling across the country to come to CalHacks. We were also inspired by Reddit posts on /r/Berkeley describing some students feeling really lonely, despite being surrounded by people. In our generation it seems that a growing concern is the inability of people to meet new people. Despite that, we see phenomena like Ingress and Pokémon GO that serendipitously bring people together. We wanted to capitalize on the problems we saw and on what we enjoy doing by creating "Hello." ## What it does We developed a travel app that brings you and other travellers to various local spots: when you arrive at an area, the app asks you to find each other, then as a group you play a series of minigames to learn more about each other, level up, and ultimately capture the location. Our goal was not to necessarily force people to become friends, but give them the *opportunity* to meet new people. ## How we built it We have a React Native client app for iOS and Android. We decided to use React Native for its ease of testing and cross-platform availability. This connects to an Express.js backend that manages all the socket connections across the various client-side applications, generates the monster locations, and holds all user and game data. ## What's next for "Hello."? Goodbye?
## Inspiration We sat down and thought: okay, we will come back home from CalHacks. And what's the very next action? One of us will want to go play soccer, another teammate may want to go to the bar and talk about Machine Learning with someone. And we understood there are tons of wonderful and interesting people out there (sometimes even in the closest house!), who at a certain point in time want to do the same thing as you, or a complementary one. And today, unfortunately, there is no way we can easily connect with them and find the people we need to be with exactly at the time we need it - just because there is a barrier, just because we do not share "friends of friends" and so on. And doing something together can be a great opportunity to get to know each other better. By the way, 22% of millennials (people just like us) reported that they do not have true friends who know them well. We want to solve this problem. ## Vision Our platform is made for everyone regardless of any social criteria and it serves the sole purpose of making people happier by helping them spend their time in the best possible way, eliminating the feeling of loneliness. We believe our platform can help get people out of their gadgets and bring more "real life" into our lives! We also think that many people are amazing and wonderful, but you just haven't had the chance to get to know them yet, and meeting these people any later than right now is a truly huge loss. ## What it does It lets a host organize an event and accept/decline sign-ups for it, regardless of what this event is. All the data is synchronized in real time, made possible by leveraging the enormous power of Firebase triggers and listeners. The event can be anything, from a study group for an EECS class at Berkeley at 7 to a suite of bass, piano, and drum players for a guitarist in 30 minutes at his house! This project is **global** and will make a huge positive impact on the life of, without exaggeration, every individual. We connect people based on geolocation, and not only that. We make people happier and increase the quality of their time and entertainment. ## How we built it We created a Node.js web server on Google Cloud App Engine, deployed it, and connected it to a remote cloud CockroachDB cluster, where we store the history of users' searches (including the ones made by voice, for which we used the Google Cloud Speech-to-Text API - a rough sketch of this call appears below). We stored events' and users' data in the Firebase Realtime Database. To keep it sweet and simple, we used Expo to create the frontend (aka the mobile app) and made the Expo app talk to both our App Engine server and the Firebase serverless infrastructure. We hugely rely on the real-time functionality of Firebase. Think of huge chunks of data flying around here and there, empowering people to get the most out of their time and to be happier. This is us, yeah :) ## Challenges we ran into 1. Connecting to the remote CockroachDB cluster from App Engine. The connection string method does not work at all, so we spent some time figuring out that we should use the separate-parameters method instead. 2. Firebase Realtime Database CRUD turned out to be more complicated than we were told and expected it to be 3. Configuring Firebase social auth took a lot of time, because of permissions issues in the Realtime Database 4. Understanding React Native mechanics was very challenging for all of us, but we enjoyed some of its advantages over native apps 5.
There was a giant merge conflict late on Saturday night that was very hard to resolve without losing someone's work, but we managed to sort it out 6. We were not really able to get much help on Expo and how it is different from plain React Native at the beginning of the hackathon 7. Some environment variables caused problems while working with Google Cloud's Speech-to-Text API and putting the data into the CockroachCloud cluster ## Accomplishments that we are proud of 1. We learned a lot about React Native and Expo 2. We were able to find agreements and treated each other with respect throughout the event 3. We were able to identify the strongest parts of each teammate's skillset and delegate the tasks properly in order to save time and effort by focusing on the business logic, not technical details 4. We resolved the merge conflict that occurred 5. Finally, we made it! It's actually working! We learned so many APIs, learned cross-platform mobile, became good friends and just had a great time! ## What we learned 1. There are many ready-made solutions out there, and sometimes, if we do not find them, we can spend hours reinventing the wheel ==> a good search prior to the start is a really good practice and almost a necessity 2. Each technology has its advantages and disadvantages. It is always a trade-off 3. A decision, once made, sticks with the hackathon project until the end, since newly integrated components/libraries/frameworks have to be compatible with the existing ones 4. One person can do a good project. A team can do a life-changing product. 5. The ethics of technology is a huge question one should consider when using various tools. Always put people/users first. ## What's next for calhacks 1. Add image recognition: say, pointing your phone at a guitar and immediately seeing other musicians looking for a guitarist right now close to you (as one application of this) 2. We can PROBABLY scale to the digital world too, for example, connecting gamers to play a certain game at a certain time, but this contradicts a little with our vision of bringing more "real" aspects to our lives.. 3. We want to keep performance good given the huge stream of new users coming soon (seriously, this is for everyone).
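As a rough sketch of the voice-search piece mentioned in "How we built it" (our actual server is Node.js; this Python version, with placeholder audio settings and file name, only illustrates the Speech-to-Text call):

```python
# Minimal sketch: transcribe a short voice query with Google Cloud Speech-to-Text.
# Assumes google-cloud-speech is installed and application credentials are configured.
from google.cloud import speech

client = speech.SpeechClient()

with open("query.wav", "rb") as f:          # placeholder audio file
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumption about the recording
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)  # text to store as the user's search
```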
## Why We Created **Here** As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence. Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule. Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle. Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections. ## What it does **Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient. ## How we built it UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality. Frontend: Our app is built on React Native and Expo. Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js. All team members contributed code! ## Challenges Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch. Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change. ## Accomplishments that we're proud of We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into. 
## What we learned Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths). ## What's next for **Here** One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while casually, conveniently reconnect. The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**!
losing
## Inspiration We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear a voice in your native tongue. LingoVision was born from these inspirations, and these inspirations were born from our experiences. ## What it does LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (a double blink). Those sentences are played back as an audio translation (either through an earpiece, or out loud with a speaker) in your language of choice. Additionally, LingoVision stores all of the old photos and translations for future review and study. ## How we built it We used the AdHawk MindLink eye-tracking glasses to map the user's point of view and detect where exactly in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer what text the user is most likely looking at, based on the vector projected from the glasses and the available bounding boxes from CV analysis (a small sketch of this matching idea appears below). After that, we pipe the text output into the DeepL translator API for the language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user. We use Firebase Cloud Firestore to keep track of global settings, such as the output language, and also a log of translation events for future reference. ## Challenges we ran into * Getting the eye tracker properly calibrated (it was always a bit off from our view) * Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!) ## Accomplishments that we're proud of * Hearing the first audio playback of a translation was exciting * Seeing the system work completely hands-free while walking around the event venue was super cool! ## What we learned * We learned how to work within the limitations of the eye tracker ## What's next for LingoVision One of the next steps in our plan for LingoVision is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a dictionary. Another goal is to eliminate the need to be tethered to a computer. A computer is currently used due to ease of development and software constraints. If a user were able to simply use the eye-tracking glasses with their cell phone, usability would improve significantly.
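For illustration, here is a minimal sketch of the gaze-to-text matching idea mentioned in "How we built it": given a gaze point projected into the camera frame and OCR bounding boxes from the Vision API, pick the closest block of text. The box format, distance rule, and sample values are simplifying assumptions, not our exact algorithm:

```python
# Minimal sketch: choose the OCR text block closest to the projected gaze point.
# Boxes are assumed to be axis-aligned (x_min, y_min, x_max, y_max) in pixel coordinates.
from dataclasses import dataclass

@dataclass
class TextBlock:
    text: str
    box: tuple  # (x_min, y_min, x_max, y_max)

def distance_to_box(px, py, box):
    """0 if the point is inside the box, otherwise distance to the nearest edge."""
    x_min, y_min, x_max, y_max = box
    dx = max(x_min - px, 0, px - x_max)
    dy = max(y_min - py, 0, py - y_max)
    return (dx * dx + dy * dy) ** 0.5

def pick_gazed_text(gaze_xy, blocks):
    return min(blocks, key=lambda b: distance_to_box(*gaze_xy, b.box))

blocks = [TextBlock("Sortie", (40, 120, 180, 160)), TextBlock("Ascenseur", (300, 90, 520, 140))]
print(pick_gazed_text((350, 100), blocks).text)  # -> "Ascenseur"
```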
## Inspiration Learning a new instrument is hard. Inspired by games like Guitar Hero, we wanted to make a fun, interactive music experience but also have it translate to actually learning a new instrument. We chose the violin because most of our team members had never touched a violin prior to this hackathon. Learning the violin is also particularly difficult because there are no frets, such as those on a guitar, to help guide finger placement. ## What it does Fretless is a modular attachment that can be placed onto any instrument. Users can upload any MIDI file through our GUI. The file is converted to music numbers and sent to the Arduino, which then lights up LEDs at locations corresponding to where the user needs to press down on the string. ## How we built it Fretless is composed of software and hardware components. We used a Python MIDI library to convert MIDI files into music numbers readable by the Arduino (a sketch of this step appears below). Then, we wrote an Arduino script to match the music numbers to the corresponding lights. Because we were limited by the space on the violin board, we could not fit four rows of LEDs (one for each string). Thus, we implemented logic to color-code the lights to indicate which string to press. ## Challenges we ran into One of the challenges we faced is that only one member of our team knew how to play the violin. Thus, the rest of the team was essentially learning how to play the violin while coding the functionality and configuring the electronics of Fretless at the same time. Another challenge we ran into was the lack of hardware available. In particular, we weren't able to check out as many LEDs as we needed. We also needed some components, like a female DC power adapter, that were not present at the hardware booth. And so, we had limited resources and had to make do with what we had. ## Accomplishments that we're proud of We're really happy that we were able to create a working prototype together as a team. Some of the members of the team are also really proud of the fact that they are now able to play Ode to Joy on the violin! ## What we learned Do not crimp lights too hard. Things are always harder than they seem to be. Ode to Joy on the violin :) ## What's next for Fretless We can make the LEDs smaller and less intrusive on the violin, ideally an LED pad that covers the entire fingerboard. Also, we would like to expand the software to include more instruments, such as cello, bass, guitar, and pipa. Finally, we would like to incorporate a PDF-sheet-music-to-MIDI converter so that people can learn to play a wider range of songs.
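A minimal sketch of the MIDI-to-Arduino side, assuming the mido library for parsing and pyserial for sending note numbers to the board; we only said "a Python MIDI library" above, so these specific packages, the serial port, and the one-byte-per-note protocol are illustrative assumptions:

```python
# Minimal sketch: extract note-on events from a MIDI file and stream the note numbers
# to the Arduino over serial, which maps them to LED positions/colors.
import mido
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600)   # placeholder port and baud rate
mid = mido.MidiFile("ode_to_joy.mid")       # placeholder file

for msg in mid.play():                      # mido paces messages in real time
    if msg.type == "note_on" and msg.velocity > 0:
        ser.write(bytes([msg.note]))        # send the MIDI note number (0-127)

ser.close()
```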
## Inspiration Our inspiration for this project was the technological and communication gap between healthcare professionals and patients, restricted access to both one's own health data and physicians, misdiagnosis due to lack of historical information, as well as rising demand for distance healthcare due to the lack of physicians in rural areas and increasing patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy and money and empower self-care for both healthcare professionals and patients by automating standard vitals measurement and providing simple data visualization and a communication channel. ## What it does What eVital does is get up-to-date daily data about our vitals from wearable technology and mobile health and send that data to our family doctors, practitioners or caregivers so that they can monitor our health. eVital also allows for seamless communication and monitoring by allowing doctors to assign tasks and prescriptions and to monitor these through the app. ## How we built it We built the app on iOS using data from the HealthKit API, which leverages data from the Apple Watch and the Health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, TensorFlow, and Python (for a bit of machine learning). ## Challenges we ran into The challenges we ran into are the following: 1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws, and the vast possibilities of the healthcare field. 2) Deploying using Azure 3) Having to use the vanilla React Native installation ## Accomplishments that we're proud of We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we had. We are also proud that our idea is not only cool but actually solves a real-life problem that we can work on in the long term. ## What we learned We learned how to manage time (or how to do it better next time). We learned a lot about the healthcare industry and its missing gaps in terms of pain points and possible technological intervention. We learned how to improve our cross-functional teamwork, since we are a team of 1 Designer, 1 Product Manager, 1 Back-End Developer, 1 Front-End Developer, and 1 Machine Learning Specialist. ## What's next for eVital Our next steps are the following: 1) We want to be able to implement real-time updates for both doctors and patients. 2) We want to be able to integrate machine learning into the app for automated medical alerts. 3) Add more data visualization and data analytics. 4) Adding a functional log-in 5) Adding functionality for different user types aside from doctors and patients (caregivers, parents, etc.) 6) We want to add push notifications for patients' tasks for better monitoring.
winning
## Inspiration Recent mass shooting events are indicative of a rising, unfortunate trend in the United States. During a shooting, someone may be killed every 3 seconds on average, while it takes authorities an average of 10 minutes to arrive on a crime scene after a distress call. In addition, cameras and live closed circuit video monitoring are almost ubiquitous now, but are almost always used for post-crime analysis. Why not use them immediately? With the power of Google Cloud and other tools, we can use camera feed to immediately detect weapons real-time, identify a threat, send authorities a pinpointed location, and track the suspect - all in one fell swoop. ## What it does At its core, our intelligent surveillance system takes in a live video feed and constantly watches for any sign of a gun or weapon. Once detected, the system immediately bounds the weapon, identifies the potential suspect with the weapon, and sends the authorities a snapshot of the scene and precise location information. In parallel, the suspect is matched against a database for any additional information that could be provided to the authorities. ## How we built it The core of our project is distributed across the Google Cloud framework and AWS Rekognition. A camera (most commonly a CCTV) presents a live feed to a model, which is constantly looking for anything that looks like a gun using GCP's Vision API. Once detected, we bound the gun and nearby people and identify the shooter through a distance calculation. The backend captures all of this information and sends this to check against a cloud-hosted database of people. Then, our frontend pulls from the identified suspect in the database and presents all necessary information to authorities in a concise dashboard which employs the Maps API. As soon as a gun is drawn, the authorities see the location on a map, the gun holder's current scene, and if available, his background and physical characteristics. Then, AWS Rekognition uses face matching to run the threat against a database to present more detail. ## Challenges we ran into There are some careful nuances to the idea that we had to account for in our project. For one, few models are pre-trained on weapons, so we experimented with training our own model in addition to using the Vision API. Additionally, identifying the weapon holder is a difficult task - sometimes the gun is not necessarily closest to the person holding it. This is offset by the fact that we send a scene snapshot to the authorities, and most gun attacks happen from a distance. Testing is also difficult, considering we do not have access to guns to hold in front of a camera. ## Accomplishments that we're proud of A clever geometry-based algorithm to predict the person holding the gun. Minimized latency when running several processes at once. Clean integration with a database integrating in real-time. ## What we learned It's easy to say we're shooting for MVP, but we need to be careful about managing expectations for what features should be part of the MVP and what features are extraneous. ## What's next for HawkCC As with all machine learning based products, we would train a fresh model on our specific use case. Given the raw amount of CCTV footage out there, this is not a difficult task, but simply a time-consuming one. This would improve accuracy in 2 main respects - cleaner identification of weapons from a slightly top-down view, and better tracking of individuals within the frame. 
SMS alert integration is another feature that we could easily plug into the surveillance system, further improving reaction time.
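To illustrate the geometry-based step described above (not the exact production code), here is a sketch that, given the detected gun box and person boxes from the vision model, picks the person whose bounding-box center is nearest to the gun; the box format and sample values are assumptions for illustration:

```python
# Minimal sketch: infer the likely gun holder as the person whose bounding-box
# center is closest to the gun's bounding-box center. Boxes: (x_min, y_min, x_max, y_max).
import math

def center(box):
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def likely_holder(gun_box, person_boxes):
    gx, gy = center(gun_box)
    return min(
        range(len(person_boxes)),
        key=lambda i: math.dist((gx, gy), center(person_boxes[i])),
    )

gun = (410, 300, 470, 340)
people = [(100, 150, 220, 480), (380, 140, 500, 490), (600, 160, 700, 470)]
print("suspect index:", likely_holder(gun, people))  # -> 1
```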
## Inspiration Gun violence is a dire problem in the United States. When looking at case studies of mass shootings in the US, there is often surveillance footage of the shooter *with their firearm* **before** they started to attack. That's both the problem and the solution. Right now, surveillance footage is used as an "after-the-fact" resource. It's used to *look back* at what transpired during a crisis. This is because even the biggest of surveillance systems only have a handful of human operators who simply can't monitor all the incoming footage. But think about it: most schools, malls, etc. have security cameras in almost every hallway and room. It's a wasted resource. What if we could use surveillance footage as an **active and preventive safety measure**? That's why we turned *surveillance* into **SmartVeillance**. ## What it does SmartVeillance is a system of security cameras with *automated firearm detection*. Our system simulates a CCTV network that can intelligently classify and communicate threats for a single operator to easily understand and act upon. When a camera in the system detects a firearm, the camera number is announced and is displayed on every screen. The screen associated with the camera gains a red banner for the operator to easily find. The still image from the moment of detection is displayed so the operator can determine if a firearm is actually present or if it was a false positive. Lastly, the history of detections among cameras is displayed at the bottom of the screen so that the operator can understand the movement of the shooter when informing law enforcement. ## How we built it Since we obviously can't have real firearms here at TreeHacks, we used IBM's Cloud Annotation tool to train an object detection model in TensorFlow for *printed cutouts of guns*. We integrated this into a React.js web app to detect firearms visible in the computer's webcam. We then used PubNub to communicate between computers in the system when a camera detected a firearm, the image from the moment of detection, and the recent history of detections. Lastly, we built onto the React app to add features like object highlighting, sounds, etc. ## Challenges we ran into Our biggest challenge was creating our gun detection model. It was really poor the first two times we trained it, and it basically recognized everything as a gun. However, after some guidance from some lovely mentors, we understood the different angles, lightings, etc. that go into training a good model. On our third attempt, we were able to take that advice and create a very reliable model. ## Accomplishments that we're proud of We're definitely proud of having excellent object detection at the core of our project despite coming here with no experience in the field. We're also proud of figuring out to transfer images between our devices by encoding and decoding them from base64 and sending the String through PubNub to make communication between cameras almost instantaneous. But above all, we're just proud to come here and build a 100% functional prototype of something we're passionate about. We're excited to demo! ## What we learned We learned A LOT during this hackathon. At the forefront, we learned how to build a model for object detection, and we learned what kinds of data we should train it on to get the best model. We also learned how we can use data streaming networks, like PubNub, to have our devices communicate to each other without having to build a whole backend. 
## What's next for SmartVeillance Real cameras and real guns! Legitimate surveillance cameras are much better quality than our laptop webcams, and they usually capture a wider range too. We would love to see the extent of our object detection when run through these cameras. And obviously, we'd like to see how our system fares when trained to detect real firearms. Paper guns are definitely appropriate for a hackathon, but we have to make sure SmartVeillance can detect the real thing if we want to save lives in the real world :)
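For reference, a minimal sketch of the image hand-off trick described in our accomplishments above: encode the detection frame as base64 so it can ride along in an ordinary JSON message between cameras and the dashboard. Our real client is JavaScript with PubNub; `publish` here is a hypothetical stand-in for the PubNub publish call, and the file name is a placeholder:

```python
# Minimal sketch: package a detection snapshot as base64 text so it can be sent
# through a JSON message channel and decoded on the operator's dashboard.
import base64
import json

def build_detection_message(camera_id, jpeg_path):
    with open(jpeg_path, "rb") as f:
        snapshot_b64 = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"camera": camera_id, "snapshot": snapshot_b64})

def decode_snapshot(message_json):
    msg = json.loads(message_json)
    return msg["camera"], base64.b64decode(msg["snapshot"])  # raw JPEG bytes again

payload = build_detection_message(3, "frame.jpg")  # placeholder snapshot file
# publish("detections", payload)  # hypothetical stand-in for the PubNub publish call
```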
## Inspiration After seeing how things unfolded due to crowd crushes at a music concert like Astroworld, and more recently in Itaewon, South Korea, we were compelled to think of a solution in order to prevent such disasters from happening again. From our observations, there seemed to be a lack of infrastructure to properly perform crowd control. Our team concluded that these catastrophic events could be prevented with the help of an app which would allow event staff or security guards to act quickly when danger arises. ## What it does Using multiple-person pose detection, our app tracks people's locations and maps them to a heatmap on the frontend. Using the heatmap, staff can visualize the current state of the location they are looking at. Users can switch between views, which correspond to camera views and locations. The application also allows staff to see the location of their team members and communicate with them via messaging or by sending emergency pings. ## How we built it In the back end, we use TensorFlow and the MoveNet Lightning model to perform multiple-person pose detection (sketched below). Our Flask API processes GET requests to send an array of points corresponding to people's locations on the heatmap. Our React front end sends GET requests to the Flask API to receive the coordinates and compute the heatmap. We also use Firestore to store the user's plans and favorites. ## Challenges we ran into The main challenge we ran into was the limitations of MoveNet Lightning. It is a fast model, but it only detects up to 6 people, which does not allow us to detect enough people in the crowd. Using another model, such as OpenPose, turned out to be quite challenging and time-consuming to set up due to outdated code, and training our own model was also not an option. ## Accomplishments that we're proud of We are proud of having created a large web app where most functionalities are implemented. ## What we learned Some of us learned how to use React for the first time and how to work with a machine learning model. ## What's next for CrowdSpace We have many ideas to make our app more reliable. By using OpenPose or a trained model, we wish to achieve slightly more reliable results using computer vision. Also, we were already aware of the limitations of computer vision, which is why we think it would be more accurate to use techniques such as Wi-Fi positioning, Bluetooth positioning or GPS positioning.
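A minimal sketch of the pose-to-heatmap step, loading MoveNet from TensorFlow Hub and turning each detected person into one point for the heatmap; the Hub handle, input size, score threshold, and output layout reflect our understanding of the MultiPose Lightning model and should be treated as assumptions rather than our exact backend:

```python
# Minimal sketch: run MoveNet MultiPose Lightning on one frame and emit one point
# per detected person (here: the average of that person's keypoint coordinates).
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/movenet/multipose/lightning/1")
movenet = model.signatures["serving_default"]

frame = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"))                 # placeholder image
inp = tf.cast(tf.image.resize_with_pad(frame[tf.newaxis], 256, 256), tf.int32)

out = movenet(inp)["output_0"].numpy()[0]   # assumed shape (6, 56): up to 6 people
points = []
for person in out:
    keypoints = person[:51].reshape(17, 3)  # 17 keypoints of (y, x, score)
    score = person[55]                      # overall detection score for this person
    if score > 0.2:
        ys, xs = keypoints[:, 0].mean(), keypoints[:, 1].mean()
        points.append({"x": float(xs), "y": float(ys)})                 # normalized [0, 1]

print(points)  # roughly what the Flask endpoint would return for the heatmap
```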
winning
## The Gist We combine state-of-the-art LLM/GPT detection methods with video frame interpolation and image-feature models to detect AI-generated video with 92% accuracy. ## Inspiration As image and video generation models become more powerful, they pose a strong threat to traditional media norms of trust and truth. OpenAI's SORA model, released in the last week, produces extremely realistic video to fit any prompt, and opens up pathways for malicious actors to spread unprecedented misinformation regarding elections, war, etc. ## What it does BinoSoRAs is a novel system designed to authenticate the origin of videos through advanced frame interpolation and deep learning techniques. This methodology is an extension of the state-of-the-art Binoculars framework by Hans et al. (January 2024), which employs dual LLMs to differentiate human-generated text from machine-generated counterparts based on the concept of textual "surprise". BinoSoRAs extends this idea to the video domain by utilizing **Fréchet Inception Distance (FID)** to compare the original input video against a model-generated video. FID is a common metric which measures the quality and diversity of images using an Inception v3 convolutional neural network. We create model-generated video by feeding the suspect input video into a **Fast Frame Interpolation (FLAVR)** model, which interpolates every 8 frames given start and end reference frames. We show that this interpolated video is more similar (i.e. "less surprising") to authentic video than to artificial content when compared using FID. The resulting FID + FLAVR two-model combination is an effective framework for detecting generated video such as that from OpenAI's SoRA. This innovative application enables a root-level analysis of video content, offering a robust mechanism for distinguishing between human-generated and machine-generated videos. Specifically, by using the Inception v3 and FLAVR models, we are able to look deeper into the shared training-data commonalities present in generated video. ## How we built it Rather than simply analyzing the outputs of generative models, a common approach for detecting AI content, our methodology leverages patterns and weaknesses that are inherent to the common training data necessary to make these models in the first place. Our approach builds on the **Binoculars** framework developed by Hans et al. (Jan 2024), which is a highly accurate method of detecting LLM-generated tokens. Their state-of-the-art LLM text detector makes use of two assumptions: simply "looking" at text of unknown origin is not enough to classify it as human- or machine-generated, because a generator aims to make differences undetectable. Additionally, *models are more similar to each other than they are to any human*, in part because they are trained on extremely similar massive datasets. The natural conclusion is that an observer model will find human text to be very perplexing and surprising, while it will find generated text to be exactly what it expects. We used the Fréchet Inception Distance between the unknown video and the interpolated, generated video as a metric to determine whether a video is generated or real. FID is built on Inception v3, the top-performing classifier trained to label an image as one of 1,000 objects, and compares videos in the space of its image features.
After extracting Inception v3 features for every frame in the unknown video and the interpolated video, FID calculates the Fréchet distance between the two Gaussian feature distributions, a high-dimensional measure of similarity between distributions (a small numerical sketch of this computation appears below). FID has previously been shown to correlate extremely well with human recognition of images, as well as to increase as expected with visual degradation of images. We also used the open-source model **FLAVR** (Flow-Agnostic Video Representations for Fast Frame Interpolation), which is capable of single-shot multi-frame prediction and reasoning about non-linear motion trajectories. With fine-tuning, this effectively served as our generator model, which created the comparison video necessary for the final FID metric. With an FID threshold distance of 52.87, the true negative rate (real videos correctly identified as real) was found to be 78.5%, and the false positive rate (real videos incorrectly identified as fake) was found to be 21.4%. This computes to an overall accuracy of 91.67%. ## Challenges we ran into One significant challenge was developing a framework for translating the Binoculars metric (Hans et al.), designed for detecting tokens generated by large language models, into a practical score for judging AI-generated video content. Ultimately, we settled on our current framework of utilizing an observer and a generator model to get an FID-based score; this method allows us to effectively judge the quality of movement between consecutive video frames by leveraging the distance between image feature vectors to classify suspect videos. ## Accomplishments that we're proud of We're extremely proud of our final product: BinoSoRAs is a framework that is not only effective, but also highly adaptive to the difficult challenge of detecting AI-generated videos. This type of content will only continue to proliferate across the internet as text-to-video models such as OpenAI's SoRA get released to the public: in a time when anyone can fake videos effectively with minimal effort, these kinds of detection solutions and tools are more important than ever, *especially in an election year*. BinoSoRAs represents a significant advancement in video authenticity analysis, combining the strengths of FLAVR's flow-free frame interpolation with the analytical precision of FID. By adapting the Binoculars framework's methodology to the visual domain, it sets a new standard for detecting machine-generated content, offering valuable insights for content verification and digital forensics. The system's efficiency, scalability, and effectiveness underscore its potential to address the evolving challenges of digital content authentication in an increasingly automated world. ## What we learned This was the first-ever hackathon for all of us, and we all learned many valuable lessons about generative AI models and detection metrics such as Binoculars and Fréchet Inception Distance. Some team members also got new exposure to data mining and analysis (through data-handling libraries like NumPy, PyTorch, and TensorFlow), in addition to general knowledge about processing video data via OpenCV. Arguably more importantly, we got to experience what it's like working in a team and iterating quickly on new research ideas. The process of vectoring and understanding how to de-risk our most uncertain research questions was invaluable, and we are proud of our teamwork and determination that ultimately culminated in a successful project.
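To make the FID comparison concrete, here is a minimal sketch of the distance itself, computed from two sets of Inception-style feature vectors; feature extraction is omitted, and in our pipeline those features come from Inception v3 applied to frames of the suspect and interpolated videos. The dimensions and random stand-in features below are purely illustrative:

```python
# Minimal sketch: Fréchet Inception Distance between two sets of feature vectors.
# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrtm(C1 @ C2))
import numpy as np
from scipy import linalg

def fid(features_a: np.ndarray, features_b: np.ndarray) -> float:
    mu_a, mu_b = features_a.mean(axis=0), features_b.mean(axis=0)
    cov_a = np.cov(features_a, rowvar=False)
    cov_b = np.cov(features_b, rowvar=False)

    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):        # numerical noise can introduce tiny imaginary parts
        covmean = covmean.real

    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Example with random stand-in features (real use: Inception v3 pool features per frame).
rng = np.random.default_rng(0)
print(fid(rng.normal(size=(200, 64)), rng.normal(size=(200, 64))))
```

A single threshold on this score (52.87 in our experiments) then separates videos whose interpolations look "expected" from those that do not.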
## What's next for BinoSoRAs BinoSoRAs is an exciting framework that has obvious and immediate real-world applications, in addition to further research avenues to explore. The aim is to create a highly accurate model that can eventually be integrated into web applications and news articles to give immediate and accurate warnings or feedback about AI-generated content. This can mitigate the risk of misinformation in a time when anyone with basic computer skills can spread malicious content, and our hope is that we can build on this idea to prove our belief that, despite its misuse, AI is a fundamental force for good.
# Fake Bananas

Fake news detection made simple and scalable for real people.

## Getting Started

I would strongly recommend a `conda` environment in order to easily install our older version of TensorFlow. We used `TensorFlow 0.12.1` for backwards compatibility with previous work in the field. Newer versions of TensorFlow may work, but certainly not 'out of the box'.

```
# download and install anaconda
# python 3.5 is required for this version of TensorFlow
conda create --name FakeBananas python=3.5 numpy=1.11.3 scikit-learn=0.18.1 pandas
source activate FakeBananas
# note: older versions of TF (like 0.10) require less modification to use than newer ones
pip install tensorflow==0.12.1 eventregistry watson_developer_cloud py-ms-cognitive  # IBM api signup required for watson; py-ms-cognitive is the microsoft api
```

## How this works

Our fake news detection is based on the concept of ***stance detection***.

Fake news is tough to identify. Many 'facts' are highly complex and difficult to check, exist on a 'continuum of truth', or are compound sentences where fact and fiction overlap. The best way to attack this problem is not through fact checking, but by comparing how reputable sources feel about a claim.

1. Users input a claim like *"The Afghanistan war was bad for the world"*
2. Our program searches thousands of global and local news sources for their 'stance' on that topic.
3. We run the sources through our Reputability Algorithm. If lots of reputable sources all agree with your claim, then it's probably true!
4. Then we cite our sources so users can click through and read more about the topic!

### News Sources

After combing through numerous newspaper and natural language processing APIs, I discovered that the best way to find related articles is by searching for keywords. The challenge was implementing a natural language processing algorithm that extracted the most relevant, searchable keywords, and that extracted just the right number of them. Many algorithms were simply summarizers and would return well over 50 keywords, which is too many to search with. On top of that, many algorithms were resource-exhaustive and could take up to a minute to parse a given text. In the end, I used both Microsoft's Azure and IBM's Watson to process, parse, and extract keywords from a claim or the URL of a news article. I passed the extracted keywords to Event Registry's incredible database of almost 200 million articles to find as many related articles as possible. With more time, I would love to implement Event Registry's data visualization capabilities, which include generating tag clouds and graphs showing top news publishers for a given topic. -@Henry

### Determining Reputation

Starting from a large set of default sources with hard-coded reputability, our database of sources becomes more accurate with each web scrape as new sources and articles are added. To make sure this improves the algorithm, the weight of each source is adjusted according to how much each new article agrees or disagrees with sources already determined to be reputable. In the future, we would love to use deep learning to further advance this 'learning' aspect of our reputability scoring, but the current system more than supplies a proof of concept. -@Josh

### Stance Detection

To determine whether a claim is true or false, we go out and see where sources known to be reputable stand on that issue. We do this by leaning on established machine learning techniques for 'stance detection' (a minimal sketch of how stance scores are aggregated with source reputability appears at the end of this write-up). So we:
1. Ask the user to input a claim (which holds a 'stance') on a topic. A claim might be "ISIS has developed the technology to fire missiles at the International Space Station."
2. Search databases, and scrape web pages, to find other articles on that issue.
3. Run our 'stance detection' machine learning algorithm to determine whether reputable sources generally agree or generally disagree with that claim.

*If many reputable sources all agree with a claim, then it's probably true!*

Our stance detection is run on [Google's TensorFlow](https://www.tensorflow.org/), and our model is built off the work of the fantastic people in University College London's (UCL) [Machine Reading group](http://mr.cs.ucl.ac.uk/). -@Kastan

### Frontend/backend info

Our backend is a Flask Python server which connects to our front-end written in JavaScript.

## Other (worse) methods

##### 1. 'Fake News Style' Detection

Some teams try to train machine learning models on sets of 'fake' articles and sets of 'real' articles. This method is terrible because fake news can appear in well-written articles and vice versa! Style is not equal to content, and we care about finding true content.

##### 2. Fact checking

Some teams try to granularly check the truth of each fact in an article. This is interesting, and may ultimately be part of some future fake news detection system, but today this method is not feasible. The truth of facts exists on a continuum and relies heavily on the nuance of individual words and their connotations:

1. Human language is nuanced, so determining whether a single statement is true or false is rarely a clean binary.
2. There are no databases of what's true or false.
3. A single article can contain many facts on all sides of the truth spectrum -- is that article true or false?

## Team Members

* [Kastan Day](https://github.com/KastanDay)
* [Josh Frier](https://github.com/jfreier1)
* [Henry Han](https://github.com/hanksterhan)
* [Jason Jin](https://github.com/likeaj6)

### Acknowledgements

[fakenewschallenge.com](http://fakenewschallenge.com) provided great inspiration for our project and guiding principles for tackling the task.

University College London's short paper on the topic:

```
@article{riedel2017fnc,
  author  = {Benjamin Riedel and Isabelle Augenstein and George Spithourakis and Sebastian Riedel},
  title   = {A simple but tough-to-beat baseline for the {F}ake {N}ews {C}hallenge stance detection task},
  journal = {CoRR},
  volume  = {abs/1707.03264},
  year    = {2017},
  url     = {http://arxiv.org/abs/1707.03264}
}
```
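As referenced in the Stance Detection section, here is a minimal, hypothetical sketch of how per-source stance scores might be combined with reputability weights to score a claim. The class and function names are illustrative only and are not the actual Fake Bananas implementation.

```
from dataclasses import dataclass
from typing import List

@dataclass
class SourceStance:
    source: str          # e.g. "reuters.com"
    reputability: float  # 0.0 (untrusted) .. 1.0 (highly reputable)
    stance: float        # -1.0 (disagrees) .. +1.0 (agrees), from the stance model

def score_claim(evidence: List[SourceStance]) -> float:
    """Reputability-weighted average stance; a positive score suggests the claim is probably true."""
    total_weight = sum(s.reputability for s in evidence)
    if total_weight == 0:
        return 0.0  # no reputable evidence either way
    return sum(s.reputability * s.stance for s in evidence) / total_weight

# Hypothetical usage
print(score_claim([
    SourceStance("reuters.com", 0.95, +0.8),
    SourceStance("random-blog.net", 0.20, -0.6),
]))  # positive -> reputable sources mostly agree with the claim
```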
## Inspiration

With the rise of AI-generated content and DeepFakes, it's hard for people to identify what's real and what's fake. This leads to fake news and abuse. After seeing the launch of OpenAI's Sora model this week, we decided to build a solution to verify whether an image is real or AI-generated.

## What it does

Aros is an **iOS app that allows you to verify that an image is real and not AI-generated**. It does this by cryptographically proving that the image was captured on your iPhone, which means the image is real. This is how it works:

1. When you take a photo using the Aros camera app, Aros uses your iPhone's Secure Enclave to cryptographically sign the image.
2. This signature is posted to the online Aros registry.
3. Anyone can use this signature and your public key to verify that the photo was captured on your iPhone, and not generated using AI.

We also built a **zero-knowledge prover** that verifies the signature on your image within a ZK circuit. This allows any **blockchain to easily verify** that an image is real.

## How we built it

This is a system architecture diagram for Aros:

![System architecture diagram](https://hackmd.io/_uploads/SkrLOL126.png)

### Secure Enclave

We create a cryptographic key pair in your iPhone's [Secure Enclave](https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys/protecting_keys_with_the_secure_enclave#2930473) to rely on **hardware security** and ensure that your private keys never leave your iPhone. Aros uses these keys to sign your photos to prove and verify that they were captured on your iPhone. (A minimal sketch of this verification step appears at the end of this write-up.)

### Zero-Knowledge

To easily verify the image signatures on a blockchain, we decided to build a ZK verifier. We used state-of-the-art cryptographic systems like the **SP1 RISC-V prover** from Succinct Labs to verify the image signatures within a **Plonky3 circuit**.

### iOS App and Web Registry

We built the iOS app using **Swift**. The Aros registry stores each image's hash and signature, along with users' public keys. It doesn't store the raw image data, so user privacy is protected. We built the Aros registry using Next.js, TypeScript, and Tailwind CSS, and **deployed the registry dashboard and registry API using Vercel**.

## Challenges we ran into

* The Secure Enclave in the iPhone uses the **P-256 elliptic curve**, but we found it hard to find a verifier ZK circuit for this curve in Circom or Halo2. So we decided to use the SP1 RISC-V prover from Succinct Labs to verify the image signatures and generate a Plonky3 circuit.
* We faced challenges with **base64 encoding and decoding** the public key. However, we realized that we could use the `base64EncodedString` function in Swift to help with this.

## Accomplishments that we're proud of

* It was **our first time developing on iOS and using Swift**, so there was a pretty steep learning curve on the first day. We're really happy that we were able to learn Swift and iOS development over the weekend and successfully build this project.
* It was a stretch goal for us to build a zero-knowledge verifier for the P-256 signature verification. We're proud that we were able to build this, and now anyone can efficiently verify that an image is real on any blockchain as well.

## What we learned

* In terms of technologies, we learned iOS development, Swift, and SwiftUI, and we also learned how to work with RISC-V ZK proving systems like the SP1 prover.
* We learned about hardware security, specifically how to protect private keys using the Secure Enclave on iPhones.

## What's next for Aros

* We want to extend this technology beyond images to **prove that audio and video are real** and not AI-generated. We have some ideas for this and are excited to try them out soon!
* We plan to deploy a **verifier smart contract** for the ZK circuit on Ethereum.
* We hope to **work with social media platforms** to integrate our system, since we think fake news and images are most prevalent on social media, and Aros can help reduce misinformation online.
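To make the registry's verification step concrete, here is a minimal, hypothetical sketch of checking a P-256 (ECDSA) signature over an image against a user's public key, using Python's `cryptography` library. The function name and flow are illustrative; the actual Aros registry is built with Next.js/TypeScript.

```
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_image_signature(image_bytes, signature_der, public_key_pem):
    """Return True if signature_der is a valid P-256 ECDSA signature over image_bytes."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    if not isinstance(public_key, ec.EllipticCurvePublicKey):
        return False  # not an elliptic-curve key at all
    try:
        # Secure Enclave keys sign with ECDSA over SHA-256 on the P-256 curve
        public_key.verify(signature_der, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: a registry endpoint would look up the stored signature and
# public key for an image hash, then call verify_image_signature(...) before
# marking the image as verified.
```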