## Inspiration Originally intended to be a quick way to send a message to someone, Ringman evolved into a solution for the fear of making phone calls. ## What it does Ringman does the calling and the texting for you when you're too nervous to pick up the phone. ## How I built it The Nexmo API running on a node.js server. The client side collects a phone number and message, and the node.js server makes the Nexmo API call to perform the action and returns the results to the client. ## Challenges I ran into Cell service was very poor in the building, so at times I wasn't sure whether it was my code failing or just the call not going through. Getting feedback to the client about the status of their request was also tricky. ## Accomplishments that I'm proud of Really easy to use and fun to look at. ## What I learned How to set up a node.js server and how to use the Nexmo API. ## What's next for Ringman Deployment using AWS and a domain host.
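The flow described above is simple to sketch. The project's actual server is Node.js; the Python/Flask sketch below only illustrates the same request path, and the Nexmo SMS endpoint, parameter names, and environment variables are assumptions to verify against Nexmo's documentation.

```python
# Hypothetical sketch: the real project uses a Node.js server, but the flow is the same.
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
NEXMO_SMS_URL = "https://rest.nexmo.com/sms/json"  # assumed public SMS endpoint

@app.route("/send", methods=["POST"])
def send_text():
    data = request.get_json()  # client sends {"to": "...", "message": "..."}
    resp = requests.post(NEXMO_SMS_URL, data={
        "api_key": os.environ["NEXMO_API_KEY"],
        "api_secret": os.environ["NEXMO_API_SECRET"],
        "to": data["to"],
        "from": "Ringman",
        "text": data["message"],
    })
    # Relay Nexmo's response so the client can show delivery status.
    return jsonify(resp.json())
```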
## Inspiration We felt the world could always use more penguins, so we decided to bring more penguins to the world. ## What it does It spawns penguins. ## How we built it We compiled over 40 images of penguins from the internet. ## Challenges we ran into We fell asleep :( ## Accomplishments that we're proud of We are proud of our vast collection of penguins. Thank u Isaac for finding them and Bradley for writing out their parabolic pathways by hand. ## What we learned Penguins are a gift from God. ## What's next for Penguin Minglin' It's perfect as is
## Inspiration The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to: commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet so as not to become a burden on those around them. Over time, this takes a toll on one's well-being, so we decided to tackle this issue in a creative yet simple way. ## What it does VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match two users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others. ## How we built it We began with building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that could detect common words from the user inputs and pair up two users in the queue to start messaging (a sketch of this matching idea follows below). Then we integrated, tested, and refined how the app worked. ## Challenges we ran into One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work we finally got the API working with Axios. ## Accomplishments that we're proud of In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of mind, it's a good idea to have some resources available to them. ## What we learned Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are associated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result! ## What's next for VenTalk There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even building a web app version. We also want to add more personal features, such as a personal locker of things that make you happy (such as a playlist, a subreddit or a Netflix series).
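As a rough illustration of the keyword-matching idea referenced above (not the team's actual React Native code), the sketch below pairs the two queued users whose self-reported topics overlap the most; all names and data are invented.

```python
# Illustrative only: pair the two queued users whose topic keywords overlap the most.
def pair_users(queue):
    """queue maps an anonymous user id to the set of keywords they entered."""
    best_pair, best_overlap = None, 0
    users = list(queue)
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            overlap = len(queue[a] & queue[b])
            if overlap > best_overlap:
                best_pair, best_overlap = (a, b), overlap
    return best_pair

waiting = {
    "user1": {"exams", "stress", "commute"},
    "user2": {"work", "stress", "sleep"},
    "user3": {"exams", "stress", "deadlines"},
}
print(pair_users(waiting))  # ('user1', 'user3') -- the most shared keywords
```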
## Inspiration As STEM students, many of us have completed online certification courses on various websites such as Udemy, Codecademy, Educative, etc. Many classes on these sites provide the user with a unique certificate of completion after passing the course. We wanted to take the authentication of these digital certificates to the next level. ## What it does Our application functions as a site similar to the ones mentioned earlier, providing users with a plethora of certified online courses, but what sets us apart is our creative use of web3, allowing users to access their certificates directly from the blockchain, guaranteeing their authenticity to the utmost degree. ## How we built it For our frontend, we created our design in Figma and coded it using the Vue framework. Our backend was done in Python via the Flask framework. The database we used to store users and courses was SQLite. The certificate generation was accomplished in Python via the Pillow library (a sketch of this step follows below). To convert the images into NFTs, we used Verbwire for its easy-to-use minting procedure. ## Challenges we ran into We ran into quite a few challenges throughout our project, the first of which was the fact that none of us had any meaningful web3 experience. Luckily for us, Verbwire had a quite straightforward minting process and even generated some of the code for us. ## Accomplishments that we're proud of Although our end result is not everything we dreamt of 24 hours ago, we are quite proud of what we were able to accomplish. We created quite an appealing website for our application. We created a Python script that generates custom certificates. We created a powerful backend capable of storing data for our users and courses. ## What we learned For many of us, this was a new and unique collaborative experience in software development. We learned quite a bit about task distribution and optimization, as well as key takeaways for creating code that is not only maintainable but also transferable to other developers during the development process. More technically, we learned how to create simple databases via SQLite, how to automate image generation via Python, and the steps of making a unique and appealing front-end design, starting from the prototype all the way to the final product. ## What's next for DiGiDegree Moving forward, we would like to migrate our database to Postgres to handle higher traffic. We would also like to implement a Redis cache to improve the hit ratio and speed up search times. We would also like to populate our website with more courses and improve our backend security by abstracting away SQL queries to protect us further from SQL injection attacks.
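A minimal sketch of the Pillow-based certificate step might look like the following; the layout, names, and font files are invented for illustration, and a TrueType font is assumed to be available on the system.

```python
# Minimal sketch of certificate generation with Pillow; layout and names are invented.
from PIL import Image, ImageDraw, ImageFont

def make_certificate(student: str, course: str, path: str) -> None:
    cert = Image.new("RGB", (1200, 800), color="white")
    draw = ImageDraw.Draw(cert)
    title_font = ImageFont.truetype("DejaVuSans-Bold.ttf", 64)  # any available .ttf
    body_font = ImageFont.truetype("DejaVuSans.ttf", 36)
    draw.text((600, 200), "Certificate of Completion", font=title_font, anchor="mm", fill="black")
    draw.text((600, 400), f"Awarded to {student}", font=body_font, anchor="mm", fill="black")
    draw.text((600, 480), f"for completing {course}", font=body_font, anchor="mm", fill="black")
    cert.save(path)  # the saved image is what then gets minted as an NFT via Verbwire

make_certificate("Ada Lovelace", "Intro to Web3", "certificate.png")
```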
## Inspiration As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have much to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication + a cloud database to keep track of users. For our data on users and transactions, as well as making/managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Besides API integration, working without any sleep was definitely the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon is the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
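To make the Cohere-generated weekly goal concrete, here is a hedged sketch; the exact SDK calls, model, and prompt the team used are not shown in the write-up, so the client method and habit text below are assumptions.

```python
# Hedged sketch: the exact Cohere calls may differ from what the team used.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical key

habit = "spending $60/week on food delivery"  # one of the detected "demons"
prompt = (
    "You are a friendly financial coach. Suggest one concrete, measurable goal "
    f"for next week for a student with this habit: {habit}."
)
response = co.generate(prompt=prompt, max_tokens=60)
print(response.generations[0].text.strip())
```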
## Team Members * Chaitanya Chaurasia - chait0318 * Jawad Chowdhury - 8et * Daniel Chen - daniel297 * Riyan Ahmed - R.A#7406 ## Inspiration **DocChain**, powered by the robust technology of **blockchain**, is here to alleviate the anxiety and stress associated with **document-intensive processes** like immigration or license renewals, especially in the United States. We recognize how overwhelming it can be to face the fear of losing crucial documents, deal with slow and lengthy processing times, and cope with a lack of transparency. Our solution **streamlines** these processes, ensuring security, efficiency, and clarity every step of the way. ## What it does Our secure, scalable and user-friendly web application **accelerates** processing times, reducing the risk of lost documents with cutting-edge encryption. With DocChain's innovative **Web3 solution**, customers can confidently manage submissions worldwide, ensuring their personal information remains confidential and protected. ## How we built it **Blockchain technology** with **Aleo** and **Solidity**: DocChain leverages the power of Aleo and Leo, prominent blockchain technologies, to ensure a secure and transparent document-sharing platform. The decentralized nature of such languages enhances data integrity and privacy throughout the document submission and retrieval processes. The intuitive frontend was built with **JavaScript, React, Tailwind CSS, Material UI**, and **Vite**, with Node.js used for managing various aspects of the platform. ## Challenges we ran into Working with **Web3** for the first time was extremely difficult but interesting. With the lack of support, Aleo was also difficult to work with. Furthermore, due to a family emergency, one of our teammates had to leave midway through the hackathon, which meant we had to restructure and delegate tasks again. ## Accomplishments that we're proud of Reflecting on our experience at the hackathon, it's clear that our initial limited knowledge of Web3 and blockchain technology didn't hinder our progress. Instead, we emerged with a significantly enhanced understanding of how these cutting-edge technologies can be applied. Our project not only showcases the practical applications of these technologies but also stands as a testament to our ability to transform challenges into learning opportunities and innovative solutions. ## What we learned The blockchain is a revolutionary technology, offering a decentralized platform for secure and transparent transactions. In our exploration, we delved into the practical applications of Web3 and blockchain beyond just financial contexts, discovering their broader potential. We focused on understanding how to utilize **Ethereum and the Pinata API** in conjunction with **MetaMask**, exploring the capabilities of Aleo and Solidity in the process. This journey revealed the diverse and far-reaching implications of decentralization in the digital world. ## What's next for DocChain We aim to enhance **accessibility** and **convenience** by introducing a system that allows individuals to securely participate in interviews without the need for long-distance travel. By implementing **zero-knowledge proofs**, we significantly improve the security of these virtual interactions. This initiative is designed as an additional option rather than a replacement for **traditional methods**. It's particularly beneficial for individuals with disabilities, providing them with an easier and more accessible way to engage in important interviews.
### Refer to this video (<https://youtu.be/Ne9Xw_kj138>) for the intro and problem statement. # Brief ### Features 1. Automatic essay grading 2. Facial recognition 3. Text detection from images **It's becoming harder for teachers to mark hundreds of students' work within the limited free hours they should be using for leisure. It was reported in March 2021 that 84% of teachers feel stressed, which is a shocking realization when these are the people who are supposed to be comforting and teaching the next generation. This is why we created EduMe.ai.** --- ![Logo](https://i.imgur.com/rY5IDv7.jpg) This project is especially useful as it allows for moderated grades throughout schools without any bias. Therefore, it is an effective tool to use if homework-based assessment grades need to be assigned. --- # What is EduMe.ai **EduMe.ai** is a social media-based application that aims to connect students and reduce workload for teachers. We identified our problem as teachers being overstressed in their work life through an increasingly complex homework load as well as limited work-life balance. Therefore, we wanted to solve this. We do this by using AI to mark students' homework as well as invigilate online tests. In addition to this, we have created a platform that allows students to communicate privately and share public posts about their work, lives or interests. --- # Step by step # Student 1. Log in with your university id. ![](https://i.imgur.com/2eIPh1v.png) 2. Scan and submit your essay. ![](https://i.imgur.com/xnuenx7.png) 3. Attend the online viva voce test. ![](https://i.imgur.com/jRAMxPh.png) 4. Get a notification whenever a classmate sends a new message. ![](https://i.imgur.com/3NxDRcR.png) 5. Share your work with your classmates. ![](https://i.imgur.com/ici91j4.png) 6. Publish your work or grades in the social portal. ![](https://i.imgur.com/p8Pw2qE.png) --- # Teacher 1. See all students and their assigned work in your portal. ![](https://i.imgur.com/WU3FvSe.png) 2. Assign them an essay to write on a specific topic. ![](https://i.imgur.com/FBCvQX6.png) 3. Use the grade assigned by the computer (neural network) or grade manually. ![](https://i.imgur.com/rpLUqbv.png) 4. Assign questions for their viva voce test. ![](https://i.imgur.com/wVvXhpL.png) --- # Automatic essay grading Essays are paramount for assessing academic excellence, linking different ideas, and testing recall, but they are notably time-consuming to assess manually. Manual grading takes a significant amount of an evaluator's time and hence is an expensive process. Artificial intelligence systems offer a lot to the educational community, where graders face many kinds of difficulties when rating student writing. Analyzing student essays in abundance within a given time limit, along with providing feedback, is a challenging task. But with changing times, typed (not handwritten) essays are easy to evaluate with the help of automated essay grading (AEG) systems. # Facial recognition Face detection using Haar cascades is a machine learning-based approach where a cascade function is trained with a set of input data. OpenCV already contains many pre-trained classifiers for faces, eyes, smiles, etc. Here we use the face classifier; you can experiment with other classifiers as well (a minimal sketch follows at the end of this section). # Text detection from images We used the Google Cloud Vision API, which can detect and extract text from images. There are two annotation features that support optical character recognition (OCR).
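As promised in the facial-recognition section above, here is a minimal Python sketch of Haar-cascade face detection using OpenCV's bundled classifier; file names and detection parameters are placeholders.

```python
# Minimal face-detection sketch with OpenCV's pre-trained Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("student.jpg")               # e.g. a frame from the online viva voce test
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Haar cascades operate on grayscale
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("student_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```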
--- # Creation Process # UI/UX To start our project properly, we decided to create a rough plan of what we wanted and where, in order to visualize the outcome of the project. Here are a few pictures of what we designed using Figma. # Frames ![BB](https://i.imgur.com/GuQuDPo.jpg) # Visual Designs ![](https://i.imgur.com/FIJ7dit.png) --- # How are we a social media application? **The Google definition of social media is "websites and applications that enable users to create and share content or to participate in social networking."** We designed our application in a way that allows students to connect through an experience they mutually share - school. We would class our project as social media as it does allow students to talk and spark conversations whilst having the freedom to post whatever they want relating to their education. # How does this impact society? Teachers are arguably the largest group of individuals who make social change. However, with their mental health declining and education gradually becoming harder and more competitive, efficiency and productivity are just not the same as they used to be. We hope to bring back this productivity by taking work off teachers' hands and creating a centralized place for marking and moderated communication between students. --- # Our Key Takeaways ### Technologies that we used : ![Languages](https://i.imgur.com/qxPwEfz.png) ### Accomplishments that we're proud of We are happy that we were able to complete this highly complex project within the limited time. We truly believe that our project has huge potential to create a new era of education that helps teachers with their work-life balance as well as helping students to give advice to each other and help each other out. ### What we learned We have learned that communication is key when undertaking a huge project such as this. ### What's next for EduMe.ai Our system has a lot of versatility, but to start effectively we plan to pilot it in small schools to see its effects on student progress as well as teachers' mental health. We also plan to make our application a safer place by filtering comments to make sure that no bullying or rude language takes place, as it is a tool made for school children. --- ## References 1. <https://github.com/mankadronit/Automated-Essay--Scoring> 2. iOS assets on Figma: <https://www.figma.com/file/ne0DGAm1tBVYegnXhD5NO7/Educreate.ai-Student-View> 3. iOS assets on Figma: <https://www.figma.com/file/qOOrCUIJck5biWzXs0651L/Untitled?node-id=0%3A1> ---
## Inspiration Our inspiration came from how we are all relatively new drivers and terrified of busy intersections. Although speed is extremely important when getting from one spot to another, safety should always be highlighted on the road, because car accidents are among the leading causes of death in the world. ## What it does When the website is first opened, the user is able to see the map with many markers indicating where fatal collisions have happened. As noted in the legend at the top, the colours represent different collision frequencies. When the user specifies an address for the starting and ending location, our algorithm detects the safest route in order to avoid all potentially dangerous or busy intersections. However, if every route must pass a dangerous intersection, our algorithm will still return one. ## How we built it For the backend, we used JavaScript functions that took in the latitude and longitude of collisions in order to mark them on the Google Maps API. We also had several functions to not only check if the user's path would come across a collision, but also check alternatives in which the user would avoid that intersection. We were able to find an Excel spreadsheet listing all of Toronto's fatal collisions in the past 5 years and copied that into a SQL database. That database was then connected to Google Cloud SQL to be used as a public host, and using Node.js, data was taken from it to mark the specified collisions. For the frontend, we also used a mix of HTML, CSS, JavaScript and Node.js to serve the web app to the user. Once the request is made for the specific two locations, Express reads the .json file and sends information back to other JavaScript files in order to display the most optimal and safest path using the Google Maps API. To host the website, a domain was registered on Domain.com and the site was launched by creating a simple virtual machine on Compute Engine. After creating a Linux machine, a basic Node.js server was set up and the domain was then connected to Google Cloud DNS. After verifying that we did own our domain via a DNS record, a bucket containing all the files was stored on Google Cloud and set to be publicly accessible. ## Challenges we ran into We had all never used JavaScript and Google Cloud services before, so the challenge that kept arising was our unfamiliarity with new functions (e.g. callbacks). In addition, it was difficult to set up and host the domain from Domain.com since we were new to web hosting. Lastly, Google Cloud was challenging since we were mainly using it to combine all aspects of the project together. ## Accomplishments that we're proud of We're very proud of our final product. Although we were very new to JavaScript, Google Cloud services, and APIs, our team is extremely proud of utilizing all the resources provided at the hackathon. We searched the web, as well as asked mentors for assistance. It was our determination and great time management that pushed us to ultimately finish the project. ## What we learned We learned about JavaScript, Google APIs, and Google Cloud services. We were also introduced to many helpful tutorials (through videos and online written tutorials). We also learned how to deploy to a domain in order for worldwide users to access the site. ## What's next for SafeLane Currently, our algorithm will return the most optimal path avoiding all dangerous intersections. However, there may be cases where the amount of travel time needed could be tremendously more than the quickest path.
We hope to only show paths that have at most 20-30% more travel time than the fastest path. The user will be given multiple options for paths they may take. If the user chooses a path with a potentially dangerous intersection, we will issue a warning stating all areas of danger. We also believe that SafeLane can be expanded first to all of Ontario, and then eventually to a national/international scale. SafeLane can also be used by government/police departments to observe all common collision areas and investigate how to make the roads safer.
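The write-up above describes checking whether a path comes near a recorded collision. The project itself is JavaScript; the Python sketch below only illustrates that proximity check with a haversine distance and invented coordinates and thresholds.

```python
# Illustrative sketch: flag routes that pass within a threshold of a recorded fatal collision.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def route_is_safe(route, collisions, threshold_m=50):
    """route and collisions are lists of (lat, lon) tuples."""
    return all(
        haversine_m(rlat, rlon, clat, clon) > threshold_m
        for rlat, rlon in route
        for clat, clon in collisions
    )

collisions = [(43.6532, -79.3832)]                     # example collision marker
route = [(43.6510, -79.3870), (43.6525, -79.3845)]     # candidate route waypoints
print(route_is_safe(route, collisions))                # True: no waypoint within 50 m
```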
## Inspiration While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression. While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class. With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies. ## What it does Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming. ## How we built it * Wireframing and Prototyping: Figma * Backend: Java 11 with Spring Boot * Database: PostgresSQL * Frontend: Bootstrap * External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer * Cloud: Heroku ## Challenges we ran into We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.) ## Accomplishments that we're proud of Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.* ## What we learned As our first virtual hackathon, this has been a learning experience for remote collaborative work. UXer: I feel like i've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know python), so watching the dev work and finding out what kind of things people can code was exciting to see. # What's next for Reach If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. 
We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
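To make Reach's tone-gating idea concrete: the real project calls the IBM Watson Tone Analyzer, but the sketch below swaps in a stand-in analyzer function so only the publish/block logic is shown; the tone labels and example text are invented.

```python
# Illustrative gate only: a stand-in for the external tone-analysis call.
BLOCKED_TONES = {"anger_directed", "insult"}  # hypothetical labels for mean-spirited content

def analyze_tones(text: str) -> set:
    """Stand-in for the tone-analysis API; returns a set of detected tone labels."""
    return {"sadness"} if "overwhelmed" in text else set()

def can_publish(post: str) -> bool:
    # Venting tones (sadness, frustration) are allowed; mean-spirited tones are blocked.
    return not (analyze_tones(post) & BLOCKED_TONES)

print(can_publish("I feel so overwhelmed by exams lately."))  # True: venting is fine
```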
## What it does This project generates detailed interview question prompts based on the user's input of job title, description, and required job skills. It can take a file of the user's verbal answer, turn it into text, and process it to be analyzed. ## How we built it We used Google Cloud's Speech-to-Text and Storage APIs to create and store transcripts. We also used Cohere's API to create a prompt and generate interview questions. ## Challenges we ran into We ran into trouble creating our API endpoints and creating a consistent Cohere prompt that would give us good interview questions. ## Accomplishments that we're proud of We're proud of being able to work together to figure out how to use technologies that we had never used before. ## What we learned We learned new technologies, such as Google Cloud's APIs, Cohere's APIs, and building a web application using Flask. ## What's next for Spinter We hope to get it fully working and refine it further.
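A rough sketch of the Speech-to-Text step described above is below; the audio encoding, sample rate, and file name are assumptions, and the Cohere prompting step is omitted.

```python
# Hedged sketch of transcribing an uploaded answer with Google Cloud Speech-to-Text.
from google.cloud import speech

client = speech.SpeechClient()

with open("answer.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumed WAV/PCM input
    sample_rate_hertz=16000,
    language_code="en-US",
)
response = client.recognize(config=config, audio=audio)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)
print(transcript)  # this text would then be fed into the Cohere prompt
```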
## Inspiration Our inspiration for this project came from an issue we had in classrooms: many students would ask the same questions in slightly different ways, causing the teacher to use up valuable time addressing these questions instead of more pertinent and different ones. Also, we felt that the bag-of-words embedding used to vectorize sentences does not make use of sentence characteristics optimally, so we decided to create our own structure in order to represent a sentence more efficiently. ## Overview Our application allows students to submit questions on a website, which then determines whether each question is either: 1. The same as another question that was previously asked 2. The same topic as another question that was previously asked 3. A different topic entirely The application does this by using the model proposed in the paper "Bilateral Multi-Perspective Matching for Natural Language Sentences" by Zhiguo Wang et al., with a new input structure which we call a "sentence tree" instead of a bag of words, and it outputs a prediction of whether the new question falls into one of the above 3 categories. ## Methodology We built this project by splitting the task into multiple subtasks which could be done in parallel. Two team members worked on the web app while the other two worked on the machine learning model, in order to use our expertise efficiently and optimally. On the model side, we split the task into getting the paper's code to work and implementing our own word representation, which we then combined into a single model. ## Challenges The main challenge was modifying the approach presented in the paper to suit our needs. On the web development side, we could not integrate the model into the web app as easily as envisioned, since we had customized our model. ## Accomplishments We are proud that we were able to get accuracy close to the numbers reported in the paper and that we developed our own representation of a sentence apart from the classical bag-of-words approach. Furthermore, we are excited to have created a novel system that eases the pain of classroom instructors a great deal. ## Takeaways We learned how to implement research papers and improve on their results. Not only that, we learned more about how to use TensorFlow to create NLP applications and the differences between TensorFlow 1 and 2. Going further, we also learned how to use the Stanford CoreNLP toolkit. We also learned more about web app design and how to connect a machine learning backend in order to run scripts from user input. ## What's next for AskMe.AI We plan on fine-tuning the model to improve its accuracy and to also allow for multi-sentence questions. Not only that, we plan to streamline our approach so that the sentence-tree structure can be seamlessly integrated with other NLP models to replace bag of words, and to fully integrate the website with the backend.
## Inspiration An abundance of qualified applicants lose their chance to secure their dream job simply because they are unable to effectively present their knowledge and skills when it comes to the interview. The shift of interviews to a virtual format due to the COVID-19 pandemic has created many challenges for applicants, especially students, as they have reduced access to in-person resources where they could develop their interview skills. ## What it does Interviewy is an **artificial intelligence** based interface that allows users to practice their interview skills by providing them with an analysis of their video-recorded interview based on their selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific time stamp in their report. ## How we built it This interface was built using the MERN stack. In the backend we used the AssemblyAI APIs for monitoring confidence levels and covered topics. The frontend used React components. ## Challenges we ran into * Learning to work with AssemblyAI * Storing files and sending them over an API * Managing large amounts of data given from an API * Organizing the API code structure in a proper way ## Accomplishments that we're proud of * Creating a streamlined artificial intelligence process * Team perseverance ## What we learned * Learning to work with AssemblyAI and Express.js * The hardest solution is not always the best solution ## What's next for Interviewy * Currently the confidence levels are measured by analyzing the words used during the interview. The next milestone of this project would be to analyze changes in the interviewees' tone in order to provide more accurate feedback. * Creating an API for analyzing the video and the gestures of the interviewees
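A hedged sketch of requesting a transcript from AssemblyAI over plain HTTP follows; the endpoint paths, field names, and polling approach are written from memory and should be checked against AssemblyAI's current documentation.

```python
# Hedged sketch: upload audio, start a transcription job, poll for the result.
import time
import requests

HEADERS = {"authorization": "YOUR_ASSEMBLYAI_KEY"}  # hypothetical key
BASE = "https://api.assemblyai.com/v2"

# 1. Upload the recorded interview audio.
with open("interview.mp3", "rb") as f:
    upload_url = requests.post(f"{BASE}/upload", headers=HEADERS, data=f).json()["upload_url"]

# 2. Start a transcription job.
job = requests.post(f"{BASE}/transcript", headers=HEADERS,
                    json={"audio_url": upload_url}).json()

# 3. Poll until it finishes, then read the transcript text.
while (result := requests.get(f"{BASE}/transcript/{job['id']}",
                              headers=HEADERS).json())["status"] not in ("completed", "error"):
    time.sleep(3)
print(result["text"])
```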
## Inspiration: The BarBot team came together with diverse skill sets, and we wanted to create a project that would highlight each member's expertise, which spans the Internet of Things, hardware, and web development. After a long ideation session, we came up with BarBot. This robotic bartender will serve as a great addition to forward-looking bars, where the drink dispensing and delivery process is automated. ## What it does: BarBot is an integrated butler robot that allows the user to order a drink through a touch-screen ordering station. A drink-dispensing system then fulfils the option chosen at the ordering station. The beverage options are Red Bull or Soylent. An additional option named "Surprise Me" is also available: it takes a photograph of the user and runs the Microsoft Emotion API, which allows BarBot to determine the user's current mood and decide which drink is suitable based on the photograph taken. After the option is determined by the user (Red Bull or Soylent) or by BarBot ("Surprise Me" running the Microsoft Emotion API), BarBot's dispensing system pours the chosen beverage into the glass on BarBot. The robot then travels to the user to deliver the drink, and only returns to its original position (under the dispensing station) when the cup has been lifted. ## How we built it: The BarBot team allocated tasks to group members according to their expertise. Our frontend specialist, Sabrina Smai, built the mobile application for the ordering station, which now has a touchscreen. Our hardware specialist, Lucas Moisuyev, built BarBot itself along with the dispensing system, with the assistance of Tony Cheng. Our backend specialist, Ben Weinfeld, built the ordering station by programming the Raspberry Pi and the touchscreen. Through our collaboration, we were able to revolutionize the bartending process. ## Challenges we ran into: The most recurring issue we encountered was a lack of proper materials for specific parts of our hack. When we were building our pouring mechanism, we did not have proper tubing for transferring our beverages, so we had to go out and purchase materials. After buying more tubing, we then ran into the issue of not having strong enough servos or motors to turn the valves of the dispensers. This caused us to totally change the original design of the pouring mechanism. In addition, we underestimated the level of difficulty that came with creating a communication system among all of our parts. ## Accomplishments that we're proud of: Despite our challenges, we are proud to have been able to create a functional product within the limited amount of time. We needed to learn new skills and improvise hardware components, but never gave up. ## What we learned: During this hackathon, we learned how to program the Particle Photon and Raspberry Pi, how to build web apps, and how to leap over the hurdles of creating a hardware hack with very limited supplies. ## What's next for BarBot: The BarBot team is very passionate about this project and we will continue to work on BarBot after this hackathon. We plan to integrate more features that incorporate more Microsoft APIs. An expansion of the touch ordering station will be considered as a wider variety of drink options will be required.
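The "Surprise Me" decision can be illustrated with a tiny stand-in: the real system gets its emotion scores from the Microsoft Emotion API, while the sketch below only shows one possible (invented) mapping from a dominant emotion to a drink.

```python
# Pure-Python stand-in for the "Surprise Me" logic: pick a drink from emotion scores.
def pick_drink(emotion_scores: dict) -> str:
    dominant = max(emotion_scores, key=emotion_scores.get)
    # Hypothetical mapping: tired/low moods get an energy boost, otherwise Soylent.
    return "Red Bull" if dominant in {"sadness", "neutral", "fear"} else "Soylent"

scores = {"happiness": 0.7, "sadness": 0.1, "neutral": 0.2}  # example API-style output
print(pick_drink(scores))  # Soylent
```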
## Inspiration DeliverAI was inspired by the current shift we are seeing in the automotive and delivery industries. Driver-less cars are slowly but surely entering the space, and we thought driverless delivery vehicles would be a very interesting topic for our project. While drones are set to deliver packages in the near future, heavier packages would be much more fit for a ground base vehicle. ## What it does DeliverAI has three primary components. The physical prototype is a reconfigured RC car that was hacked together with a raspberry pi and a whole lot of motors, breadboards and resistors. Atop this monstrosity rides the package to be delivered in a cardboard "safe", along with a front facing camera (in an Android smartphone) to scan the faces of customers. The journey begins on the web application, at [link](https://deliverai.github.io/dAIWebApp/). To sign up, a user submits webcam photos of themselves for authentication when their package arrives. They then select a parcel from the shop, and await its arrival. This alerts the car that a delivery is ready to begin. The car proceeds to travel to the address of the customer. Upon arrival, the car will text the customer to notify them that their package has arrived. The customer must then come to the bot, and look into the camera on its front. If the face of the customer matches the face saved to the purchasing account, the car notifies the customer and opens the safe. ## How we built it As mentioned prior, DeliverAI has three primary components, the car hardware, the android application and the web application. ### Hardware The hardware is built from a "repurposed" remote control car. It is wired to a raspberry pi which has various python programs checking our firebase database for changes. The pi is also wired to the safe, which opens when a certain value is changed on the database. \_ note:\_ a micro city was built using old cardboard boxes to service the demo. ### Android The onboard android device is the brain of the car. It texts customers through Twilio, scans users faces, and authorizes the 'safe' to open. Facial recognition is done using the Kairos API. ### Web The web component, built entirely using HTML, CSS and JavaScript, is where all of the user interaction takes place. This is where customers register themselves, and also where they order items. Original designs and custom logos were created to build the website. ### Firebase While not included as a primary component, Firebase was essential in the construction of DeliverAI. The real-time database, by Firebase, is used for the communication between the three components mentioned above. ## Challenges we ran into Connecting Firebase to the Raspberry Pi proved more difficult than expected. A custom listener was eventually implemented that checks for changes in the database every 2 seconds. Calibrating the motors was another challenge. The amount of power Sending information from the web application to the Kairos API also proved to be a large learning curve. ## Accomplishments that we're proud of We are extremely proud that we managed to get a fully functional delivery system in the allotted time. The most exciting moment for us was when we managed to get our 'safe' to open for the first time when a valid face was exposed to the camera. That was the moment we realized that everything was starting to come together. ## What we learned We learned a *ton*. 
None of us have much experience with hardware, so working with a Raspberry Pi and RC Car was both stressful and incredibly rewarding. We also learned how difficult it can be to synchronize data across so many different components of a project, but were extremely happy with how Firebase managed this. ## What's next for DeliverAI Originally, the concept for DeliverAI involved, well, some AI. Moving forward, we hope to create a more dynamic path finding algorithm when going to a certain address. The goal is that eventually a real world equivalent to this could be implemented that could learn the roads and find the best way to deliver packages to customers on land. ## Problems it could solve Delivery Workers stealing packages or taking home packages and marking them as delivered. Drones can only deliver in good weather conditions, while cars can function in all weather conditions. Potentially more efficient in delivering goods than humans/other methods of delivery
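The 2-second Firebase listener mentioned in the challenges can be sketched as a simple polling loop; the actual Pi code is not shown in the write-up, so the database URL, field names, and REST-style read below are assumptions.

```python
# Illustrative polling loop on the Raspberry Pi; URL and fields are placeholders.
import time
import requests

DB_URL = "https://deliverai-example.firebaseio.com/orders/current.json"  # hypothetical

def start_delivery(address: str) -> None:
    print(f"Delivering to {address}")  # real code would drive the motors here

def poll_for_delivery():
    last_seen = None
    while True:
        order = requests.get(DB_URL).json()       # Firebase Realtime DB REST read
        if order and order != last_seen:
            last_seen = order
            if order.get("status") == "ready":
                start_delivery(order["address"])
        time.sleep(2)                             # the 2-second polling interval
```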
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
## Commander Commander, the command launcher for the web. Install it for Firefox [here](https://addons.mozilla.org/en-US/firefox/addon/commander/) and press F2 or Ctrl+E to open it! ## Motivation Having to switch between the keyboard and mouse is painful and wastes time. Commander is an extension to control your browser with just your keyboard and/or your voice. Enhance your productivity. ## What it does Commander is a browser extension for Chrome and Firefox. Features: * view and add bookmarks * open and close tabs * search and open links * VOICE CONTROL * .... and more
## Inspiration Almost everyone has felt the effects of procrastination at least once in their life. Procrastination pushes projects and assignments back, leaving little to no time to finish them. One of the largest parts of an assignment is the research, which requires tons of reading in order to connect ideas and create a meaningful product. The process of reading takes time, dependent on the individual's reading speed. Reading time is something that can be cut down with practice, and this is where we gained inspiration for this project. We wanted to help improve the monotonous process of reading through pages and blurbs of text, and thus this Chrome extension was made! ## What it does Our app has two main features. The first feature helps provide the user with more information to enhance their reading experience. It does this by telling the user how many words are being displayed on the webpage at the current time, and also approximately how many minutes it would take the average human to read through it, to provide a sense of scale. The second feature is designed to help the user improve their reading speed. They are given a list of 100 words, and are instructed to time themselves to discover their own reading speed. The user can then use this statistic to track their progress, and also as an incentive to improve. ## How we built it We used a Chrome extension to create this app. It utilizes HTML, CSS, and JS. HTML and CSS are used to display the content to the user, while JS provides functionality to the elements presented in the HTML and CSS. We began by building a basic HTML template to test things on, before implementing both features and applying them to our full HTML and CSS presentation. ## Challenges we ran into Many members of our group were completely brand new to web development and, subsequently, Chrome extensions. Within a span of 2 days, many of us learned how to code a working HTML, CSS, and JS application from scratch. Thus, a lot of our time was spent learning the ropes of what it means to create a meaningful Chrome extension from the ground up! One thing we had particular trouble with was JS. Already being unfamiliar with the language, we had trouble understanding the concept of promises and asynchronous code in JS, which is often needed in Chrome extensions. As a result, our communications between functions were often incorrect, leaving many variables with crucial information undefined. ## Accomplishments that we're proud of We're proud that we finished what we wanted in time! All of us are novices to hackathons, and are glad we managed to finish our project to a level we were happy with. Additionally, many of us learned HTML, CSS, and JS within a span of 2 days, well enough to make an entire Chrome extension, which we found quite surprising ourselves! ## What we learned We learned how to combine HTML, CSS, and JS to make a Chrome extension! We also learned a lot about asynchronous programming, which allowed for easier communication from function to function. Finally, we also learned and practiced proper planning, creating templates and ideas well before implementing them in our code editors, allowing us to continually build upon our project rather than constantly restarting and redeveloping. ## What's next for ReadMore Expand on our reading speed practice, providing more methods and more engaging ways to practice reading quickly!
(e.g instead of reading words quickly for practice, you have the option to read sentences quickly instead) Another way we could improve the user experience for practicing reading quickly could be providing insights and history for user performance, allowing the user to look back in time to see previous graphs and statistics about their reading speeds, providing further motivation!
## Inspiration Nobody wants to pay $1.00 for an avocado one week, and $1.25 for the same avocado the very next week. Unfortunately, this is not a rare occurrence, due in part to the widespread problem of cargo theft in the trucking industry, which drives up prices for consumer goods (as well as costs across the larger supply chain). Trucking companies would be lucky to benefit from a system where robbers could no longer steal cargo easily. **Lucky-Trucky can help**. ## What it does Lucky-Trucky offers a **proactive** *and* **reactive** solution to the issue of cargo theft. We help trucking companies make an effective contingency plan (biometric security permissions) to proactively prevent robbers from stealing trucks along with precious cargo. Additionally, if the proactive measure does not apply (in certain situations), we have a reactive crisis-management system to deal with such circumstances. We created an intelligent API that allows the user to configure a GPS device and send it data. Based on this data, the API uses its AI logic to determine whether the truck is being stolen and returns that information to the user. ## How we built it Lucky-Trucky consists of an API using GPS data to detect cargo thefts, and an RFID system increasing vehicle security. The AI logic for the API was developed by using a Python script to analyze the XML dataset provided by Canada Cartage and determine what factors indicate a theft. The API is built using Express JS and has an interface built using Vue JS. The RFID component uses a scanner to detect the required card, flipping a switch. ## Challenges we ran into Our Arduino board was incompatible with the RFID scanner, so alternative methods had to be used to connect the components. Also, the dataset was a little difficult and very time-consuming to analyze. ## Accomplishments that we're proud of Creating the API and the AI logic. Also, managing to configure the RFID scanner. ## What we learned Learned how to configure Arduino components and create effective APIs. ## What's next for TireTrax Get better at detecting theft.
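The write-up does not show the API's AI logic, so the sketch below is only a simplified stand-in heuristic: flag a possible theft when the truck moves during scheduled parked hours or strays too far off route. All field names and thresholds are invented.

```python
# Simplified stand-in for the theft-detection logic; not the project's actual model.
from datetime import datetime

def looks_stolen(ping: dict, schedule: dict, max_off_route_km: float = 2.0) -> bool:
    """ping: {'speed_kmh', 'off_route_km', 'timestamp'}; schedule: {'parked_hours'}."""
    hour = datetime.fromisoformat(ping["timestamp"]).hour
    moving_while_parked = ping["speed_kmh"] > 5 and hour in schedule["parked_hours"]
    off_route = ping["off_route_km"] > max_off_route_km
    return moving_while_parked or off_route

ping = {"speed_kmh": 42, "off_route_km": 0.4, "timestamp": "2019-02-03T02:15:00"}
schedule = {"parked_hours": set(range(0, 6))}   # truck should be parked 00:00-06:00
print(looks_stolen(ping, schedule))             # True: moving at 2 a.m.
```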
## Inspiration EVE, the robot from WALL-E, for its intelligence. ## What it does It serves as a virtual assistant that helps EV drivers plan their trips ahead, understands their habits, and knows how to improve their driving experience. In case they need to stop somewhere on the road for charging, it also provides a tentative plan for spending quality time (drinking coffee, seeing the scenery, or having lunch) depending on the time of day, season and region. ## How we built it We used Node.js and Express for our backend framework, making sure the structure is maintainable according to the MVC design. For searching and requesting routes and nearby locations of various types, we chose the Google Maps APIs, and for interpreting human speech input to further automate the assistant, we used the Cohere APIs for the speech-to-text and intent-detection process. ## Challenges we ran into Harnessing the complexity of the APIs, and simplifying the client-server communication. ## Accomplishments that we're proud of We successfully integrated natural language processing into our solution to efficiently identify user intentions and direct the following steps, driven by AI. ## What we learned * Product design that solves problems for a certain group of people. Make good use of the resources you have that align with the given scenario. * Using R3E to integrate a 3D model with NextJS. ## What's next for Hello Eve Continue to integrate AI and make improvements on giving advice that is more ad hoc and considerate.
## Inspiration As more and more people embrace the movement toward a greener future, there still remain many concerns surrounding the viability of electric vehicles. In order for complete adoption of these greener technologies, there must be incentives toward change and passionate communities. ## What it does EVm connects a network of electric vehicle owners and enables owners to rent their charging stations when they aren't needed. Facilitated by fast and trustless micropayments through the Ethereum blockchain, users can quickly identify nearby charging stations when batteries are running low. ## How we built it Using Solidity and the Hardhat framework, smart contracts were deployed to both a localhost environment and the Goerli testnet. A React front-end was created to interact with the smart contract in a simple and user-friendly way and enabled a connection to a metamask wallet. A Raspberry Pi interface was created to demonstrate a proof of concept for the interaction between the user, electric vehicle, and charging station. While the actual station would be commercially manufactured, this setup provides a clear understanding of the approach. The Raspberry Pi hosted a Flask server to wirelessly communicate data to the web-app. An LCD display conveys the useful metrics so the user can rest assured that their interaction is progressing smoothly. ## Challenges we ran into This was our first experience in the blockchain development space. Not only learning the syntax of Solidity, but gaining an understanding of the major underlying blockchain concepts made for a steep learning curve in little time. Configuration with Hardhat did not go smoothly and required a great deal of debugging. Integrating the hardware with the web-app and smart contracts through the Flask REST API required extensive testing and modification. ## Accomplishments that we're proud of Building our first dApp was a huge accomplishment in itself. Our ambition to connect two of the most rapidly emerging fields, IoT and Blockchain, sparked new creativity in areas that are still very complex and unknown. ## What we learned We not only learned how to write and deploy efficient smart contracts to the Ethereum network, but also saw how they can be integrated into user-friendly web-apps. Building EVm also provided us an opportunity to develop modular, low-level software by learning more about interrupt-driven design as well as various serial communication protocols. ## What's next for EVm We look forward to deploying the smart contracts on actual blockchain networks. To improve transaction times and minimize gas fees, layer 2 chains will be explored to host the project in the future. Extensive testing and refactoring will be done to augment the security and efficiency. Reaching out to industry leaders to make the product more viable will be essential for its adoption.
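The Raspberry Pi's Flask server is described only briefly above, so here is a minimal sketch of what such a metrics endpoint could look like; the route, field names, and values are invented for illustration.

```python
# Hedged sketch: a Flask endpoint on the Pi exposing charging metrics to the web app.
from flask import Flask, jsonify

app = Flask(__name__)

def read_charger_state() -> dict:
    # On the real station this would read current/voltage from the charging hardware.
    return {"station_id": "evm-pi-01", "charging": True, "energy_kwh": 4.2}

@app.route("/status")
def status():
    return jsonify(read_charger_state())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # reachable by the React front end on the LAN
```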
## Inspiration Too many times have broke college students looked at their bank statements and lamented how much money they could've saved if they had known about alternative purchases or savings earlier. ## What it does SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple patterns in spending that we then provide feedback on to help the user save money and spend less. ## How we built it We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other APIs. The front end, in the form of a web app, is written in JavaScript. ## Challenges we ran into The Viacom DataPoint API, although extremely useful, was something brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using. ## Accomplishments that we're proud of Our data processing routine is highly streamlined and modular, and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structurally modifying the back-end. ## What we learned ## What's next for SharkFin
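The recurring-event ("habit") tagging is described but not shown; the backend is actually Node.js, so the Python sketch below is just an illustrative heuristic that calls a merchant a habit when charges recur at roughly regular intervals, with invented data.

```python
# Illustrative heuristic only: tag merchants whose charges recur at regular intervals.
from collections import defaultdict
from datetime import date

def find_habits(transactions, min_occurrences=3, tolerance_days=3):
    """transactions: list of (merchant, date, amount). Returns recurring merchants."""
    by_merchant = defaultdict(list)
    for merchant, day, _amount in transactions:
        by_merchant[merchant].append(day)
    habits = []
    for merchant, days in by_merchant.items():
        days.sort()
        gaps = [(b - a).days for a, b in zip(days, days[1:])]
        if len(days) >= min_occurrences and max(gaps) - min(gaps) <= tolerance_days:
            habits.append(merchant)
    return habits

txns = [("CoffeeCo", date(2019, 1, 7), 4.5), ("CoffeeCo", date(2019, 1, 14), 4.5),
        ("CoffeeCo", date(2019, 1, 21), 4.75), ("BookShop", date(2019, 1, 9), 30.0)]
print(find_habits(txns))  # ['CoffeeCo'] -- a weekly coffee habit
```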
Try it out at <https://covid-aware-301221.web.app/>. ## Description This app provides an effective and quick way to implement contact tracing for local businesses and busy locations. After an initial sign-in with Google, all it takes is a quick QR scan to record the locations someone has visited. The app will notify the user if a location they have visited has had a COVID-19 case in the past 14 days. ## The Problem * Contact tracing is difficult * Previous solutions are tedious or ineffective * Information is not easily accessible by the general public ## The Solution * Have customers scan a QR code upon entering a venue, leaving their contact information * Notify users if a place they visited has had a case, and notify a venue if it has had one * Quickly and efficiently record visited locations * Make information more transparent and accessible ## Features Implemented * Login / registration for users * Searching / QR scanning upon entering a venue For the future: * Finalize database access * Map for visualization of COVID cases * Analytics for users and venues of their number of customers and visits ## Resources Used * React, `react-router-dom`, `react-qr-reader` for the frontend * Firebase for the backend: Firebase Hosting + Firestore
## Inspiration We were inspired by the development of foldscope, a very low-cost, high-resolution foldable microscope (<https://en.wikipedia.org/wiki/Foldscope>) capable of imaging blood cells. We wanted to create tools that can integrate with this microscope and, more generally, other applications and better improve health in the developing world. Malaria is a leading cause of death in many developing countries, where blood smears are used to identify the presence of parasites in red blood cells (RBCs). To improve the efficiency of detecting parasitic cells in blood smears, ultimately speeding up malaria diagnosis, we aimed to create an online tool that can do this in minutes. ## What it does The project allows a user to upload a thin blood smear image, and it classifies the RBCs in the blood that are infected with malaria parasites. ## How we built it We utilized a thin blood smear dataset from 193 patients in a Bangladesh hospital curated by the NIH. It consists of 20,000 labeled cells (exhibiting malaria parasitic infection or not) across 965 blood smear images. Given these images, we performed a multi-step image segmentation procedure to isolate the red blood cells (RBCs): we first used U-Net to segment the blood smears into cell clusters, then used Faster R-CNN to segment the cell clusters into individual RBCs, and then incorporated thresholding techniques to refine the segmentation and smooth the edges. Once each RBC in every blood smear was individually segmented, we trained a CNN to classify whether these segmented images contained the malaria parasite. Mapping these now-labeled segmented images back to their parent blood smears allowed us to output modified blood smear images highlighting the RBCs containing a malaria parasite. ## Challenges we ran into We encountered challenges in building an effective RBC segmentation pipeline. Variations in the segmentation procedure greatly affected the classification performance of the CNN, which was somewhat surprising. The various segmentation methodologies we explored yielded segments that looked visually very similar to hand-drawn segmentations provided in the NIH dataset, and these hand-drawn segmentations were classified very well by CNN. We tried integrating various thresholding, grayscale manipulations, filtering, and flood-filling methodologies to integrate with the U-Net + R-CNN for RBC segmentation. In addition, we originally started with pre-trained models like ResNet-18 for classification. However, they tended to overfit the training data, so we opted for a simple, untrained one-layer CNN architecture, which worked the best. ## Accomplishments that we're proud of We are proud that we were able to build a comprehensive segmentation and classification pipeline and that we were able to integrate this into a full-stack web app with a front end and a back end. ## What we learned We learned many technical skills along the way, such as using Python’s OpenCV framework for image processing/manipulation, various image segmentation methodologies, and using flask to build out the web app. ## What's next for Plasmodium In the future, we hope to continue developing our app and refining our segmentation/classification methodologies to increase accuracy. Furthermore, we plan to expand our pipeline to other diseases, such as sickle cell anemia, to create a more comprehensive health diagnostic tool for the developing world.
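The Plasmodium write-up above mentions thresholding techniques used to refine the RBC segmentation and smooth cell edges. Here is a rough OpenCV sketch of that kind of refinement step; the use of Otsu's method, the kernel sizes, and the largest-contour heuristic are assumptions, not the project's exact parameters.

```python
# Rough sketch of an OpenCV thresholding/refinement step like the one
# described above; Otsu's method and the kernel sizes are assumptions.
import cv2
import numpy as np

def refine_rbc_mask(cell_crop_bgr):
    gray = cv2.cvtColor(cell_crop_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu picks the threshold automatically; cells are darker than background.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Opening removes speckle, closing fills small holes inside the cell.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the largest contour to smooth the RBC boundary.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    refined = np.zeros_like(mask)
    if contours:
        biggest = max(contours, key=cv2.contourArea)
        cv2.drawContours(refined, [biggest], -1, 255, thickness=cv2.FILLED)
    return refined
```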
## Inspiration When the first experimental COVID-19 vaccine became available in China, hundreds of people started queuing outside hospitals, waiting to get that vaccine. Imagine this on a planetary scale, when everybody around the world has to be vaccinated. There's a big chance that while queuing they can spread the virus to people around them, or get infected themselves, because they cannot perform social distancing at all. We sure don't want that to happen. The other big issue is that there are lots of conspiracy theories, rumors, stigma, and other forms of disinformation spreading across our social media about COVID-19 and its vaccines. This misinformation creates frustration for users, with many asking: which information is right, and which is wrong? ## What it does Immunize is a mobile app that can save your life and save your time. The goal is to make mass-vaccination distribution more effective, faster, and less crowded. With this app, you can book your vaccine appointment based on your own preference, so the user can easily choose the hospital based on the nearest location and easily schedule an appointment based on their availability. In addition, based on our research we found that most COVID-19 vaccines require 2 doses given 3 weeks apart to achieve high effectiveness, and there's a real chance that people will forget to return for a follow-up shot. We can minimize that probability: the app automatically schedules the patient for the 2nd vaccination so there is less likelihood of user error, and a reminder system (the notification feature) alerts them on their phone when they have an appointment that day. ## How we built it We built the prototype using Flutter as our client to support mobile. We integrated Radar.io for hospital search. For facial recognition we used GCP, and for SMS reminders we used Twilio. The mobile client connected to Firebase: Firebase Auth for authentication, Firebase Storage for avatars, and Firestore for user metadata storage. A second backend host used DataStax. ## Challenges we ran into Working with an international team was very challenging, with team members 12+ hours apart. All of us were learning something new, whether it was Flutter, facial recognition, or experimenting with new APIs. Flutter APIs were very experimental; the camera API had to be rolled back two major versions, which had been released in less than 2 months, to find a viable working version compatible with online tutorials. ## Accomplishments that we're proud of The features: 1. **QR Code Feature** for storing all personal data and health conditions, so users don't need to wait in a long administrative queue. 2. **Digital Registration Form** checking whether a user qualifies for a COVID-19 vaccine and which vaccine suits them best. 3. **Facial Recognition**: because people who are not eligible for vaccination might attempt to obtain limited vaccine supplies, we implemented facial recognition to confirm that the user who booked the appointment is the same one who showed up. 4. **Scheduling Feature** based on date, vaccine availability, and the nearby hospital. 5. **Appointment History** to track patient data; this data can be used to improve the efficiency of mass-vaccination in the future. 6. **Immunize Passport** proving vaccination and granting access to public spaces. This creates a domino effect, encouraging people to get vaccinated as soon as possible so that they can get access. 7. 
**Notification** to remind patients every time they have an appointment or there is important news, via SMS and push notifications. 8. **Vaccine Articles** - to ensure the user can get accurate information from a verified source. 9. **Emergency Button** - in case there are side effects after vaccination. 10. **Closest Hospitals/Pharmacies** - based on a user's location, users can get details about the closest hospitals through the Radar.io Search API. ## What we learned We researched and learned a lot about COVID-19 vaccines: some coronavirus vaccines may work better in certain populations than others. There may be one vaccine that seems to work better in the elderly than in younger populations, while another may work better in children than in the elderly. Research suggests the coronavirus vaccine will likely require 2 shots to be effective, taken 21 days apart for Pfizer's vaccine and 28 days apart for Moderna's. ## What's next for Immunize The final step is to propose this solution to our government. We really hope this app could be implemented in real life and be a solution for people to get the COVID-19 vaccine effectively, efficiently, and safely. We also plan to polish up our mobile app and build out an informational web app and a mobile app for hospital staff to scan QR codes and verify patient faces (currently staff have to use the same app as the client).
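The SMS reminders in feature 7 above run on Twilio. Below is a minimal sketch of what that reminder call might look like in Python; the credentials, phone numbers, and message wording are placeholders rather than the app's actual values.

```python
# Minimal sketch of the Twilio SMS reminder described above. Credentials,
# phone numbers, and the message wording are placeholders/assumptions.
from twilio.rest import Client

ACCOUNT_SID = "your_account_sid"   # placeholder
AUTH_TOKEN = "your_auth_token"     # placeholder
FROM_NUMBER = "+15551234567"       # placeholder Twilio number

def send_vaccine_reminder(to_number, hospital, when):
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=f"Immunize reminder: your vaccine appointment at {hospital} is on {when}.",
        from_=FROM_NUMBER,
        to=to_number,
    )
    return message.sid  # useful for logging delivery

# Example usage:
# send_vaccine_reminder("+15557654321", "General Hospital", "2021-02-01 10:00")
```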
losing
# Value Proposition How do you inspire action within your community? Petitions never really work, and there seems to be no motivating drive to incentivize attacking issues that are at the core of our society's problems. We decided to center our project around the United Nations' 17 sustainable global development initiatives, and allow our users to tackle a project that they are truly passionate about. As the world’s only truly universal global organization, the United Nations has become the foremost forum to address issues that transcend national boundaries and cannot be resolved by any one country acting alone. To its initial goals of safeguarding peace, protecting human rights, establishing the framework for international justice and promoting economic and social progress, in the seven decades since its creation the United Nations has added on new challenges, such as climate change, refugees and AIDS. While conflict resolution and peacekeeping continue to be among its most visible efforts, the UN, along with its specialized agencies, is also engaged in a wide array of activities to improve people’s lives around the world – from disaster relief, through education and advancement of women, to peaceful uses of atomic energy. StepWise addresses this issue by gamifying the experience of a web application that walks users through achieving their service projects, no matter the scale, on a step-by-step basis. Even though StepWise is accessible to users spanning from all demographics, our target audience for this platform is the younger generation (millennials, Gen Z) who feel frustrated by the state of things and are empowered to change the status quo. Our project integrates awareness, community, and gives students a platform to empower them to continue their goals. Together we can make a positive difference in world-- one step at a time. # Further Development In continuation with this project, we would like to integrate a point system to allow users to competitively track their goals compared to other members in the StepWise community to drive progress. Growth and further development in our project would mean initiating partnerships with corporate sponsors and nonprofits to fund the incentivization of these initiatives.
## Inspiration Pranit and Sashank are a part of Boy Scouts, and we both spend a lot of time helping our community and giving back. We have had personal experiences with the organizations we aim to help. Krish is also a local advocate for sustainability; he regularly looks for opportunities to help. We saw a need, thought of a cool way to collect and process data, and wanted to build it out. Who wouldn't want to earn money while feeling like they're playing a game? ## What it does Datability is an app and web platform that gamifies and incentivizes crowdsourcing data. The problem we are trying to solve is simple: sustainable organizations struggle to receive the data they need because it is either scattered, expensive, or extremely difficult to access. Without this data, organizations can't take action where their efforts make the most difference. Datability works by allowing organizations to request data, and we crowdsource data from users by gamifying and incentivizing the process. Organizations submit data requests with information on what data they want to capture and from where. Next, users in the geofence of the challenge are eligible to participate and can upload data they collect. In turn they get points and real money. At the end of it, organizations get actionable data and users get to compete with each other and make real money, all while helping the environment. Our business model is simple too: the top 3 contributing users of a challenge get 35% of the pot, and the rest is proportionally distributed to all contributors using this formula. We take a small cut from the payouts offered by organizers, and we want to give 50% of our profits back to sustainable efforts. ## How we built it We built 2 platforms, one geared toward organizations and the other toward the public: an iOS app for users and a web app for non-profits and businesses. Our tech stack included Swift, SwiftUI, Firebase, Google Cloud APIs, plant identification, Hugging Face ML models, NLP, JavaScript, CSS, HTML, Bootstrap, ApexCharts, the Google Maps API, and Apple Maps. On the organization side, we streamlined the onboarding process and added a smooth Stripe setup. Using multithreading we can get immediate API responses and dynamic updating on the screen. We allow organizations to enable geofencing and location boundaries. On top of that we provide advanced analytics with Google Maps APIs as well as data aggregation and collection. We also have the cool feature of exporting all the data into JSON format. On the consumer side, we were able to create an easy-to-use application that allowed easy interaction with and understanding of all of its features. We were able to create NLP-based descriptions for each of the plants to make sure everyone learns something about sustainability. All of the user data, images, and coordinates were uploaded to our database to make sure that the web app could easily interact with that data too. ## Challenges we ran into This was our 4th hackathon, but our first time competing on a college campus. We went through multiple phases of ideation and several technical challenges. For example, when we were first discussing how to implement a PSP (payment service provider) into our platform, we exhaustively went through all the potential options before landing back on Stripe as the optimal solution for us. Though these discussions could seem tedious at times, they taught us a great deal about why planning is incredibly important, especially in computer science. 
The lack of sleep is also always a challenge :) ## Accomplishments that we're proud of We have a working product that with a few tweaks would be beta launch ready. That means we wrote a whole lot of code this weekend and are happy to see the final product. We’re proud of making a full stack iOS & Web application that works with various parts of Machine Learning and other APIs to deliver an impactful yet clean user interface. In addition, we are proud that the iOS and the web app seamlessly work together with no difficulties to make the overall platform better. ## What we learned We learned a whole lot. From several technical challenges and countless debugging hours, we learned the ins and outs of Swift/Swift UI, integrating stripe into web payments and mobile payouts, Google Maps API… ## What's next for Datability Over the weekend we loved the process of building and ideating over Datability. We want to continue so we are going to start a Beta Launch in just 2 weeks. Yes that’s right we are doing a beta launch in two weeks in the Bay Area community. We want to continue iterating on our tech and get it ready for scale. We look forward to what’s next.
## Inspiration It's an infamous phenomena that riddles modern society - camera galleries filled with near-similar group and self portraits. With burst shutter shots and the general tendency to take multiple photos of a gathered group out of fear that one image may be cursed with a blink, misdirected gaze, or perhaps even an ill-conceived countenance, our team saw a potential tool to save people some time, and offer new ways of thinking about using their camera. ## What it does This app can either take a series of image urls or a Facebook album's id, and parse the images with Azure's Face Cognitive service to determine the strength of the smile and general photo quality. The app then returns the same series of images, sorted from "best" to "least," in accordance to Microsoft's algorithms regarding blurriness, happiness, and size of smile. ## How we built it We built the app on a NodeJS server and immediately began working on learning about how to prepare data for the Azure cognitive surfaces. This web server runs express to quickly deploy the app, and we used Postman repeatedly to troubleshoot API calls. Then, we hosted the web server on Google's cloud platform to deploy the dynamic site, and with that site we used Facebook's graph API to collect user images upon entering an album ID. The front end itself takes its design from Materialize. ## Challenges we ran into One of the main sources of troubleshooting was working with very particular image urls. For Azure's cognitive services to take the image files, they must be urls of images already hosted on the internet. We spent a while thinking about how to overcome this, as Google Photos images were not reliably returning data from the Azure service, so instead we used Facebook albums. Additionally, we never really got to figure out which features are best correlated with picture quality, and instead arbitrarily chose blurriness and happiness as a stand-in for picture quality. ## Accomplishments that we're proud of Getting the album to display user information was amazing, and connecting our pipes between our server infrastructure and Microsoft's cognitive service was extremely awarding. We were also proud of being allowed to bulk compare photos with Facebook's API. ## What we learned How to handle tricky AJAX calls, and send more tricky header calls to retain information. We also learned about the variety of web hosting platforms in the area, and took our first foray into the world of Computer Vision! ## What's next for FotoFinder Integration with Google Photos, customized ML models for image quality, and an open source tool for a project like this so other companies can simply use the idea with a public API.
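The FotoFinder entry above sends hosted image URLs to Azure's Face service and sorts photos by smile strength and blurriness. Here is a hedged sketch of that scoring using the Face REST detect endpoint; the endpoint, key, and the simple smile-minus-blur score are placeholders and assumptions standing in for the project's actual weighting.

```python
# Sketch of scoring photos with the Azure Face "detect" REST endpoint, as
# described above. Endpoint and key are placeholders; the scoring formula
# (smile minus blur) is an assumption, not the project's exact weighting.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"                                                # placeholder

def score_photo(image_url):
    resp = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={"returnFaceAttributes": "smile,blur"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
        json={"url": image_url},
    )
    faces = resp.json()
    if not faces:
        return 0.0
    # Average smile across detected faces, penalised by the blur estimate.
    smile = sum(f["faceAttributes"]["smile"] for f in faces) / len(faces)
    blur = faces[0]["faceAttributes"]["blur"]["value"]
    return smile - blur

def sort_album(urls):
    # "Best" photos first, as in the app's sorted gallery.
    return sorted(urls, key=score_photo, reverse=True)
```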
partial
## Inspiration * Students who have ADHD * Traditional web content can overwhelm them with dense text and distracting visuals * We wanted to create a tool that simplifies reading for students with ADHD by allowing them to control their reading experience ## What it does * Customizable text options like font size, line spacing, and font family to enhance readability. * A summarization tool powered by GPT, which turns lengthy paragraphs into concise bullet points for easier comprehension. * A toggle feature for distracting images or advertisements, allowing users to concentrate solely on the text. ## How we built it * We built Focus using HTML, CSS, JavaScript, and the GPT API for summarization * Chrome extension interacts with web pages using content scripts to adjust the appearance of text and provide a more ADHD-friendly reading experience * We used the GPT API to implement a summarization feature that converts complex paragraphs into digestible bullet points, helping users focus on the most important information. * Uses Chrome's scripting API to apply font and layout changes on the fly, and all controls for customization and summarization are conveniently placed in a user-friendly popup interface. ## Challenges we ran into * Ensuring that the extension applied the reading adjustments seamlessly across different websites, each with varying structures and styles * We also encountered some complexity when integrating the GPT API for real-time summarization while maintaining a responsive and smooth user experience * Optimizing the extension for students with ADHD required careful attention to detail in terms of user interface and accessibility to make sure it was distraction-free yet feature-rich. ## Accomplishments that we're proud of * Successfully integrating the GPT API to deliver real-time bullet-point summaries of paragraphs, which is a valuable tool for students with ADHD who need quick and clear information * Overall simplicity and ease-of-use of the extension, which provides users with powerful tools to customize their reading experience in a few simple clicks. ## What we learned * Importance of user-centered design, especially when creating tools for people with ADHD * Reinforced the need to simplify complex information and to offer a range of customization options so users can tailor their experience based on their individual needs * Gained experience working with APIs, content scripts, and Chrome extension development, learning how to interact with live web content, and understanding the technical and user-experience challenges of building accessibility tools ## What's next for Focus * Eye-tracking or behavior-based focus detection to automatically adjust content based on where users are focusing. * Multi-language support for summarization, catering to a broader audience of students worldwide. * Content filtering for removing distracting or irrelevant text elements. * Improved AI summarization that can extract key takeaways or specific types of information like definitions or explanations, based on user preferences.
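The Focus extension itself is JavaScript, but as a language-neutral illustration of the GPT-powered bullet-point summarization described above, here is a short Python sketch against the OpenAI chat completions API. The model name and prompt wording are assumptions, not the extension's actual configuration.

```python
# Illustration (in Python rather than the extension's JavaScript) of the kind
# of GPT summarization call described above. Model name and prompt are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_to_bullets(paragraph, max_bullets=5):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text as at most {max_bullets} short, "
                        "ADHD-friendly bullet points. Keep only the key facts."},
            {"role": "user", "content": paragraph},
        ],
    )
    return response.choices[0].message.content
```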
## Inspiration As a rower, I've found that my performance on the erg greatly fluctuates with the music being played. Whenever I'm in sync with the tempo at a given stroke rating, my erg scores are fantastic; out of sync, not so much. Striving to consistently perform my best during erg practices, I figured that developing a program that plays stroke-rate specific music would not only enable me to score better, but make workouts more fun. We built ergDJ to do this; based on the specific stroke rating for a workout, the athlete is able to play specially formulated playlists with songs at a tempo to match the pace of their workout. Winter training is often a psychologically challenging time; we're operating much more on an individual-basis, we're not able to get out on the water, and it's us vs. the erg. We often dread practices; having music as a tool to face this often-perceived challenge is critical to not only get faster and stronger in the winter, but better ourselves as rowers. Our hope is that ergDJ not acts as a training tool, but helps make the time spent on the erg more bearable. ## What it does ## How I built it I used CSS and HTML to build the front end of this webapp with some help from bootstrap. Akshat, worked with Spotify API and angular.js to link the API to the website. ## Challenges I ran into Neither of us knew JavaScript and I didn't know CSS or HTML. We weren't able to get GitHub to integrate our work, so we had to merge our code manually which was cumbersome. ## Accomplishments that I'm proud of I finally did my first hackathon! I also learned two languages! And I learned a lot more about entrepreneurship! Akshat is really proud that he learned javascript and angularjs! ## What I learned I learned CSS and HTML to do the front end work of this project! I also learned a bit about Machine Learning. ## What's next for ergDJ Instead of just having a single stroke rating per workout, being able to shift between playlists/stroke ratings on timed intervals and distances.
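The ergDJ entry above builds playlists whose tempo matches a target stroke rating. As an illustration only (the project used the Spotify API from Angular), here is a Python sketch with the Spotipy client that keeps tracks whose tempo sits near a target BPM; the stroke-rate-to-BPM factor of 4 and the tolerance are assumptions.

```python
# Sketch of tempo matching with the Spotipy client (the project itself used
# the Spotify Web API from Angular). The 4x stroke-rate-to-BPM mapping and
# the tolerance are illustrative assumptions.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Reads SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET from the environment.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def tracks_for_stroke_rate(track_ids, stroke_rate, tolerance=5):
    target_bpm = stroke_rate * 4  # e.g. 26 strokes/min -> ~104 BPM (assumed)
    features = sp.audio_features(track_ids)
    return [f["id"] for f in features
            if f and abs(f["tempo"] - target_bpm) <= tolerance]
```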
## Inspiration snore or get pourd on yo pores Coming into grade 12, the decision to go to a hackathon at this time was super ambitious. We knew coming to this hackathon we needed to be fully focused 24/7. The problem is, we both procrastinate and push things to the last minute, so we created a project to help us. ## What it does It's a 3-stage project with 3 phases to get our attention. In the first stage we use a voice command and text message to get our own attention. If I'm still distracted, we move into stage two, where it sends a more serious voice command and then a phone call to my phone, since I'm probably on my phone. If I decide to ignore the phone call, the project gets serious and commences the final stage, where we bring out the big guns. When you ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim, which is myself. If I try to resist and run away, the water gun automatically tracks me and shoots me wherever I go. ## How we built it We built it using fully recyclable materials; as the future innovators of tomorrow, our number one priority is the environment. We made our foundation entirely out of scrap cardboard, chopsticks, and hot glue. The turret was built using the hardware kit we brought from home, with 3 servos mounted on stilts to hold the water gun in the air. On the software side, we hacked a MindFlex to read brainwaves and activate the water gun trigger. We used a string mechanism to activate the trigger and OpenCV to track the user's face. ## Challenges we ran into One challenge we ran into was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass; we came up with many different solutions, but none of them were efficient. The data was delayed from reading and writing back and forth, and the camera display speed slowed down because of that, making the tracking worse. We eventually pushed through and figured out a solution. ## Accomplishments that we're proud of One accomplishment we are proud of is our engineering ability to create a turret using spare scraps. Combining the Arduino and MindFlex was something we'd never done before, and making it work was such a great feeling. Using Twilio to send messages and calls was also new to us, but getting familiar with its capabilities opened a new door of opportunities for future projects. ## What we learned We learned many things from using Twilio and hacking the MindFlex, and we learned a lot more about electronics, circuitry, and procrastination along the way. After creating this project, we learned discipline and never missed a deadline again. ## What's next for You snooze you lose. We dont lose Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap due to the lack of materials, including a life-size human robot, although we settled on an automatic water gun turret controlled through brain signals. We want to expand on this project using brain signals, as this was our first hackathon trying them out.
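The turret above tracks a face with OpenCV and steers servos on an Arduino. A rough sketch of that tracking loop follows; the Haar cascade, the pan-angle mapping, and the serial message format sent to the Arduino are all illustrative assumptions rather than the team's actual code.

```python
# Rough sketch of the OpenCV face-tracking loop described above. The Haar
# cascade, the angle mapping, and the serial protocol are assumptions.
import cv2
import serial

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
arduino = serial.Serial("/dev/ttyUSB0", 9600)  # placeholder port
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Map the face centre's horizontal position to a 0-180 servo angle.
        centre = x + w / 2
        angle = int(centre / frame.shape[1] * 180)
        arduino.write(f"{angle}\n".encode())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```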
losing
## Inspiration VibeCare was inspired by our team's common desire for a sense of health, well-being, and general good feelings among young people today - Vibes! ## What it does VibeCare provides a unique twist on a health app by incentivizing users to log their habits through the implementation of game design mechanics. Logging your habits rewards you with CareCoins, which you can use to buy Vibes, small pets that keep you company and support the progress of your healthy lifestyle! ## How we built it We built VibeCare using Processing, a Java-based IDE that provides tools for developing graphics and visual structure. Our development process started off with brainstorming ideas until we decided we wanted to focus on a health hack that could improve people's well-being. From there, we decided on Processing as our development platform based on our team's strengths and experience, and planned out the structure of our project through storyboarding and lots of whiteboard drawings! ## Challenges we ran into One of the main challenges that we ran into from the start of the project was learning how to effectively use Processing to implement the ideas in our project, as most of our team was not experienced in using the software. Another big challenge keeping the project focused on the original goal of health and well-being, trying to prevent scope creep off of our main idea. ## Accomplishments that we're proud of We are all extremely proud that we were able to design an application that incorporates both a strong focus on improving health and well being, as well as a strong aesthetic and value on audience appeal. ## What we learned Throughout the development of VibeCare, we learned a lot about aspects of product design, teamwork, and how to work through a large project through the delegation of various tasks. ## What's next for VibeCare We designed VibeCare with scalability and expansion in mind. In the future, VibeCare could expand to different platforms such as web and iOS to make the app available to as many people as possible. Other ways of interacting with the app, like voice input, and personalized features, such as colour-blind accommodation, could contribute to increased accessibility. VibeCare has great potential for content expansion as well, such as tracking a user's health statistics, and implementing a tracker for environmental impact.
## Inspiration --- Through extensive research into health management applications and a comprehensive competitor analysis, it became evident that these applications predominantly emphasize physical health and are often overly generalized, lacking broad applicability. Subsequent discussions between our development team and colleagues underscored the vulnerability of developers to an unhealthy work environment. In light of these findings, we have undertaken the initiative to introduce a service that is holistic in its approach, encompassing three key dimensions of health: physical, mental, and overall well-being. Our objective is to offer developers constructive guidance and recommendations to enhance their health and well-being. This initiative represents a tailored and enhanced iteration of wearable health applications, akin to industry-leading solutions such as Apple Health and Google Fit. ## What it does --- This application leverages data from the Terra API to furnish users with an accurate assessment of their current physical, mental, and overall well-being status. Moreover, it offers actionable recommendations designed to be seamlessly integrated into developers' everyday lives, with the overarching aim of inspiring and motivating users to adopt and sustain a healthier lifestyle. ## How we built it --- Node.js with the express library was used to build the Backend; React, Figma, HTML, and CSS were used to build the front end and design UIUX. While MongoDB had not been completely integrated into the code at this stage, the application took a proactive approach by segregating data collection into two distinct components: one housing the user's raw data and the other storing the analyzed final status. This bifurcation was a deliberate measure aimed at safeguarding user privacy while still furnishing managers with the requisite information for effective team management. ## Challenges we ran into --- Within a constrained timeframe, the necessity to complete a product development project posed a significant challenge for our team. This was further exacerbated by the need to utilize tools and technologies with which we had limited prior experience, thereby compounding the time constraint issue. ## Accomplishments that we're proud of --- What we take the utmost pride in is the exceptional design of the service, a testament to the remarkable work of Atmiya and Maxwell. Equally, the project's value is derived from the successful utilization of previously unexplored skills for Min and Omar, underscoring our ability to tackle new challenges and achieve the project's completion. ## What we learned --- Through the hack Harvard, we could learn about the following -the power of peer mentorship -how teamwork can boost productivity -the way to understand new skills in a short time period ## What's next for DevHealth --- Regrettably, due to the constraints of time, we were unable to realize all the desired functions we had planned for. Nevertheless, we remain hopeful for the opportunity to further develop this service. Our envisioned future enhancements include: Building a web application tailored for team managers to streamline their interaction with the service. Creating an iOS application to enhance accessibility and provide timely notifications to users. Developing a watchOS application to gather a broader spectrum of data not typically collected by mainstream wearables like the Apple Watch and Fitbit, among others.
## Inspiration Amidst our hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought inspired this web-based app, which provides activities customised to a person's mood to help them relax and rejuvenate. ## What it does We planned to create a platform that could detect a user's mood through facial recognition, recommend yoga poses to lift the mood and evaluate their correctness, and help users jot their thoughts in a self-care journal. ## How we built it Frontend: HTML5, CSS (framework: Tailwind CSS), JavaScript. Backend: Python, JavaScript. Server side: Node.js, Passport.js. Database: MongoDB (for user login), MySQL (for mood-based music recommendations). ## Challenges we ran into Incorporating OpenCV into our project was a challenge, but it was very rewarding once it all worked. But since all of us were first-time hackers and due to time constraints, we couldn't deploy our website externally. ## Accomplishments that we're proud of Mental health issues are among the least addressed diseases even though medically they rank among the top 5 chronic health conditions. We at Umang are proud to have taken notice of this issue and to help people recognise their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective as well as push them towards a more sound mind and body. We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that this website includes so many features to help improve the user's mental health, like letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture from home with an AI instructor. Furthermore, completing this within 24 hours was an achievement in itself, since it was our first hackathon, which was very fun and challenging. ## What we learned We learnt how to implement OpenCV in projects. Another skill set we gained was how to use Tailwind CSS. Besides, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists. ## What's next for Umang While the core functionality of our app is complete, it can of course be further improved. 1) We would like to add a chatbot which can be the user's guide/best friend and give advice when the user is in mental distress. 2) We would also like to add a mood log which keeps track of the user's daily mood; if a serious decline in mental health is seen, it can directly connect the user to medical helpers or therapists for proper treatment. This lays the grounds for further expansion of our website. Our spirits are up and the sky is our limit.
losing
## 💭 Inspiration Throughout our Zoom university journey, our team noticed that we often forget to unmute our mics when we talk, or forget to mute it when we don't want others to listen in. To combat this problem, we created speakCV, a desktop client that automatically mutes and unmutes your mic for you using computer vision to understand when you are talking. ## 💻 What it does speakCV automatically unmutes a user when they are about to speak and mutes them when they have not spoken for a while. The user does not have to interact with the mute/unmute button, creating a more natural and fluid experience. ## 🔧 How we built it The application was written in Python: scipy and dlib for the machine learning, pyvirtualcam to access live Zoom video, and Tkinter for the GUI. OBS was used to provide the program access to a live Zoom call through virtual video, and the webpage for the application was built using Bootstrap. ## ⚙️ Challenges we ran into A large challenge we ran into was fine tuning the mouth aspect ratio threshold for the model, which determined the model's sensitivity for mouth shape recognition. A low aspect ratio made the application unable to detect when a person started speaking, while a high aspect ratio caused the application to become too sensitive to small movements. We were able to find an acceptable value through trial and error. Another problem we encountered was lag, as the application was unable to handle both the Tkinter event loop and the mouth shape analysis at the same time. We were able to remove the lag by isolating each process into separate threads. ## ⭐️ Accomplishments that we're proud of We were proud to solve a problem involving a technology we use frequently in our daily lives. Coming up with a problem and finding a way to solve it was rewarding as well, especially integrating the different machine learning models, virtual video, and application together. ## 🧠 What we learned * How to setup and use virtual environments in Anaconda to ensure the program can run locally without issues. * Working with virtual video/audio to access the streams from our own program. * GUI creation for Python applications with Tkinter. ## ❤️ What's next for speakCV. * Improve the precision of the shape recognition model, by further adjusting the mouth aspect ratio or by tweaking the contour spots used in the algorithm for determining a user's mouth shape. * Moving the application to the Zoom app marketplace by making the application with the Zoom SDK, which requires migrating the application to C++. * Another option is to use the Zoom API and move the application onto the web.
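speakCV's key signal, described above, is a mouth aspect ratio computed from facial landmarks and compared against a tuned threshold. Here is a sketch of one common MAR formulation over dlib's 68-point landmarks; the exact formula and the 0.3 threshold are assumptions, since the project tuned its own value by trial and error.

```python
# Sketch of a mouth-aspect-ratio check like the one described above, using
# dlib's 68-point landmarks (inner-mouth points 60-67). The formulation and
# the 0.3 threshold are assumptions; the project tuned its own value.
from scipy.spatial import distance

def mouth_aspect_ratio(landmarks):
    # landmarks: list of (x, y) tuples for the 68 dlib facial landmarks
    mouth = landmarks[60:68]
    vertical = (distance.euclidean(mouth[1], mouth[7]) +
                distance.euclidean(mouth[3], mouth[5]))
    horizontal = distance.euclidean(mouth[0], mouth[4])
    return vertical / (2.0 * horizontal)

def is_speaking(landmarks, threshold=0.3):
    # Above the threshold the mouth is open wide enough to count as speech,
    # so the client would trigger an unmute.
    return mouth_aspect_ratio(landmarks) > threshold
```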
## Inspiration Badminton boosts your overall health and offers mental health benefits. Doing sports makes you [happier](https://www.webmd.com/fitness-exercise/features/runners-high-is-it-for-real#1) and less stressed. Badminton is the fastest racket sport: the highest speed reached by any piece of sports equipment accelerated by a human strike belongs to the badminton shuttlecock. Badminton is the second most popular sport in the world after football, and it is an intense sport, one of the three most physically demanding team sports. Over the length of a game, a badminton player will "run" up to 10 kilometres horizontally and a kilometre vertically. Benefits of playing badminton: 1. Strengthens heart health. Badminton is useful in that it increases the level of "good" cholesterol and reduces the level of "bad" cholesterol. 2. Reduces weight. 3. Improves reaction speed. 4. Increases muscle endurance and strength. 5. Develops flexibility. 6. Reduces the risk of developing diabetes. Active people are 30-50% less likely to develop type 2 diabetes, according to a 2005 Swedish study. 7. Strengthens bones. Badminton potentially reduces subsequent bone loss and helps prevent the development of various diseases. In any case, moderate play will help develop joint mobility and strengthen the joints. ![](https://i.imgur.com/Fre5CiD.png) However, statistics show that increased screen time leads to obesity, sleep problems, chronic neck and back problems, depression, anxiety, and lower test scores in children. ![](https://www.nami.org/NAMI/media/NAMI-Media/Infographics/NAMI_MentalHealthCareMatters_2020_th-734.png) With the decentralized storage provider IPFS and blockchain technology, we created a decentralized platform for you to learn about playing badminton. We all know that sports are great for your physical health; badminton also has many psychological benefits. ## What it does The Web Badminton DApp introduces users to the sport of badminton and contains an item store to track and ledger the delivery of badminton equipment. Each real equipment item is ledgered via a digital one, with smart contract logic in place to gauge demand and track items. When delivery is completed, the DApp's ERC1155 NFTs are exchanged for the physical items. A great win for the producers is saving on costs with improved inventory tracking and demand management. Web Badminton DApp succeeds where off-chain software ledgering products fail, because those may go out of service, need updates, or crash with data losses. Web Badminton DApp is a very low-cost business systems management tool. While competing software-based ledgering products carry monthly and/or annual base fees, the only new costs accrued by a business using the DApp are for new contract deployments. A new contract is only needed for a new batch of items every few months, based on demand and delivery schedule. In addition, we created a decentralized newsletter subscription list that we connected to web3.storage. ## How we built it We built the application using JavaScript, Next.js, React, Tailwind CSS, and the wagmi library to connect to the MetaMask wallet. The application is hosted on Vercel. The newsletter list data is stored on IPFS with web3.storage. The contract is built with Solidity and Hardhat. The Polygon Mumbai testnet and the LUKSO L14 network host the smart contract. Meanwhile, the item data on IPFS is stored using nft.storage.
## Inspiration At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste. Waste of food contributes to the acceleration of global warming as more produce is needed to maintain the same levels of demand. In a startling report from the Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-n-Out, that goes to around 400 burgers. At reFresh, we believe that this level of waste is unacceptable in our modern society, imagine every family in America throwing away 400 perfectly fine burgers. Therefore we hope that our product can help reduce food waste and help the environment. ## What It Does reFresh offers users the ability to input ingredients they have lying around and to find the corresponding recipes that use those ingredients making sure nothing goes to waste! Then, from the ingredients left over of a recipe that we suggested to you, more recipes utilizing those same ingredients are then suggested to you so you get the most usage possible. Users have the ability to build weekly meal plans from our recipes and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of an ingredient you need and the cost of those ingredients. ## How We Built It To make our idea come to life, we utilized the Flask framework to create our web application that allows users to use our application easily and smoothly. In addition, we utilized a Walmart Store API to retrieve various ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as ingredients needed. All the data is then backed by SQLAlchemy to store ingredient, recipe, and meal data. ## Challenges We Ran Into Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times that we implemented things that could have been better done if we had better communication. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe. However, these challenges definitely taught us a lot of things, ranging from a better understanding to programming languages, to learning how to work and communicate better in a team. ## Accomplishments That We're Proud Of Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy to look at splash page complete with an infographic, and being able to manipulate two different APIs to feed of off each other and provide users with a new experience. ## What We Learned This was all of our first hackathon, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. 
Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Bootstrap and Flask. We also definitely learned the value of concise communication. ## What's Next for reFresh There are a number of features that we would like to implement going forward. Possible avenues of improvement would include: * User accounts to allow ingredients and plans to be saved and shared * Improvement in our search to fetch more mainstream and relevant recipes * Simplification of the ingredient selection page by combining ingredients and meals in one centralized page
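The reFresh entry above describes fetching recipes that match on-hand ingredients through the Spoonacular API. Here is a small sketch of that lookup using Spoonacular's public findByIngredients endpoint; the API key is a placeholder and only a couple of response fields are shown.

```python
# Sketch of the Spoonacular ingredient-based recipe lookup described in the
# reFresh entry above. The API key is a placeholder; only the fields needed
# here are read from the response.
import requests

API_KEY = "your-spoonacular-key"  # placeholder

def recipes_from_leftovers(ingredients, limit=5):
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),
            "number": limit,
            "ranking": 2,        # 2 = minimise missing ingredients first
            "apiKey": API_KEY,
        },
    )
    resp.raise_for_status()
    return [(r["title"], r["missedIngredientCount"]) for r in resp.json()]

# Example: recipes_from_leftovers(["eggs", "spinach", "cheddar"])
```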
winning
## Inspiration We can make customer support so, so much better, for both customers and organizations alike. This project was inspired by the frankly terrible wait times and customer support information that the DriveTest centres across Ontario have. ## What it does From a high-level, our platform integrates analytics of both in-person and online customer support lines (using computer vision and Genesys's API, respectively), and uses those to provide customers real-time data for which customer support channel to utilize at a given time. It also provides organizations with analytics and metrics to be able to optimize their customer support pipelines. Speaking about the internals, our platform utilizes computer vision to determine the number of people in a line at any given moment, then uses AI to calculate the approximate waiting time for that line. This usecase is meant for in-person customer support interactions. Our platform also uses Genesys's Developer API (EstimateWaitTime) to calculate, for any given organization and queue, the wait time and backlog of customer support cases. It then combines these two forms of customer support, allowing customers to make informed decisions as to where to go for customer support, and giving organizations robust analytics for which customer support channels can be further optimized (such as hiring more people to serve as chat agents). ## How we built it OpenCV along with a custom algorithm for people counting within a certain bounding area was used for the Computer Vision aspect to determine the number of people in a line, in-person. This data is sent to a Flask server. We also used Genesys's API along with simulated Genesys customer-agent interactions to determine how long the wait time is for online customer support. From the Flask server, this data goes to two different front-ends: 1. For customers: customers simply see a dashboard with the wait time for online customer support, as well as the wait times at nearby branches of the company (say, Ontario DriveTest centres) – created using Bulma and Vanilla JS 2. For organizations: organizations see robust analytics regarding wait times at certain intervals, certain points in the day, etc. They can also compare and contrast online and in-person customer support analytics. Organizations can use these metrics to optimize customer support to reduce the load on certain employees, and by making customer support more efficient for customers. ## Challenges we ran into Working with many services (Computer Vision + Python, Flask backend, Vanilla JS frontend, Vue.js frontend) was a challenge, since we had to find a way to pass the data from one service to another, reliably. We decided to fix this by using a key-value store for redundancy to ensure data is not lost through numerous layers of transmission. ## Accomplishments that we're proud of Creating a working product using Genesys's API! ## What we learned The opportunity that lies within the field of customer support and unifying both online and in-person components of it. Also, the opportunities that Genesys's API holds in terms of empowering organizations to make their customer support as efficient as possible. ## What's next for QuicQ We wanted to use infrared sensors instead of cameras to detect people in a line in-person, due to privacy concerns, but we couldn't find infrared sensors for this hackathon! So, we will integrate them in a future version of QuicQ.
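QuicQ's in-person side, described above, counts people standing in a defined line area from a camera feed. The project used its own custom algorithm, so the following is only an approximation using OpenCV's stock HOG person detector, with the queue's bounding area chosen arbitrarily for illustration.

```python
# Approximation of the in-person line counting described above, using
# OpenCV's built-in HOG person detector. The queue bounding area and the
# detector choice are assumptions; the project used a custom algorithm.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

QUEUE_AREA = (100, 50, 500, 400)  # x1, y1, x2, y2 of the line region (assumed)

def people_in_line(frame):
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    x1, y1, x2, y2 = QUEUE_AREA
    count = 0
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2, y + h / 2
        # Count only detections whose centre falls inside the queue region.
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            count += 1
    return count
```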
# Ether on a Stick Max Fang and Philip Hayes ## What it does Ether on a Stick is a platform that allows participants to contribute economic incentives for the completion of arbitrary tasks. More specifically, it allows participants to pool their money into a smart contract that will pay out to a specified target if and only if a threshold percentage of contributors to the pool, weighted by contribution amount, votes that the specified target has indeed carried out an action by a specified due date. Example: A company pollutes a river, negatively affecting everyone nearby. Residents would like the river to be cleaned up, and are willing to pay for it, but only if the river is cleaned up. Solution: Residents use Ether on a Stick to pool their funds together that will pay out to the company if and only if a specified proportion of contributors to the pool vote that the company has indeed cleaned up the river. ## How we built it Ether on a Stick implements with code a game theoretical mechanism called a Dominant Assurance Contract that coordinates the voluntary creation of public goods in the face of the free rider problem. It is a decentralized app (or "dapp") built on the Ethereum network, implementing a "smart contract" in Serpent, Ethereum's Python-like contract language. Its decentralized and trustless nature enables the creation of agreements without a 3rd party escrow who can be influenced or corrupted to determine the wrong user. ## Challenges The first 20 hours of the hackathon were mostly spent setting up and learning how to use the Ethereum client and interact with the network. A significant portion was also spent planning the exact specifications of the contract and deciding what mechanisms would make the network most resistant to attack. Despite the lack of any kind of API reference, writing the contract itself was easier, but deploying it to Ethereum testnet was another challenge, as large swaths of the underlying technology hasn't been built yet. ## What's next for Ether on a Stick We'd like to take a step much closer to a game-theoretically sound system (don't quote us, we haven't written a paper on it) by implementing a sort of token-based reputation system, similar to that of Augur. In this system, a small portion of pooled funds are set aside to be rewarded to reputation token bearing oracles that correctly vote on outcomes of events. "Correctly voting" means voting with the majority of the other randomly selected oracles for a given event. We would also have to restrict events to only those which are easily and publically verifiable; however, by decoupling voting from contribution, this bypasses a Sybil attack wherein malicious actors (or the contract-specified target of the funds) can use a large amount of financial capital to sway the vote in their favor.
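The actual contract above is written in Serpent on Ethereum, but the payout rule it implements (pay the target only if the contribution-weighted "yes" vote clears the threshold by the due date, otherwise refund everyone) can be illustrated in plain Python. This is a simplified sketch; refund handling and the exact threshold are assumptions.

```python
# Plain-Python illustration of the payout rule described above (the real
# contract is written in Serpent). Refunds and the 50% threshold are
# simplified assumptions.
def settle(contributions, votes_yes, threshold=0.5):
    """contributions: {address: amount}; votes_yes: set of addresses that
    voted that the target completed the task by the due date."""
    total = sum(contributions.values())
    weight_yes = sum(amount for addr, amount in contributions.items()
                     if addr in votes_yes)
    if total and weight_yes / total >= threshold:
        # Threshold met: the whole pool pays out to the specified target.
        return {"pay_target": total, "refunds": {}}
    # Otherwise every contributor gets their contribution back.
    return {"pay_target": 0, "refunds": dict(contributions)}

print(settle({"alice": 3, "bob": 1}, votes_yes={"alice"}))
# alice holds 75% of the contribution weight, so the pool pays the target.
```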
## Inspiration We help businesses use Machine Learning to take control of their brand by giving them instant access to sentiment analysis for reviews. ## What it does Reviews are better when they are heard. We scrape data from Yelp, run the reviews through our ML model, and allow users to find and access these processed reviews in a user-friendly way. ## How we built it For the back-end, we used Flask, a Celery worker, and Docker, with TensorFlow for our machine learning model. For the front-end, we used React, Bootstrap, and CSS. We scraped the Yelp data and loaded it into a local MongoDB server. We perform periodic Celery tasks to process the scraped data in the background and save the sentiment analysis in the database. Our TensorFlow model is deployed on the GCP AI Platform, and our backend calls the specified model version. ## Challenges we ran into * Learning new technologies on the fly during the day of the hackathon. Also, communication barriers and deploying the machine learning model * Training, building, and deploying a machine learning model in a short time * Scraping reviews in mass amounts and loading them into the db * Frontend took a while to make ## Accomplishments that we're proud of * Getting a working prototype of our product and learning a few things along the way * Deploying a machine learning model to GCP and using it * Setting up async workers in the backend * Performing sentiment analysis for over 8.6 million reviews across almost 160,000 businesses ## What we learned * Deploying ML models * Performing async tasks on the backend side ## What's next for Sentimentality * Provide helpful feedback and insights for businesses (actionable recommendations!). * Perform more in-depth and complex sentiment analysis, and add the ability to recognize competitors. * Allow users to mark wrong sentiments (and correct them). Our models aren't perfect; we have room to grow too! * Scrape more platforms (Twitter, Instagram, and other sources, etc.) * Allow users to write a review and receive sentiment analysis from our machine learning model as feedback * Allow filtering businesses by location and/or city
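Sentimentality's background processing, described above, uses periodic Celery tasks to analyze scraped reviews and save the results. A minimal sketch of such a task follows; the broker URL, Mongo collection names, and the placeholder classify() call standing in for the deployed TensorFlow model are all assumptions.

```python
# Minimal sketch of the background Celery task described above. The broker
# URL, Mongo collection names, and the classify() stub (standing in for the
# model hosted on GCP AI Platform) are assumptions.
from celery import Celery
from pymongo import MongoClient

app = Celery("sentimentality", broker="redis://localhost:6379/0")  # assumed broker
db = MongoClient("mongodb://localhost:27017")["sentimentality"]    # assumed URI

def classify(text):
    # Placeholder for the call to the deployed TensorFlow model.
    return {"label": "positive", "score": 0.91}

@app.task
def analyze_new_reviews(batch_size=100):
    # Pick up scraped reviews that have not been analyzed yet and tag them.
    for review in db.reviews.find({"sentiment": None}).limit(batch_size):
        result = classify(review["text"])
        db.reviews.update_one({"_id": review["_id"]},
                              {"$set": {"sentiment": result}})
```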
partial
## Inspiration The Office Of The President of the University of Ottawa challenged us to think about the social in online learning. Our team always found that the best learning is done gathered around a whiteboard in a group, so we committed to replicating that environment in a virtual all-in-one space ## What it does BeyondBe provides online spaces where multiple users can join a study room and have access to a shared collaborative whiteboard space and markdown-rendered notepad. ## How we built it BeyondBe is built using Javascript and Solace event broker. Express.js was used for the backend server, and marker was used for real-time rendering of markdown. Auth0 is also used for client authentication ## Challenges we ran into We faced several challenges while building this application. The first of many was configuring Auth0 to correctly work with our application as none of us were familiar with it. The synchronizing of drawing on the whiteboard and typing of text among rooms was also a challenge. Our initial implementation made use of web sockets, which worked excellently until it came time to deploy the application where we discovered web sockets have a bad habit of not working in production. It was at that time that we made the pivot to using Solace Event Broker's as we had previously researched that they could provide an alternative to web sockets. This worked great, and also allowed us to implement more functionality with Solace's hierarchical topic structure. ## Accomplishments that we're proud of We are proud of the immense amount of progress that we are able to complete in such a short period of time. Additionally, many of the techologies (and techniques) we made use of were new to the team, so we are also very proud of how quick we were able to adapt and quickly we were able to pivot our ideas when we encountered roadblocks. ## What we learned javascript, solace, express.js, web hosting, web security, git conflict resolution... ## What's next for BeyondBe If given the opportunity, we'd like to implement many more features into BeyondBe, some of these include * Voice communication using WebRTC * Multiple pen colors for the whiteboard * Class profiles * And much more!
## 💡 Inspiration 💡 According to the City of Toronto, "contaminated recycling is currently costing the City millions annually. Approximately one third of what is put in the Blue Bin doesn’t belong there or was ruined as a result of the wrong items being put into the bin." Missing out on being able to recycle one third of what is put in the Blue Bin is huge, especially when this issue can be solved by spreading more actionable awareness to the city's residents. Furthermore, Toronto cannot be the only place in the world having these issues. Thus, it becomes increasingly important for us to become mindful citizens of the world we inhabit, for the benefit of our communities and, most importantly, for the wildlife and world around us! ## What is PlanetPal? PlanetPal is a gamified recycling app that is designed to promote good recycling habits and spread more awareness about recycling correctly. Users subscribe to the app monthly, paying upwards of $10 per month, and every time they recycle, they build progress toward recovering the money put towards their subscription. Every time the user recycles something, they earn Green tokens (our exclusive currency), which can be redeemed for real-world money. Furthermore, completing monthly challenges and consistently recycling awards users a limited monthly challenge badge that displays their dedication to the environment. By collecting these badges, users will have the opportunity to earn even more tokens! Users are given recycling instructions when they take a picture of their trash; the trash is classified with a CNN into 6 categories. The user is then told how to recycle the item that they are holding. After disposing of the item, the user gains progress towards the monthly challenge, as well as tokens. ## 🔧 How we built it 🔧 Our front end mobile application is developed with React Native and Expo, using packages such as React Native Paper to speed up the development process. Additionally, we used React Navigation for smooth UI transitions, and Expo Camera to take photos. Our back end is built with Python and Flask, which hosts our CNN that classifies images of trash into 6 different categories. We manage the player's progress, badges, and in-game currency, as well as classify the images passed from the front end mobile app. Moreover, the logic behind generating advice on proper recycling lies here, where we pass the classified object into Cohere's command-nightly generative AI model. Our machine learning model is a CNN transfer learning model built on VGG19, a model trained on ImageNet. We chose to build on VGG19 in the interest of time, while guaranteeing relatively high accuracy. We used a Kaggle dataset, Garbage Classification, to train the VGG19 model and fine tune it; a rough Keras sketch of this setup follows this entry. Our model classifies images correctly around 82.31% of the time! ![traintest](https://github.com/markestiller/PlanetPal/assets/117526873/4e3cc498-c984-4c5b-8030-dc684c5c2e0d) ## 🤔 Challenges we ran into 🤔 Since this was our first time ever training a machine learning model, we initially decided to train our model directly on Kaggle, where we had access to cloud GPUs. Unfortunately, we realized that we could not actually download our model! We also ran into issues figuring out how to integrate our machine learning model into our backend code so that we could run it on new images it had not seen before. Since we did not have unlimited processing power, training the model also took significant time. 
For some of our team members, this project marked their first experience with React Native and Flask. This added another layer of complexity to the development process as they were learning and adapting to these technologies on the fly. ## 🏆 Accomplishments that we're proud of 🏆 Despite our team's limited experience in mobile app development, we are proud to have successfully created a functional and aesthetically pleasing UI. We are also extremely proud to say that our machine learning model is able to identify recyclable materials with a fairly high degree of accuracy. ## 🤓 What we learned 🤓 Our team was split into 3 separate roles: frontend, backend, and machine learning model. All of our members decided to work with a framework that they had not used before, or develop something completely brand new. Specifically, one of our members spent many hours researching machine learning models before being able to implement our own model and connect it to the backend of our project. Other members had the opportunity to experience the mobile app development process. Overall, each member of the team was able to learn something new from this project. ## 👀 What's next for PlanetPal 👀 We are currently looking at ways to incentivize users to consistently recycle. Implementing actual challenges, such as recycling a certain amount of materials each month or a daily streak mechanism, would help users stay engaged. In order to improve our application, a database containing usernames, emails, and tokens would help this app be more accessible on multiple platforms. Another aspect we are considering is optimizing the app's interface for various screen sizes, ensuring a seamless user experience whether users are accessing it from a smartphone, tablet, or other mobile devices.
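For readers curious about the VGG19 transfer-learning setup described above, here is a hedged Keras sketch. The directory path, image size, and head layers are assumptions for illustration, not the team's exact configuration.

```python
# Sketch of VGG19 transfer learning for 6 garbage categories (assumed settings).
import tensorflow as tf

NUM_CLASSES = 6  # the six garbage categories

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical data pipeline over the Kaggle "Garbage Classification" folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "garbage_classification/", image_size=(224, 224), label_mode="categorical")
# model.fit(train_ds, epochs=10)
```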
## Inspiration Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder to achieve. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students and help make classes more engaging, encouraging more students to attend, especially in the younger grades. ## What it does Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own. An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling. Students can follow along, and when they want the teacher’s attention, click the "I’m Done" button to notify the teacher. Teachers can see their boards and mark up anything they want. Teachers can also put students in groups, and those students can share a whiteboard together to collaborate. ## How we built it * **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details. * **Frontend:** We used React to create the application and Socket.IO to connect it to the backend. * **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com. ## Challenges we ran into Understanding and planning an architecture for the application was difficult. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining their functionality was also an issue we faced. ## Accomplishments that we're proud of We were able to successfully display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we were able to use it successfully in our project. ## What we learned This was the first time we used Socket.IO to handle real-time connections. We also learned how to create mouse strokes on a canvas in React. ## What's next for Lecturely This product can be useful even beyond digital schooling, as it can save schools money on supplies. Thus it could benefit from building out more features. Currently, Lecturely doesn’t support audio, but that is on our roadmap. Until then, classes would still need another piece of software running to handle the audio communication.
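Lecturely's server is Node-based, but the broadcast idea behind its Socket.IO backend can be sketched in Python with python-socketio. The event names, room IDs, and payload shape below are assumptions for illustration only.

```python
# Illustrative sketch (not Lecturely's actual Node server): relay student strokes
# to everyone else in the same class room so canvases update in real time.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, data):
    # A student or teacher joins a class room, e.g. {"class_id": "math101"}.
    sio.enter_room(sid, data["class_id"])

@sio.event
def stroke(sid, data):
    # Forward a drawing stroke to every other client in the class, so the
    # teacher's dashboard (and shared group canvases) stay in sync.
    sio.emit("stroke", data, room=data["class_id"], skip_sid=sid)

# Run with any WSGI server, e.g.:  gunicorn -k eventlet module_name:app
```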
losing
## Inspiration I like to analyze stuff ## What it does It uses the rev.ai API to get a transcript of your conversation with someone. After that, it highlights important keywords and computes the sentiment over time. ## How I built it I used the rev.ai streaming API for speech-to-text. I also used Python and spaCy to detect the keywords. ## Challenges I ran into Streaming in Node is hard ## Accomplishments that I'm proud of Streaming in Node works
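As a rough sketch of the keyword-detection step mentioned above, here is a minimal spaCy example. The transcript string is a placeholder, and this is not the author's actual rev.ai streaming pipeline.

```python
# Minimal keyword extraction with spaCy (placeholder transcript, illustrative only).
import spacy

nlp = spacy.load("en_core_web_sm")  # requires the small English model to be installed
transcript = "We should finalize the budget for the marketing campaign by Friday."

doc = nlp(transcript)
# Treat noun chunks and named entities as candidate "important keywords".
keywords = {chunk.text for chunk in doc.noun_chunks} | {ent.text for ent in doc.ents}
print(keywords)  # e.g. {'We', 'the budget', 'the marketing campaign', 'Friday'}
```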
## Inspiration Transitioning from high school to college was pretty difficult for us, and figuring out which classes we needed to complete as prerequisites for upper-division classes turned out to be quite the hassle given the vast size of both high school and college course catalogs. We wanted to find a way to simplify the process of planning our schedules, so here we are. ## What it does The backend provides detailed descriptions of what prerequisite classes you need to take in order to be eligible to enroll in a particular class, and whether or not it is possible to do so given your timeframe until graduation. The frontend provides a chatbot that helps guide you in an interactive way, providing both academic guidance and a charming personality to talk to. ## How we built it * We realized we could represent courses and their prerequisites as directed acyclic graphs, so to find the shortest paths from a prerequisite course to our desired course in a schedule, we utilized a reverse post-order depth-first search traversal (aka a topological sort) to parse through raw data provided in a tidy JSON format (a minimal sketch of this traversal appears after this writeup). * We expanded by using "assumption"-based prerequisite clearances — i.e., eduVia will assume a student has also completed CS 61A if they input CS 61B (61A's sequel course) as completed. * To put icing on the backend cake, we also organized non-conflicting classes into semester-/year-based schedules optimized for expediting graduation speed. * Instead of using vanilla HTML/CSS to design our website, we utilized a more modern framework, React.js, to structure our web application. * For the front end, we implemented the React-Bootstrap library to make our UI look fancy and crisp. * We also used an open-source chatbot library API to implement our academic assistant Via, which will in theory help high school students plan out their four years. ## Challenges we ran into * Figuring out how to effectively parse through a JSON file while simultaneously implementing an efficient topological sort was quite a steep learning curve at first. * There was a slew of issues in the early AM hours with effectively clearing out *all* prerequisite classes based on a more advanced class - for example, we couldn't directly get rid of Pre-Calculus if we listed AP Calculus BC as a completed course, since PreCalc isn't a direct prereq of AP Calculus BC. * Building out that semester-/year-based schedule was TOUGH! Values would aggregate together when they shouldn't, and you'd have semesters where you're taking *every* class at once. It became near-impossible to build out after merging two different "class paths" together since the sorting became wonky and unusable. * We were also fairly new to web development, so it took a while to get used to the React.js framework and figure out the ins and outs. Given more time, we definitely could have made more progress on this project. ## Accomplishments that we're proud of It was really rewarding to be able to implement a clean and efficient topological sort successfully, and there was a special joy in getting data to be displayed just the way we wanted it to be. Learning how to use complex Python data structures, JavaScript, React, and several APIs (albeit to varying degrees of return on investment) on the fly was extremely thrilling. ## What we learned JavaScript may be friendly, but React is your true friend. And Python is your BFF. And Google is just <3 ## What's next for eduVia * Better integration between the frontend and backend. 
* Implementing an AI-powered chatbot (Co:here, anyone?) that can utilize browser cookies to remember conversations with users and provide better academic feedback. * Providing full-fledged personalized 4-Year Plans for students based on their academic preferences, utilizing feature engineering and machine learning to weigh the student's subject preferences and make a schedule of classes suitable to their stated interests. * Minor bug fixes.
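Here is the minimal sketch of the DFS-based ordering referenced in "How we built it" above. The course codes and prerequisite edges are made up for illustration; note that with edges pointing from a course to its prerequisites, plain postorder already yields a valid taking order (prerequisites first).

```python
# Toy prerequisite DAG: each course maps to the courses it requires (hypothetical data).
prereqs = {
    "CS 61A": [],
    "CS 70": [],
    "CS 61B": ["CS 61A"],
    "CS 170": ["CS 61B", "CS 70"],
}

def course_order(target, prereqs):
    """Return `target` and its prerequisites in an order they can be taken."""
    visited, order = set(), []

    def dfs(course):
        if course in visited:
            return
        visited.add(course)
        for pre in prereqs.get(course, []):
            dfs(pre)
        order.append(course)  # a course is appended only after all of its prereqs

    dfs(target)
    return order

print(course_order("CS 170", prereqs))  # ['CS 61A', 'CS 61B', 'CS 70', 'CS 170']
```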
## Inspiration We college students can all relate to having a teacher that was not engaging enough during lectures or mumbling to the point where we cannot hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings. ## What it does Morpheus is a machine learning system that analyzes a professor’s lesson audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor’s body language throughout the lesson using motion detection/analyzing software. We then store everything in a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language. ## How we built it ### Visual Studio Code/Front End Development: Sovannratana Khek Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with components pre-built, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality. ### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we’re dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way. In order to make the software easily portable across different platforms, I containerized the whole tech stack using Docker and docker-compose to handle the interaction among several containers at once. ### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas I developed a machine learning model to recognize speech emotion patterns using MATLAB’s Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. 
I augmented the dataset in order to increase the accuracy of my results and normalized the data so it could be visualized as a pie chart, providing an easy integration with our database that connects to our website. ### Solidworks/Product Design Engineering: Riki Osako Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would be tracking the teacher’s movements and hearing the volume while not disturbing the flow of class. Currently the main sensors being utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow itself to change from a stationary position to a mobile position. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging. ### Figma/UI Design of the Product: Riki Osako Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see if the device is tracking their movements and volume throughout the lecture and see the results of their lecture at the end. ## Challenges we ran into Riki Osako: Two issues I faced were learning how to model the product through Solidworks and Figma (which I used for the first time) in a way that would feel simple for the user to understand. I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and I looked back at my UI/UX notes from the Google Coursera certification course that I’m taking. Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method, as the logic was specific to our software. Fortunately, these problems just needed time and a lot of debugging with some help from peers and existing resources, and since React is JavaScript-based, I was able to draw on past experience with JS and Django despite using an unfamiliar framework. Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in dependency hell and had to rethink the architecture of the whole project so as not to over-engineer it, without losing speed or consistency. Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. 
Also, the dataset was in German. ## Accomplishments that we're proud of We achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short. ## What we learned As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to leverage his strengths to create a visual model of our product and a UI design in Figma. Sovannratana is a freshman at his first hackathon, and he was able to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just with coding but with ideas as well. ## What's next for Morpheus We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days. From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, we would add motion-tracking feedback so the professor gets a general idea of how they should be changing their gestures. We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand where students need the most help. From a business standpoint, we would like to see if we could team up with our university, Illinois Institute of Technology, and test its functionality in actual classrooms.
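The backend above stores credentials "following security best practices (e.g. salting and hashing passwords)". Morpheus does this in PHP; as a language-neutral illustration of the same idea, here is a short Python sketch using a salted key-derivation hash. The iteration count and salt length are generic assumptions, not the project's actual settings.

```python
# Illustrative salted password hashing (same idea as the PHP backend, not its code).
import hashlib
import hmac
import os

ITERATIONS = 100_000  # assumed work factor for the example

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; never store the plaintext password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```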
losing
## Inspiration: During the COVID-19 pandemic, people were stuck in their homes. As social beings, many people started suffering from depression, anxiety, and other mental health problems. Speaking up about mental health is still stigmatized, and we must address this issue more seriously. Later, OpenAI came out with its own chatbot that worked wonders! People all over the world were amazed by its capabilities. So, we got inspired to create a project based on AI and pitch our project idea. ## What it does It is an AI-powered chatbot that acts as a friend as well as a counselor for addressing mental health problems and helping people overcome them. ## How we built it We used the ChatGPT 3.5 Turbo engine to create this chatbot. We connected our backend to OpenAI's servers and fine-tuned the model using custom datasets. ## Challenges we ran into We ran into several challenges; one of them was that the AI was not able to remember previous chat history. We also encountered problems related to integrating the Whisper API so that we could interact with our AI model by voice. We are still working on it, and we plan to overcome it like other tough challenges. ## Accomplishments that we're proud of We were able to add memory of previous interactions. ## What we learned Backend and frontend development, API integration, fine-tuning AI models, and creating datasets. ## What's next for MindTher Collaboration with mental health organizations and foundations to gather better datasets. Build mobile applications. Better user interface
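The memory challenge mentioned above is commonly handled by resending prior turns with each request. Here is a hedged sketch using the openai Python SDK; the system prompt and settings are illustrative assumptions, not MindTher's production code.

```python
# Minimal sketch: keep prior turns in `history` so the model "remembers" the chat.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system",
            "content": "You are a supportive, empathetic mental-health companion."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # remember this turn
    return reply

print(chat("I've been feeling really isolated lately."))
```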
## Inspiration Hearing a Huawei representative discuss the issues of mental health resonated strongly with our group. Younger generations are increasingly affected by mental health issues, especially as many of us experience the impacts of isolation and loneliness due to COVID-19. We were also inspired by the challenge of creating something ‘warm and fuzzy’. We wanted to create a fun and lighthearted solution that attempts to combat some of these issues by providing one with a friend for whatever mood they are in. ## What it does This website provides users with a “fuzzy friend”, a chatbot tailored to provide feel-good suggestions based on the user’s emotions. Users are introduced to 6 fuzzy friends, each with their own personalities, which they can then select based on their mood. Different bots offer various responses to messages, whether links to videos, jokes, or friendly commentary. ## How we built it We built this project using Python’s ChatterBot and Flask frameworks, and HTML. We worked on both front-end and back-end development, using Bootstrap to help with the front-end. We also attempted to use Heroku as a server to link the two components. ## Challenges we ran into Our main struggle was connecting our website to the server, so that users without Flask or ChatterBot could still use our FuzzyFriend chat service. We were new to GitHub and struggled with merging our front and back-end components without creating conflicts. We also struggled to download and import the libraries necessary for our website to work, such as Flask. Building an AI chatbot capable of improving itself is also something that is harder than it seems! ## Accomplishments that we're proud of We managed to build a clean, user-friendly looking website that matches our warm, friendly theme. Another accomplishment was working with new libraries and improving our technical skills, such as learning to use Flask and how to collaborate on GitHub. As our first hackathon ever, and first collaborative project, learning how to use GitHub and creating common code that would work on all of our computers is an accomplishment that we are proud of. There were a lot of new components to this project, but we were all able to adapt and work with them to create what we wanted. Finally, we are proud of our teamwork. We were all incredibly supportive, determined, and helpful through each step in the process; we were organized in planning our project, assigning and delegating different tasks, as well as quick to help each other out. ## What we learned We learned how to collaborate on GitHub, how to implement libraries and frameworks we were not familiar with, and how to have fun coding together! ## What's next for FuzzyFriend We want to make our chatbots smarter to make user conversations more fluid. Currently, our bots are limited to very restricted conversations and often misunderstand messages. In the future we want to train our bots to interact more appropriately with all sorts of prompts from the user while also staying true to their character. We would also like to successfully connect our product to a server and domain. This way, users could more easily access its contents.
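As a rough illustration of how a ChatterBot "fuzzy friend" can sit behind a Flask route, here is a minimal sketch. The bot name, training corpus, and route shape are assumptions for the example, not FuzzyFriend's actual code.

```python
# Minimal ChatterBot + Flask sketch (bot name and route are hypothetical).
from flask import Flask, jsonify, request
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

app = Flask(__name__)
bot = ChatBot("SunnyTheFuzzyFriend")
ChatterBotCorpusTrainer(bot).train("chatterbot.corpus.english")  # generic corpus

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json["message"]
    reply = bot.get_response(user_message)
    return jsonify({"reply": str(reply)})

if __name__ == "__main__":
    app.run(debug=True)
```

In practice each of the six personalities could be its own `ChatBot` instance trained on a different corpus, selected by the user's mood.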
## Inspiration We're all told that stocks are a good way to diversify our investments, but taking the leap into trading stocks is daunting. How do I open a brokerage account? What stocks should I invest in? How can one track their investments? We learned that we were not alone in our apprehensions, and that this problem is even worse in other countries. For example, in Indonesia (Scott's home country), only 0.3% of the population invests in the stock market. A lack of an active retail investor community in the domestic stock market is very problematic. Investment in the stock markets is one of the most important factors that contribute to the economic growth of a country. That is the problem we set out to address. In addition, the ability to invest one's savings can help people and families around the world grow their wealth -- we decided to create a product that makes it easy for those people to make informed, strategic investment decisions, wrapped up in a friendly, conversational interface. ## What It Does PocketAnalyst is a Facebook Messenger and Telegram chatbot that puts the brain of a financial analyst into your pockets, a buddy to help you navigate the investment world with the tap of your keyboard. Considering that two billion people around the world are unbanked, yet many of them have access to cell/smartphones, we see this as a big opportunity to push towards shaping the world into a more egalitarian future. **Key features:** * A bespoke investment strategy based on how much risk users opt to take on, based on a short onboarding questionnaire, powered by several AI models and data from Goldman Sachs and BlackRock. * In-chat brokerage account registration process powered by DocuSign's API. * Stock purchase recommendations based on AI-powered technical analysis, sentiment analysis, and fundamental analysis based on data from Goldman Sachs' API, GIR data set, and IEXFinance. * Proactive warnings against the purchase of high-risk, high-beta assets for investors with low risk tolerance, powered by BlackRock's API. * Beautiful, customized stock status updates, sent straight to users through your messaging platform of choice. * Well-designed data visualizations for users' stock portfolios. * In-message trade execution using your brokerage account (proof-of-concept for now, obviously) ## How We Built it We used multiple LSTM neural networks to conduct both technical analysis on features of stocks and sentiment analysis on news related to particular companies. We used Goldman Sachs' GIR dataset and the Marquee API to conduct fundamental analysis. In addition, we used some of their data in verifying another one of our machine learning models. Goldman Sachs' data also proved invaluable for the creation of customized stock status "cards", sent through messenger. We used Google Cloud Platform extensively. DialogFlow powered our user-friendly, conversational chatbot. We also utilized GCP's Compute Engine to help train some of our deep learning models. Various other features, such as App Engine and serverless Cloud Functions, were used for experimentation and testing. We also integrated with BlackRock's APIs, primarily for analyzing users' portfolios and calculating the risk score. We used DocuSign to assist with the paperwork related to brokerage account registration. ## Future Viability We see a clear path towards making PocketAnalyst a sustainable product that makes a real difference in its users' lives. 
We see our product as one that will work well in partnership with other businesses, especially brokerage firms, similar to what Credit Karma does with credit card companies. We believe that giving consumers access to a free chatbot to help them invest will make their investment experiences easier, while also freeing up time in financial advisors' days. ## Challenges We Ran Into Picking the correct parameters/hyperparameters and discerning how our machine learning algorithms will make recommendations in different cases. Finding the best way to onboard new users and provide a fully-featured experience entirely through conversation with a chatbot. Figuring out how to get this done, despite us not having access to a consistent internet connection (still love ya tho Cal :D). Still, this hampered our progress on a more-ambitious IoT (w/ Google Assistant) stretch goal. Oh, well :) ## Accomplishments That We Are Proud Of We are proud of our decision to combine various machine learning techniques with Goldman Sachs' Marquee API (and their global investment research dataset) to create a product that can provide real benefit to people. We're proud of what we created over the past thirty-six hours, and we're proud of everything we learned along the way! ## What We Learned We learned how to incorporate existing machine learning strategies and combine them to improve our collective accuracy in making predictions for stocks. We learned a ton about the different ways that one can analyze stocks, and we had a great time slotting together all of the different APIs, libraries, and other technologies that we used to make this project a reality. ## What's Next for PocketAnalyst This isn't the last you've heard from us! We aim to better fine-tune our stock recommendation algorithm. We believe there are other parameters not yet accounted for that can improve the accuracy of our recommendations; down the line, we hope to be able to partner with finance professionals to provide more insights that we can incorporate into the algorithm.
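To make the LSTM-based technical analysis above concrete, here is a hedged Keras sketch. The window length, feature set, layer sizes, and the dummy data are all assumptions for illustration, not PocketAnalyst's tuned model.

```python
# Sketch of an LSTM for price-sequence prediction (assumed shapes and layers).
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 30, 5  # e.g. 30 days of [open, high, low, close, volume]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted next-day (normalized) close
])
model.compile(optimizer="adam", loss="mse")

# Random placeholder data, only to show the expected tensor shapes.
x = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```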
losing
## Inspiration As college students, we didn't know much, so we thought about how we could change that. One way was by being smarter about the way we take care of our unused items. We all felt that our unused items could be put to better use by sharing them with other students on campus. All of us shared our items on campus with our friends, but we felt that there could be better ways to do this. However, we were truly inspired after one of our team members, and close friend, Harish, an Ecological Biology major, informed us about the sheer magnitude of trash and pollution in the oceans and the surrounding environments. Also, as the National Ocean Science Bowl Champion, Harish was truly able to educate the rest of the team on how areas such as the Great Pacific Garbage Patch affect wildlife and oceanic ecosystems, and the effects we face from this on a daily basis. With our passions for technology, we wanted to work on an impactful project that caters to a true need for sharing that many of us have, while focusing on maintaining sustainability. ## What it does The application essentially allows users to list various products that they want to share with the community and allows users to request items. If one user sees a request they want to provide a tool for or an offer they find appealing, they’ll start a chat with the other user through the app to request the tool. Furthermore, the app sorts and filters by location to make it convenient for users. Also, by allowing for community building through chat messaging, we want to use the app to foster stronger campus communities. ## How we built it We first focused on wireframing and coming up with ideas. We utilized brainstorming sessions to come up with unique ideas and then split our team based on our different skill sets. Our front-end team worked on coming up with wireframes and creating designs using Figma. Our backend team worked on a whiteboard, coming up with the system design of our application server, and together the front-end and back-end teams worked on coming up with the schemas for the database. We utilized the MERN stack to build this. Our front end uses ReactJS to build the web app, our back end utilizes ExpressJS and NodeJS, and our database is MongoDB. We also took plenty of advice and notes, not only from mentors throughout the competition, but also from our fellow hackers. We really went around asking for others’ advice on our web app and our final product to truly flesh out the best product that we could. We had a customer-centric mindset and approach throughout the full creation process, and we really wanted to make sure that what we were building met a true need and was truly wanted by people. Taking advice from these various sources helped us frame our product and come up with features. ## Challenges we ran into Integration challenges were some of the toughest for us. Making sure that the backend and frontend could communicate well was really tough. To minimize the difficulties, we designed the schemas for our databases together and made sure that we were all on the same page about them. Thus, working together really helped us stay efficient. ## Accomplishments that we're proud of We’re really proud of the product's user interface. We spent quite a lot of time working on the design (through Figma) before creating it in React, so we really wanted to make sure that the product we are showing is visually appealing. 
Furthermore, our backend is also something we are extremely proud of. Our backend system has many unconventional design choices (for example, passing common IDs throughout the system) in order to avoid more costly backend operations. Overall, latency, cost, and ease of use for our frontend team were big considerations when designing the backend system. ## What we learned We learned new technical skills and new soft skills. On the technical side, our team became much stronger with the MERN stack. Our front-end team learned so many new skills and components through React, and our back-end team learned so much about Express. Overall, we also learned quite a lot about working as a team and integrating the front end with the back end, improving our software engineering skills. The soft skills we learned were how to present a product idea and its implementation. We worked quite a lot on our video and our final presentation to the judges, and after speaking with hackers and mentors alike, we used the collective wisdom we gained to create a video that truly shows our interest in designing important products with real social impact. Overall, we felt that we were able to convey our passion for building social impact and sustainability products. ## What's next for SustainaSwap We’re looking to deploy the app in local communities, as we’re currently at the point of deployment. We know there exists a clear demand for this in college towns, so we’ll first be starting off at our local campus in Philadelphia. Also, after speaking with many Harvard and MIT students on campus, we feel that Cambridge will also benefit, so we will shortly launch in the Boston/Cambridge area. We will be looking to expand to other college towns and use this to work on the scalability of the product. Ideally, we also want to push the ideas of sustainability, so we would potentially use the platform (if it grows large enough) to host fundraisers and fundraising activities to give back and fight climate change. We essentially want to expand city by city, community by community, because this app also focuses quite a lot on community and we want to build a community-centric platform. We want this platform to build tight-knit communities within cities that can connect people with their neighbors while also promoting sustainability.
**Elevate your foreign language fluency with tailored guidance and individualized verbal conversation practice on Ekko.** ## Inspiration Multilingualism is in. In today’s interconnected world, the ability to communicate in multiple languages is not only valuable, but imperative to personal and professional success. Whether it be conversing with business professionals during international commerce exchanges, preparing for standardized school language exams, serving as a journalist or diplomat on the international stage, or simply wanting to converse more fluently with your loved ones, everybody can benefit from conversing in a foreign language. However, traditional curriculum-based language learning methods rely on repetitive exercises and lack personalization, overlooking verbal fluency markers such as verbal precision, language variation, and personal goals. A key indication of an advanced speaker is the ability to engage in debates and convey subtle shades of meaning effectively, a skill that cannot be developed solely through the use of apps centered around memorization. As a 2nd generation Mandarin and Cantonese speaker, one of our team members realized firsthand how difficult it was to maintain fluency of foreign languages in university. Additionally, while visiting her grandmother in the ICU at a reputable San Francisco hospital, another team member noticed that it was frustrating—and potentially life threatening—for non-English speaking patients to communicate their needs to care providers since these care providers are not trained conversationally since they received very rudimentary academic foreign language training. Although these bilingual care providers were technically licensed to care for non-English speaking patients, most of them who learned the foreign language as a second language were unable to demonstrate spoken proficiency and cultural awareness outside of a classroom context. These experiences all led us to develop Ekko. Whether you’re a busy professional looking to enhance your global marketability or a student aiming to broaden your cultural horizons during study abroad, Ekko enables you to access verbal language practice anytime, anywhere, offering you the personalization and flexibility to learn at your own pace and develop as a global citizen. ## What it does Introducing **Ekko**: a personalized real time AI vocal chatbot that assesses your vocal language fluency. With Ekko, you only talk about what you actually want to talk about. Once you enter your basic onboarding information into Ekko such as your learning goals and interests, the app will then prompt you to a simple user interface where you can start your conversation. After each response, Ekko will then give personalized feedback on your conversational performance by catching your errors and providing you with an ACTFL based proficiency level. Conversations and feedback are personalized to your learning goals; for example, if you are using Ekko to prepare for career oriented work purposes, Ekko would generate prompts that you’d likely encounter in the workplace and the feedback would likely be centered around making your diction more formal. Similarly, if you were simply using Ekko to converse with friends and family, conversation topics and corrections provided by the chatbot would be more casual. Ekko saves your speaking errors and transfers those language specific content errors to tailor feedback to your language learning goals. 
For example, if you were to say *“me llamo es Cole”* as opposed to the correct version: *“me llamo Cole”*, Ekko would save that error to check in the future. Using this unique feature, Ekko also makes connections between your learning language of interest and languages you currently speak (inputted during the onboarding process), drawing parallels between the two. Similarly, if you were to consecutively respond with singular word responses, Ekko would suggest that you vary your sentence structure to maximize the effectiveness of the conversation. Unlike pre-existing language learning applications such as Duolingo, Ekko is *not* based on a curriculum, meaning that you take full reign of the conversation and practice. ## How we built it To make Ekko as capable as possible, we used a combination of many AI and machine learning technologies—most of which we had never used before. Because Ekko’s main value proposition is its conversational aspect, it was important that conversing with the platform is as natural as possible. This included using a state-of-the-art text-to-speech model, powered by ElevenLabs, as well as speech-to-text, powered by Deepgram. The combination of these two technologies made natural conversation on Ekko a seamless experience. Processing speed was also of utmost importance to us to make the conversations feel natural. Hence, the obvious choice for us was to power our backend using Bun. Specifically, we’re running an Elysia.js server to interface with our ML and large language models for incredibly fast performance. This strategic choice contributed to Ekko's impressive performance and responsiveness during interactions. Regarding large language models, Ekko chose to go full open-source thanks to Together.AI. We’re using the "NousResearch/Nous-Hermes-2-Yi-34B" model to generate responses from the AI agent, as well as "togethercomputer/m2-bert-80M-32k-retrieval" for text embeddings. These models were blazingly fast and out-performed the multitude of other models we tested for these purposes. To store the data we collected, we chose to use the Convex.dev platform. We’re leveraging their database and authentication services, as well as function calling and vector database. Using Convex enabled us to build a complex platform with many simultaneous and interconnected processes in such a limited time span. In order to classify the user’s proficiency, we built a text classification model using scikit learn. To train this model, we generated a synthetic dataset of hypothetical conversations that corresponded to specific ACTFL Proficiency guidelines using Together.AI’s "NousResearch/Nous-Hermes-2-Yi-34B" model. This model, hosted on GCP Vertex AI platform, enables us to specifically denote the user’s progress as they reach fluency. Altogether, Ekko's development is characterized by a comprehensive integration of state-of-the-art technologies. The emphasis on natural conversation, swift processing, open-source language models, efficient data handling through Convex.dev, and a proficiency-classifying text model collectively contribute to Ekko's prowess as an advanced conversational language learning platform powered by frontier tech. ## Business model In regards to our business model, we initially looked into adopting a Freemium model, but ultimately steered away from that inclination due to not wanting to exacerbate accessibility issues in the edtech space. 
For now, we intend for all Ekko features to be free of charge, and eventually rely on community partnerships and sponsorships with relatively small organizations such as outpatient clinics in order to cover costs. In the future, during the reiteration phase, we also plan on hosting a donation platform to raise money for our developing team, as well as to purchase technology for underprivileged schools so that students worldwide can use Ekko. We also want to look into partnerships with larger organizations that would benefit from improved language fluency services, such as hotels and universities. ## Challenges we ran into One of the major challenges we encountered was finding an adequate fluency metric to score user responses. While percentages and other numerical metrics seemed like an obvious choice, this would also mean that the longer a user were to maintain a conversation (typically holding a longer conversation is a good thing when practicing foreign languages), the higher the percent error they’d receive, thus deterring users from talking for longer periods of time. We eventually settled on the idea of using qualitative feedback based on the well-established ACTFL Language Speaking category rankings, which contain specific comprehension and fluency requirements under each conversation difficulty level. The scoring would be based on the average ACTFL score of the five most recent responses provided. Our team also raised larger-scale questions pertaining to stuttering and speech impediments, as many fluent speakers often naturally stutter while talking and the STT model could interpret that as a lack of fluency. Moreover, usage of slang is also something that we need to look into a bit further, as the current system struggles to interpret colloquial vernacular. ## What we learned Through developing Ekko, we learned that different languages present different challenges with TTS that we must reconsider when building past our MVP. We also learned the major advantages that active conversation has over repetitive exercises when practicing a foreign language, as active conversation provides learners the opportunity for contextual learning in practical and authentic situations through immediate correction, while repetitive exercises solely focus on reinforcing specific patterns and foundational drills and lack the spontaneity of real-life language. ## What's next for Ekko During our next iteration of Ekko, we hope to also launch a real-time typing version of our personalized chatbot that would simulate your ideal interlocutor in both content and formality. We are also looking to implement a feature that encourages users to utilize figurative language in their speech. For example, if the user were using Ekko to improve their English proficiency and they told our chatbot "*It’s raining very hard outside,*” Ekko would highlight that sentence and perhaps suggest: “*It’s raining cats and dogs*” or the more casual “*It’s pouring.*" Another feature we are looking to implement post-MVP is a timing suggestion feature, as it is a useful skill to show a comprehensive understanding of the other person’s input while also keeping responses pertinent and cutting out unnecessary fluff. This feature would especially come in handy for those preparing for professional interviews. Now, more on Ekko’s social side. Our team would like to develop a social component for Ekko, integrating a gamified element that allows students to build profiles, connect with other students, and compare streaks with friends. 
We would also consider gamifying the conversations with fun interaction challenge modes simulating Heads Up or Hot Seat. This would not only incentivize users to practice their verbal communication even more, but would also be especially helpful to those using Ekko to prepare for less formal conversational settings. In regard to getting the word out about Ekko, our team would launch a guerrilla marketing campaign, pushing out content on all social media platforms and attending in-person hackathons and conventions to get initial feedback during beta testing. In addition to partnerships with small clinics and healthcare providers, we would also gradually partner with more outside organizations such as university residential housing and career services, refugee councils helping young asylum seekers, and larger international corporations to implement Ekko into their daily regimens. Lastly, we would also like to launch a donation platform to support the developing team and generate donations for tech for underprivileged schools so they can continue using Ekko. ## Ethical Discussion First and foremost, the responsible use of large language models (LLMs) has engendered immense ethical debate, as they can inadvertently perpetuate representation biases from the data used to train them, thereby amplifying existing linguistic and cultural prejudice. Thus, it is imperative that developers—especially developers of language learning applications—stress the importance of cultural sensitivity, empathy, and respect for linguistic diversity in language learning communities. LLMs also pose security risks that must be mitigated through robust cybersecurity measures, and as with any web application, data privacy and security raise concerns about the safeguarding of personal user information within educational platforms. However, we see Ekko as a safer alternative to similar platforms such as TalkAbroad, as users are chatting verbally with our chatbot rather than on a live video call with an individual they are unfamiliar with. Additionally, another key factor in promoting inclusivity in educational technology is ensuring accessibility to technologies and devices. Since we intend for Ekko to be used in underserved school communities where access to devices is not always guaranteed, in the future we want to collect donations and sponsors to purchase devices for underprivileged schools so they can continue using Ekko. Furthermore, current speech-to-text platforms overlook individuals impacted by speech impediments, creating potential barriers to participation. Through rounds of reiteration and beta testing, we hope to eventually develop a version of Ekko that accounts for individuals with speech and learning disabilities. Additionally, the development of Ekko could financially impact those who rely on virtual conversation exchange provider services such as TalkAbroad for supplementary income. Lastly, monetization strategies and pricing models such as the popular Freemium model can exacerbate inequities in educational access. Though our team has collectively decided to offer all our services free of charge, that does put us at a stalemate when discussing how we will monetize. Although the potential positive impact of Ekko is significant, it is still crucial that we are diligent in navigating these complexities. 
It is imperative for us to address these issues conscientiously, ensuring that educational language fluency technologies remain accessible, equitable, and respectful of diverse linguistic and cultural backgrounds. ## Citations <https://arxiv.org/pdf/2307.06435.pdf>
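As a simplified illustration of the scikit-learn proficiency classifier described in Ekko's "How we built it" section, here is a small sketch. The tiny synthetic examples and the TF-IDF plus logistic regression pipeline are assumptions made for the example, not Ekko's trained model or its real synthetic dataset.

```python
# Toy text-classification pipeline for ACTFL-style proficiency labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I like food. I eat food.",                                        # short, simple
    "Yesterday I visited the museum and we discussed the exhibit.",
    "Although I initially disagreed, her argument about trade policy persuaded me.",
]
levels = ["Novice", "Intermediate", "Advanced"]  # hypothetical ACTFL-style labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, levels)

print(clf.predict(["We debated the ethics of artificial intelligence at length."]))
```

A production classifier would of course train on a much larger labeled corpus, such as the synthetic conversations the Ekko team generated against the ACTFL Proficiency Guidelines.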
## Inspiration As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare. Lots of people we know don’t take the time to look for sustainable items. People typically say that if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers. ## What it does greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria. ## How we built it Designs in Figma, Bubble for backend, React for frontend. ## Challenges we ran into Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three it was our first time formally coding in a product experience. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.). ## Accomplishments that we're proud of Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners! ## What we learned In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first-time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project. ## What's next for greenbeans Lots to add on in the future: Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches. Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
partial
## Inspiration We've always thought it would be fun to run our own coffee shop. However, the barriers to opening our own shop are too high at the moment, specifically because we don't have enough funding to lease a space for an entire year. It would be so much easier and less financially risky if we could rent space during the day in another food/drink business that operates primarily at night, such as a bar. This is why we're creating Rently. Rently allows entrepreneurs to pursue their passions in the food and drink industry without having to make too big of a financial commitment, while also helping business owners maximize their store's potential by allowing them to earn passive income during off hours/days. ## What it does If you're an aspiring cafe owner, a private chef who wants to serve the public occasionally, or a group of bartenders who want to set up great parties on the weekends, Rently is the best app to find a venue. Rently allows business owners to rent out their stores (including restaurants, cafes, bars, and more) to another person or business when they are closed. It's easy; to create a listing, simply add details such as address, daily rate, type of venue, square footage, an amazing picture to show off your store, and a short description. Renters can see your listing, along with others, on the 'Properties' page and filter by size or price to easily find what they're looking for, or conveniently browse by location using the search feature. ## How we built it ReactJS, ExpressJS, NodeJS, MongoDB, Google Maps API, Stripe API (and designs in Figma) ## Challenges we ran into During ideation it took us a while to land on the right idea. We knew we wanted to address the Otsuka x Valuenex prompt of how to bring people together through food via more in-person connections. ## Accomplishments that we're proud of We're proud of creating seamless processes throughout the entire user experience, for both our renters and our listers. Second, we're proud that businesses will reduce their carbon footprint by renting out spaces that would otherwise go unused. Third, we're excited that Rently will bring communities together by sharing common spaces and creating more brick-and-mortar businesses. We're proud that one of the core values of this app is to empower small/independent owners to explore their passions and take their businesses to the next level. ## What we learned One member learned how to use the Google Maps API for the first time and the other member got more frontend experience/used TypeScript for the first time. ## What's next for Rently We plan to continue building and iterating on Rently to best serve business owners and entrepreneurs everywhere.
## Inspiration The process of renting is often tedious, repetitive, and exhausting for both renters and landlords. Why not make it efficient, fun, and enjoyable instead? For renters, no more desperately sifting through Marketplace and Craigslist, shooting messages and applications into an abyss. For landlords, no more keeping track of prospective tenants' references and rent history through a series of back-and-forth messages. ## What it does Rent2Be is a mobile application that borrows from Tinder's iconic concept of swiping left and right, effectively streamlining the renting process for both renters and landlords. Find your perfect match, truly **rent to be**! From the renter's perspective, we query their potential matches based on their preferences (e.g. budget, location, move-in date, lease length, beds/baths, amenities, commute preferences, etc.). In their feed, renters can then swipe right for listings they're interested in and left for listings that don't fit their criteria. On the other end, landlords can create their listings for the current rental database. The landlord's feed will be populated with the profiles of renters interested in that particular listing - if the landlord also swipes right on the renter, it's a match! Upon matching, the pair will have an open chat session for further discussion and access to additional tenant details such as reference contact information. ## How we built it * The frontend is built with React Native and expo. * Our backend is powered by CockroachDB Serverless with "global locality" rows. * The UI/UX design is created with Figma. ## Challenges we ran into * Dependency discrepancies across each of our commits would occasionally lead to merge conflicts. * Integrating CockroachDB for the first time. * Considering the privacy and security risks of an app that handles highly confidential information (e.g. occupation, salary, credit score, etc.). ## Accomplishments that we're proud of * With CockroachDB we're setting a “global” locality for low-latency reads and high-latency writes. We saw this as a key benefit given there are significantly more read operations (repeated viewings of each profile/listing) than write operations (creation of these profiles/listings), and there are typically more renters than landlords. We may trade off a little performance for a few users in favor of the many, though this provides an overall better user experience for all types of users (without excluding those who are regionally farther away, such as internationals who may be doing market research before immigrating or moving abroad). * Using the benefits of serverless CockroachDB to automatically scale (and shard) for more popular geographic regions. We would have to monitor and perform this manually otherwise. * Leveraging CockroachDB's features and integrations to enhance the user experience and minimize engineering effort where possible. * Creating a UI design that balances fun, energetic vibes with a professional, trustworthy feeling. ## What we learned * Testing live on our mobile devices with expo and React Native * How to set up and use CockroachDB Serverless, creating clusters and importing data into the database * How to throw Water-LOO gxng signs (via will.i.am) ## What's next for Rent2Be There are so many features in the future of Rent2Be! A big part of the renting process is the viewings - Rent2Be renters will be able to book an appointment as soon as there is a match. 
Landlords will fill in their calendar availability ahead of time and renters will be able to book directly in-app as soon as there's a match. Community feedback is crucial and often a great source for making decisions - as such, Rent2Be will also have features for leaving reviews on both renters and landlords. Whether it's a review from previous tenants on the apartment listed by a landlord or a space for landlords to leave their tenants a reference letter once their lease ends, feedback from both ends will help to improve the overall user experience. Rent2Be also has the potential to handle areas such as payment. Since this app already aims to mitigate the inefficiencies of the current rental process, bringing payments onto one platform will only make lives easier. This would also be an opportunity to work with payment or banking APIs as well as security best practices.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
losing
## Inspiration We wanted to build a user-friendly app that allows users from diverse backgrounds to explore Morse Code. ## What it does Supports translation to and from Morse code. ## How we built it Android Studio ## Challenges I ran into Working with the Android Studio camera API and ensuring consistent results by synchronizing the cameras ## Accomplishments that we are proud of Applying what we learned about Android app development and building something fun and practical. ## What we learned How to HACK, how to GIT, how to use Android Studio, how to explore different APIs, and most importantly, how to work in a team to achieve a common goal ## What's next for Morsify Ideally we would also like to support sound detection and translate audio signals into Morse code.
## Inspiration Our inspiration was the accessibility of language learning around the globe. We wanted to create an app that would allow the user to scan an item into the app through the camera, allowing the user to learn about the objects right in front of them. We also planned on mapping routes for people to follow to learn a specific set of vocabulary, like foliage or shops. Our main goal was to make it easier for people facing a language barrier to integrate into a community. ## What it does This app can take photos or use pictures from the camera roll on a phone and locate and label the objects depicted within the picture. The app would also be able to tell the user how to pronounce the word on the screen. ## How we built it We used tools from Microsoft's vision API to detect objects & indicate what they were to users. ## Challenges we ran into Turning Python code into JavaScript was a challenge for our group, and implementing the API in JavaScript was also a challenge. We tried multiple different vision and text-to-speech APIs and could not get most of them to work. ## Accomplishments that we're proud of We were able to implement the Microsoft vision API effectively after attempting to use multiple different devices. There were many challenges to overcome on a team of young hackers learning how to use an API for the first time. We were absolutely thrilled when we got it to work, and could start making progress towards our goals. ## What we learned We learned how to work together to overcome challenges and to utilize the resources that we were given. We went to many workshops and reached out to mentors whenever we needed help. ## What's next for Words of the World We hope to implement a text-to-voice API to allow people to hear the words spoken to them in their target language. We also hope to eventually put the software into an app that will take pictures and access the camera roll.
## Inspiration We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do. ## What it does Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams. ## How we built it We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application. ## Challenges we ran into This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application! ## What we learned We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers. ## What's next for Discotheque If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music), similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure the availability of good-quality music.
losing
### Refer to this (<https://youtu.be/Ne9Xw_kj138>) for the intro and problem statement. # Brief ### Features 1. Automatic essay grading 2. Facial recognition 3. Text detection from images **It's becoming harder for teachers to mark hundreds of students' work within the limited free hours that they should be using for leisure. It was reported in March 2021 that 84% of teachers feel stressed, which is a shocking realization when these are the people who are supposed to be comforting and teaching the next generation. This is why we created EduMe.ai** --- ![Logo](https://i.imgur.com/rY5IDv7.jpg) This project is especially useful as it allows for moderated grades throughout schools without any bias. Therefore, it is an effective tool for assigning homework-based assessment grades when needed. --- # What is EduMe.ai **EduMe.ai** is a social media-based application that aims to connect students and reduce the workload for teachers. We identified our problem as teachers being overstressed through an increasingly complex homework load as well as a limited work-life balance. Therefore, we wanted to solve this. We do this by using AI to mark students' homework as well as invigilate online tests. In addition to this, we have created a platform that allows students to communicate privately and share public posts of their work, lives or interests. --- # Step by step # Student 1. Log in with your university ID. ![](https://i.imgur.com/2eIPh1v.png) 2. Scan and submit your essay. ![](https://i.imgur.com/xnuenx7.png) 3. Attend the online viva voce test. ![](https://i.imgur.com/jRAMxPh.png) 4. Get a notification whenever a classmate sends a new message. ![](https://i.imgur.com/3NxDRcR.png) 5. Share your work with your classmates. ![](https://i.imgur.com/ici91j4.png) 6. Publish your work or grades in the social portal. ![](https://i.imgur.com/p8Pw2qE.png) --- # Teacher 1. See all students and their assigned work in your portal. ![](https://i.imgur.com/WU3FvSe.png) 2. Assign them an essay to write on a specific topic. ![](https://i.imgur.com/FBCvQX6.png) 3. Use the grade assigned by the computer (neural network) or grade manually. ![](https://i.imgur.com/rpLUqbv.png) 4. Assign questions for their viva voce test. ![](https://i.imgur.com/wVvXhpL.png) --- # Automatic essay grading Essays are paramount for assessing academic excellence, along with linking different ideas and the ability to recall, but they are notably time-consuming when assessed manually. Manual grading takes a significant amount of an evaluator's time and hence it is an expensive process. Artificial intelligence systems offer a lot to the educational community, where graders face different kinds of difficulties while rating student writing. Analyzing student essays in abundance within a given time limit, along with providing feedback, is a challenging task. But with changing times, human-written (not handwritten) essays are easy to evaluate with the help of AEG systems. # Facial recognition Face detection using Haar cascades is a machine learning-based approach where a cascade function is trained with a set of input data. OpenCV already contains many pre-trained classifiers for faces, eyes, smiles, etc. Here we use the face classifier; you can experiment with other classifiers as well. # Text detection from image We used the Google Cloud Vision API, which can detect and extract text from images. There are two annotation features that support optical character recognition (OCR). 
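As a rough illustration of the Haar-cascade face detection described above (not EduMe.ai's exact invigilation code), OpenCV's bundled frontal-face classifier can be used like this; the webcam index and the idea of counting faces per frame are assumptions for the sketch.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def count_faces(frame) -> int:
    """Return the number of faces detected in a single BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return len(faces)

# Example: check a single frame from the default webcam (index 0 is an assumption).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(f"Faces in view: {count_faces(frame)}")
cap.release()
```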
--- # Creation Process # UI/UX To start our project properly, we decided to create a rough plan of what we wanted and where, in order to visualize the outcome of the project. Here are a few pictures of what we designed using Figma. # Frames ![BB](https://i.imgur.com/GuQuDPo.jpg) # Visual Designs ![](https://i.imgur.com/FIJ7dit.png) --- # How are we a social media application? **The Google definition of social media is "websites and applications that enable users to create and share content or to participate in social networking."** We designed our application in a way that allows students to connect through an experience they mutually share - school. We would class our project as social media as it does allow students to talk and spark conversations whilst having the freedom to post whatever they want that relates to their education. # How does this impact society? Teachers are arguably the largest group of individuals who make social change. However, with their mental health declining and education gradually becoming harder and more competitive, efficiency and productivity are just not the same as they used to be. We hope to bring back this productivity by taking work off teachers' hands and creating a centralized place for marking and moderated communication between students. --- # Our Key Takeaways ### Technologies that we used: ![Languages](https://i.imgur.com/qxPwEfz.png) ### Accomplishments that we're proud of We are happy that we were able to complete this highly complex project within the limited time frame. We truly believe that our project has huge potential to create a new era of education that helps teachers with their work-life balance as well as helping students give advice to and help each other out. ### What we learned We have learned that communication is key when undertaking a huge project such as this. ### What's next for EduMe.ai Our system has a lot of versatility, but to start effectively, we plan to implement our system virtually in small schools to see its effects on student progress as well as teachers' mental health. We also plan to make our application a safer place by filtering comments to make sure that no bullying or rude language takes place, as it is a tool that is made for school children. --- ## References 1. <https://github.com/mankadronit/Automated-Essay--Scoring> 2. iOS assets on Figma: <https://www.figma.com/file/ne0DGAm1tBVYegnXhD5NO7/Educreate.ai-Student-View> 3. iOS assets on Figma: <https://www.figma.com/file/qOOrCUIJck5biWzXs0651L/Untitled?node-id=0%3A1> ---
## Inspiration **Students** will use AI for their school work anyway, so why not bridge the gap between students and teachers and make it beneficial for both parties? **1** All of us experienced going through middle school, high school, and now college surrounded by AI-powered tools that were strongly antagonized in the classroom by teachers. As the prevalence of AI and technology increases in today’s world, we believe that classrooms should embrace AI to enhance learning, which closely parallels when calculators were introduced to the classroom. Mathematicians around the world believed that calculators would stop math education altogether, but instead they enhanced student education, allowing higher-level math such as calculus to be taught earlier. Similarly, we believe that with the proper tools and approach, AI can enhance education and teaching for both teachers and students. **2** In strained public school systems where the student-to-teacher ratio is high, such educational models can make a significant difference in a young student’s educational journey by providing individualized support, with information specific to their classroom, when a teacher can’t. One of our members who attends a Title 1 high school particularly inspired this project. **3** Teachers are constantly seeking feedback on how their students are performing and where they can improve their instruction. What better way to receive this direct feedback than machine learning analysis of the questions students are asking specifically about their class, assignments, and content? We wanted to create a way for AI-based education support to be easily and effectively integrated into classrooms, especially for early education, providing a controlled alternative to existing chat models, since the teacher can ensure accurate information about their class is integrated into the model. ## What it does Students will use AI for their school work anyway, so why not bridge the gap between students and teachers? EduGap, a Chrome Extension for Google Classroom, enhances the AI models students can use by automating the integration of class-specific materials into the model. Teachers benefit from gaining machine learning analytics on which areas students struggle with the most, through the questions they ask the model. ## How we built it Front end: we used HTML/CSS to create and deploy a 2-page Chrome extension. One page features an AI chatbot that the user can interact with; the second page is exclusively for teacher users, who can review trends from their students' most-asked prompts. Back end: built on JavaScript and Python scripts. We created custom API endpoints for retrieving information from the Google Classroom API, Google user authentication, prompting Gemini via the Gemini API, and conducting prompt analysis. Storage and vector embeddings were created using ChromaDB for the student experience. AI/ML: the LLM is Google Gemini 1.5 Flash; ChromaDB provides vector embeddings and semantic search over Google Classroom documents/information; LangChain provides vector embeddings for prompts; the DBSCAN algorithm (via scikit-learn) develops clusters for the embeddings, with PCA (also via scikit-learn) used to reduce dimensionality. General themes of the largest cluster are summarized by Gemini and shared with the teacher. ## Challenges we ran into We spent a significant portion of our time trying to integrate sponsor technologies with our application, as resources on the web are sparse and some of the functionalities are buggy. 
It was a frustrating process but we eventually overcame it by improvising. We also spent some time choosing the best clustering method for our project, and hyperparameter tuning within the constrained time period was also highly challenging, as we had to create multiple scripts to cater to different types of models and choose the best ones for our use case. ## Accomplishments that we're proud of Creating a fully functioning Chrome Extension linked to Google Classroom while integrating multiple APIs, machine learning, and database usage. Working with a team we formed right at the Hackathon! ## What we learned We learned how to work together to create a user-friendly application while integrating a complex backend. For most of us, this was our first hackathon so we learned how to learn fast and productively for the techniques, technology, and even languages we were implementing. ## What's next for EduGap **1** Functionality for identifying and switching between different classes. **2** Handling separate user profiles from a database perspective **3** A more comprehensive analytic dashboard and classroom content suggestion for teachers + more personalized education support tutoring according to the class content for students. **4** Pilot programs at schools to implement! **5** Chrome Extension Deployment **6** Finalize Google Classroom Integration and increase file compatibility
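To make the prompt-clustering step from the "How we built it" section above concrete, here is a minimal sketch of DBSCAN over PCA-reduced embeddings with scikit-learn; the embedding dimension, hyperparameters, and dummy data are assumptions rather than EduGap's tuned values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

# Assume `embeddings` is an (n_prompts, dim) array of prompt embeddings computed
# elsewhere (e.g. via LangChain); random values stand in for real data here.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 768))
prompts = [f"student question {i}" for i in range(200)]   # placeholder prompts

# Reduce dimensionality first, then group similar prompts.
reduced = PCA(n_components=20, random_state=0).fit_transform(embeddings)
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(reduced)

# Find the largest cluster (ignoring noise points labelled -1) so its prompts
# can be summarized for the teacher by the LLM.
cluster_ids, counts = np.unique(labels[labels != -1], return_counts=True)
if len(cluster_ids) > 0:
    biggest = cluster_ids[np.argmax(counts)]
    top_prompts = [p for p, l in zip(prompts, labels) if l == biggest]
    print(f"Largest cluster has {len(top_prompts)} prompts")
```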
## Inspiration: We thought that it takes a lot of time for teachers and professors to grade hundreds of assignments and we wanted to make it less time-consuming so that they can use this time doing something more efficient. ## What it does: We have a professor view homepage where they submit the answer key and we have a student view page where students can submit their assignments. We take both submissions and compare them together to grade them. ## How we built it: We created the design for the website using HTML and CSS. We used JavaScript to get information from the student and teacher submissions. We used Python to translate images into text and make it easier to compare the answer key to the student submission. We used Flask to connect JavaScript and Python. ## Challenges we ran into: We didn't know how to connect the answer key and the student submission at the beginning to save it to a database. It was also challenging to transform the images into text and make sure that it's accurate and doesn't grade falsely. ## Accomplishments that we're proud of: We are proud of using many languages and connecting them together to make the final product. We were able to always discuss any challenges we faced and how we think we should approach the problem. We had the same energy and motivation that we started with. ## What we learned: We learned how to use Flask and how to transform images into text using JavaScript. We also learned that the only way to go through this was to always discuss together what we like and what we don't like to make sure we're on the same page. ## What's next for GradeCam We want to make this into an app in the future. With the time we had, we were only able to create the website version. An app would be more accessible to everyone. Our mission is to help as many professors as we can since they have to do so many things for hundreds of students and it can get overwhelming. We want to make GradeCam globally accessible too so that students and professors from around the world would be able to use it.
partial
## Inspiration Let’s face it: getting your groceries is hard. As students, we’re constantly looking for food that is healthy, convenient, and cheap. With so many choices for where to shop and what to get, it’s hard to find the best way to get your groceries. Our product makes it easy. It helps you plan your grocery trip, helping you save time and money. ## What it does Our product takes your list of grocery items and searches an automatically generated database of deals and prices at numerous stores. We collect this data by gathering prices from both grocery store websites directly as well as couponing websites. We show you the best way to purchase items from stores near your postal code, choosing the best deals per item, and algorithmically determining a fast way to make your grocery run to these stores. We help you shorten your grocery trip by allowing you to filter which stores you want to visit, and suggesting ways to balance trip time with savings. This helps you reach a balance that is fast and affordable. For your convenience, we offer an alternative option where you could get your grocery orders delivered from several different stores by ordering online. Finally, as a bonus, we offer AI-generated suggestions for recipes you can cook, because you might not know exactly what you want right away. Also, as students, it is incredibly helpful to have a thorough recipe ready to go right away. ## How we built it On the frontend, we used **JavaScript** with **React, Vite, and TailwindCSS**. On the backend, we made a server using **Python and FastAPI**. In order to collect grocery information quickly and accurately, we used **Cloudscraper** (Python) and **Puppeteer** (Node.js). We processed data using handcrafted text searching. To find the items that most relate to what the user desires, we experimented with **Cohere's semantic search**, but found that an implementation of the **Levenshtein distance string algorithm** works best for this case, largely since the user only provides one to two-word grocery item entries. To determine the best travel paths, we combined the **Google Maps API** with our own path-finding code. We determine the path using a **greedy algorithm**. This algorithm, though heuristic in nature, still gives us a reasonably accurate result without exhausting resources and time on simulating many different possibilities. To process user payments, we used the **Paybilt API** to accept Interac E-transfers. Sometimes, it is more convenient for us to just have the items delivered than to go out and buy them ourselves. To provide automatically generated recipes, we used **OpenAI’s GPT API**. ## Challenges we ran into Everything. Firstly, as Waterloo students, we are facing midterms next week. Throughout this weekend, it has been essential to balance working on our project with our mental health, rest, and last-minute study. Collaborating in a team of four was a challenge. We had to decide on a project idea, scope, and expectations, and get working on it immediately. Maximizing our productivity was difficult when some tasks depended on others. We also faced a number of challenges with merging our Git commits; we tended to overwrite one another's code, and bugs resulted. We all had to learn new technologies, techniques, and ideas to make it all happen. Of course, we also faced a fair number of technical roadblocks working with code and APIs. However, with reading documentation, speaking with sponsors/mentors, and admittedly a few workarounds, we solved them. 
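As a rough illustration of the item-matching approach described in "How we built it" above, here is a minimal sketch of Levenshtein-distance matching; the catalog names are made up for the example, and real data would come from the scrapers.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def best_match(query: str, catalog: list[str]) -> str:
    """Return the catalog item whose name is closest to the user's entry."""
    return min(catalog, key=lambda name: levenshtein(query.lower(), name.lower()))

# Hypothetical store catalog for demonstration only.
catalog = ["whole milk 2L", "white bread", "banana", "cheddar cheese"]
print(best_match("bananas", catalog))   # -> "banana"
```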
## Accomplishments that we’re proud of We felt that we put forth a very effective prototype given the time and resource constraints. This is an app that we ourselves can start using right away for our own purposes. ## What we learned Perhaps most of our learning came from the process of going from an idea to a fully working prototype. We learned to work efficiently even when we didn’t know what we were doing, or when we were tired at 2 am. We had to develop a team dynamic in less than two days, understanding how best to communicate and work together quickly, resolving our literal and metaphorical merge conflicts. We persisted towards our goal, and we were successful. Additionally, we were able to learn about technologies in software development. We incorporated location and map data, web scraping, payments, and large language models into our product. ## What’s next for our project We’re very proud that, although still rough, our product is functional. We don’t have any specific plans, but we’re considering further work on it. Obviously, we will use it to save time in our own daily lives.
## Inspiration Among our group, we noticed we all know at least one person who, despite seeking medical and nutritional support, suffers from some unidentified food allergy. Seeing people struggle to maintain a healthy diet while "dancing" around foods that they are unsure if they should eat inspired us to do something about it; build **BYTEsense.** ## What it does BYTEsense is an AI powered tool which personalizes itself to a user's individual dietary needs. First, you tell the app what foods you ate, and rate your experience afterwards on a scale of 1-3. The app then breaks down the food into its individual ingredients, remembers your experience with them, and stores them to be referenced later. Then, after a sufficient amount of data has been collected, you can use the app to predict how **NEW** foods can affect you through our "How will I feel if I consume..." function! ## How we built it The web app consists of two main functions, the training and the predicting functions. The training function was built beginning with the receiving of a food and an associated rating. This is then passed on through the OpenAI API to be broken apart into its individual ingredients through ChatGPT's chatting abilities. These ingredients, and their associated ratings, are then saved onto an SQL database which contains all known associations to date. **Furthermore**, there is always a possibility that two different dishes share an ingredient, but your experiences are completely different! How do we adjust for that? Well naturally, that would imply that this ingredient is not the significant irritator, and we adjust the ratings according to both data points. Finally, the prediction function of the web app utilizes Cohere's AI endpoints to complete the predictions. Through use of Cohere's classify endpoint, we are able to train an algorithm which can classify a new dish into any of the three aforementioned categories, with relation to the previously acquired data! The project was all built on Replit, allowing for us to collaborate and host it all in the same place! ## Challenges we ran into We ran into many challenges over the course of the project. First, it began with our original plan of action being completely unusable after seeing updates to Cohere's API, effectively removing the custom embed models for classification and rerank. But that did not stop us! We readjusted, re-planned, and kept on it! Our next biggest problem was the coder's nightmare, a tiny syntax error in our SQLite code that repeatedly crashed our entire program. We spent over an hour locating the bug, and even more trying to figure out the issue (it was a wrong data type). And our final immense issue came quite literally out of the blue: previously, we utilized Cohere's new Coral chatbot to identify ingredients in the various inputs, but, due to an apparent glitch in the responses - we got each response sent over 15 times per prompt - we made a last-minute jump to OpenAI! Once we got past those, most other things seemed like a piece of cake - there were a lot of pieces - but we're happy to present the finished product! ## Accomplishments we are proud of: There are many things that we as a team are proud of, from overcoming trials and tribulations, refusing sleep for nearly two days, and most importantly, producing a finished product. We are proud to see just how far we have come, from having no idea how to even approach LLMs, to running a program utilizing **TWO** different ones. 
But most importantly, I think we are all proud of creating a product that really has potential to help people; using technology to better people's lives is something to be very proud of doing! ## What we learned: What did we learn? Well, that depends who you ask! I feel like each member of the team learnt an unbelievable amount, whether it be from each other or individually. For instance, I learnt a lot about Flask and front-end development from working with a proficient teammate, and I hope I gave them something to learn from too! Even more so, throughout the weekend we attended many workshops, ranging from ML to LLMs to Replit and so many others; even if we didn't use what we learnt there in this project, I have no doubt it will appear in the next one! ## What’s next for BYTEsense: All of us in the team honestly believe that BYTEsense has reached a level which is not only functional, but viable. As we keep going, all that is left is tidying up and cleaning some code, and a potentially market-ready app could be born! Who knows, maybe we'll be a sponsor one day! But either way, I am definitely using a copy when I get back home!
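Going back to BYTEsense's "How we built it" section, here is a minimal sketch of the shared-ingredient rating adjustment; the table schema and the simple running-average rule are assumptions, not the app's actual implementation.

```python
import sqlite3

# Hypothetical schema: one row per ingredient with a running average rating.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ingredients (
    name TEXT PRIMARY KEY, avg_rating REAL, samples INTEGER)""")

def record_meal(ingredients: list[str], rating: int) -> None:
    """Fold a new 1-3 rating into the running average for each ingredient."""
    for name in ingredients:
        row = conn.execute(
            "SELECT avg_rating, samples FROM ingredients WHERE name = ?",
            (name,)).fetchone()
        if row is None:
            conn.execute("INSERT INTO ingredients VALUES (?, ?, 1)", (name, rating))
        else:
            avg, n = row
            new_avg = (avg * n + rating) / (n + 1)
            conn.execute(
                "UPDATE ingredients SET avg_rating = ?, samples = ? WHERE name = ?",
                (new_avg, n + 1, name))
    conn.commit()

record_meal(["tomato", "basil", "mozzarella"], 3)
record_meal(["tomato", "cream"], 1)   # shared "tomato" now averages to 2.0
print(dict(conn.execute("SELECT name, avg_rating FROM ingredients")))
```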
## Inspiration We have two bitcoin and cryptocurrency enthusiasts on our team, and only one of us made money during its peak earlier this month. Cryptocurrencies are just too volatile, and their value depends too much on how the public feels about them. How people think and talk about a cryptocurrency affects its price to a large extent, unlike stocks, which also have the support of the market, shareholders and the company itself. ## What it does Our website scrapes thousands of social media posts and news articles to get information about the required cryptocurrency. We then analyse it using NLP and ML and determine whether the price is likely to go up or down in the very near future. We also display the current price graphs, social media and news trends (if they are positive, neutral or negative) and the popularity ranking of the selected currency on social platforms. ## How I built it The website is mostly built using node.js and Bootstrap. We use chart.js for a lot of our web illustrations, as well as Python for web scraping, performing sentiment analysis and text processing. NLTK and the Google Cloud Natural Language API were especially useful for this. We also stored our database on Firebase. **Google Cloud**: We used Firebase to efficiently store and manage our database, and the Google Cloud Natural Language API to perform sentiment analysis on hundreds of social media posts efficiently. ## Challenges I ran into It was especially hard to create, store and process the large datasets we made consisting of social media posts and news articles. Even though we only needed data from the past few weeks, it was a lot since so many people post online. Getting relevant data, free of spam and repeated posts, and actually getting useful information out of it was hard. ## Accomplishments that I'm proud of We are really proud that we were able to connect multiple streams of data, analyse them and display all relevant information. It was amazing to see when our results matched the past peaks and crashes in bitcoin price. ## What I learned We learned how to scrape relevant data from the web, clean it and perform sentiment analysis on it to make predictions about future prices. Most of this was new to our team members and we definitely learned a lot. ## What's next for We hope to further increase the functionality of our website. We want users to have an option to give the website permission to automatically buy and sell cryptocurrencies when it determines it is the best time to do so. ## Domain name We bought the domain name get-crypto-insights.online for the best domain name challenge since it is relevant to our project. If I found a website of this name on the internet, I would definitely visit it to improve my cryptocurrency trading experience. ## About Us We are Discord team #1, with @uditk, @soulkks, @kilobigeye and @rakshaa
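As a minimal sketch of the sentiment-scoring step in the project above, using NLTK's VADER analyzer (one of the tools mentioned in "How I built it"); the example posts are made up, and the real pipeline aggregates thousands of scraped posts and articles.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Placeholder posts standing in for scraped social media data.
posts = [
    "Bitcoin just hit a new high, feeling bullish!",
    "This crash wiped out my savings, never touching crypto again.",
]

# Average compound score in [-1, 1]: above 0 leans positive, below 0 negative.
scores = [sia.polarity_scores(p)["compound"] for p in posts]
avg = sum(scores) / len(scores)
print(f"average sentiment: {avg:+.2f}",
      "=> leaning positive" if avg > 0 else "=> leaning negative")
```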
partial
## Inspiration The inspiration for this project came from the fact that all of us would send each other pictures of our outfits in the morning, asking "thoughts on this fit?" We thought there was a better way to do this. ## What it does A user can upload his/her fit to our platform, and get feedback from other users on the fit in the form of likes and dislikes. It's a very simple idea that can save a lot of time. ## How we built it We ran everything with a node backend and a Mongo database. The front end was written simply in Bootstrap. ## Challenges we ran into It was hard to store the likes and dislikes and pull that data on every refresh. Storing images was also much more challenging than we anticipated. ## What we learned ## What's next for FitPick We feel as if there is a way to expand FitPick from simply 'likes' and 'dislikes'. Perhaps there could be a way that users could *critique* other users' fits in a very simple way. Or maybe we ourselves could build a model that would suggest new items based on a user's tastes. All in all, I see FitPick turning into a virtual marketplace of some sort.
## Inspiration We wanted to find a good way to make fashion more accessible and eco-friendly at the same time. The fast fashion industry generates a lot of packaging and clothing waste, so we wanted to find a more sustainable alternative to conventional online shopping. ## What it does Our site allows the user to browse items sold at local thrift shops as well as those posted for sale by another user. We included a weather check feature that suggests appropriate clothing for the season and weather. Finally, there is a Mix and Match tab that allows the user to build the perfect outfit using clothes from local vendors. ## How we built it Thrift the Fit is hosted on Firebase and a large portion is coded in C#. Building off a website template, we added pages and functionality in HTML/CSS and JavaScript. ## Challenges we ran into The carousel feature of the Mix and Match page was particularly difficult to implement. We also struggled to enable the geolocator for the weather check feature. ## Accomplishments that we're proud of The carousel viewer was difficult to get right, so getting that right was particularly satisfying. In addition, integrating the multiple parts of the web app took some testing and constant tweaking, so the whole site working together is a welcome sight :) ## What's next for Thrift the Fit We plan to add some more options for the individual user. We would implement user accounts with simple password protection. Each user will be able to add their own personal wardrobe to painlessly choose an outfit for any occasion.
## Inspiration Our team was united in our love for animals, and our anger about the thousands of shelter killings that happen every day due to overcrowding. In order to raise awareness and educate others about the importance of adopting rather than shopping for their next pet, we framed this online web application from a dog's perspective of the process of trying to get adopted. ## What it does In *Overpupulation,* users can select a dog who they will control in order to try to convince visitors to adopt them. To illustrate the realistic injustices some breeds face in shelters, different dogs in the game have different chances of getting adopted. After each rejection from a potential adopter, we expose some of the faulty reasoning behind their choices to try to debunk common misconceptions. At the conclusion of the game, we present ways for individuals to get involved and support their local shelters. ## How we built it This web application is built in Javascript/JQuery, HTML, and CSS. ## Accomplishments that we're proud of For most of us, this was our first experience working in a team coding environment. We all walked away with a better understanding of git, the front-end languages we utilized, and design. We have purchased the domain name overpupulation.com, but are still trying to work through redirecting issues. :)
losing
## Inspiration With the effects of climate change becoming more and more apparent, we wanted to make a tool that allows users to stay informed on current climate events and stay safe by being warned of dangerous climate events nearby. ## What it does Our web app has two functions. One of the functions is to show a map of the entire world that displays markers on locations of current climate events like hurricanes, wildfires, etc. The other function allows users to submit their phone numbers to us, which subscribes the user to regular SMS updates through Twilio if there are any dangerous climate events in their vicinity. This SMS update is sent regardless of whether the user has the app open or not, allowing users to be sure that they will get the latest updates in case of any severe or dangerous weather patterns. ## How we built it We used Angular to build our frontend. With that, we used the Google Maps API to show the world map along with markers, with information we got from our server. The server gets this climate data from the NASA EONET API. The server also uses Twilio along with Google Firebase to allow users to sign up and receive text message updates about severe climate events in their vicinity (within 50 km). ## Challenges we ran into For the front end, one of the biggest challenges was the markers on the map. Not only did we need to place markers on many different climate event locations, but we wanted the markers to have different icons based on the weather events. We also wanted to be able to filter the marker types for a better user experience. For the back end, we had challenges figuring out Twilio to be able to text users, Google Firebase for user sign-in, and MongoDB for database operations. Using these tools was a challenge at first because this was our first time using them. We also ran into problems trying to accurately calculate a user's vicinity to current events due to the complex nature of geographical math, but after a lot of number crunching, and the use of a helpful library, we were able to accurately determine if any given event is within 50 km of a user's position based solely on the coordinates. ## Accomplishments that we're proud of We are really proud to make an app that not only informs users but can also help them in dangerous situations. We are also proud of ourselves for finding solutions to the tough technical challenges we ran into. ## What we learned We learned how to use all the different tools that we used for the first time while making this project. We also refined our front-end and back-end experience and knowledge. ## What's next for Natural Event Tracker We want to perhaps make the map run faster and have more features for the user, like more information, etc. We also are interested in finding more ways to help our users stay safer during future climate events that they may experience.
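As a rough sketch of the 50 km vicinity check described in the Natural Event Tracker challenges above, the standard haversine formula can be used; the coordinates and event list below are placeholders, with real data coming from NASA EONET.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))   # Earth radius is roughly 6371 km

def events_nearby(user, events, radius_km=50):
    """Return the events within `radius_km` of the user's coordinates."""
    return [e for e in events
            if haversine_km(user[0], user[1], e["lat"], e["lon"]) <= radius_km]

# Made-up user position and event list for demonstration only.
user = (43.65, -79.38)
events = [{"title": "Wildfire A", "lat": 43.9, "lon": -79.5},
          {"title": "Hurricane B", "lat": 25.7, "lon": -80.2}]
print([e["title"] for e in events_nearby(user, events)])   # -> ['Wildfire A']
```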
## Inspiration We were interested in disaster relief for those impacted by hurricanes like Dorian and Maria, for people that don't know what areas are affected and for first responders that don't know which infrastructure is damaged and can't deliver appropriate resources in time. ## What it does This website shows the location of the nearest natural disasters. ## How we built it We used the Amazon API, Google Cloud, the Maps API, Python and JavaScript. ## Challenges we ran into We have not been to a hackathon before so we weren't sure about how in-depth or general our problem should be. We started with an app that first responders can use during a natural disaster to input vitals. ## Accomplishments that we're proud of A website that can map the GPS locations of flood data that we are confident in and a uniquely trained model for urban flooding. ## What we learned We learned about Google Cloud APIs, AWS S3 Visual Recognition Software and about how to operate in a hackathon. ## What's next for Crisis Apps
## Inspiration Looking around at the younger generations can be saddening. Everyone is so attached to their phones, needing to be connected. Needing to update their status, needing to update their followers, needing to send questionable images on SnapChat. Being **so** connected can remove you from reality and all that it has to offer. If we'd just put down our phones for only a moment, we'd all see how awesome lasers are. They're like seriously cool. ## What it does So we set out trying to make a file transfer mechanism that sends files via lasers. I know, COOL, RIGHT? Well, that didn't really work so we ended up making an instant messaging platform via lasers. Until that didn't work either so we made a "one-way, only really short sentences" transmitter via lasers. ## How we built it Lasers. ## Challenges we ran into Turns out laser based file transfers are not as practical as you'd think. A+ for style points, but it takes about a minute to send a word. And the receiver has to at least be within a line of sight and close enough that light dispersion doesn't affect the goods. There were so many challenges to this one. Syncing the clocks is a nightmare, reading and writing out of the same serial port is tough and Arduinos have less memory than that fish from Finding Nemo that has difficulties remembering things... what's her name again? ## Accomplishments that we're proud of Lasers. ## What we learned Lasers. And what they shouldn't be used for. ## What's next for Lazier Laser Oh, this project is so retired.
partial
# InstaQuote InstaQuote is an SMS based service that allows users to get a new car insurance quote without the hassle of calling their insurance provider and waiting in a long queue. # What Inspired You We wanted a more convenient way to get a quote on auto-insurance in the event of a change within your driver profile (e.g. demerit point change, license class increase, new car make, etc.). Since insurance rates are not something that change often, we found it appropriate to create an SMS based service, thus saving the hassle of installing an app that would rarely be used as well as the time of calling your insurance provider to get a simple quote. As a company, this service would be useful for clients because it gives them peace of mind that there is an overarching service which can be texted anytime for an instant quote. # What We Learned We learned how to connect APIs using Standard Library and we also learned JavaScript. Additionally, we learned how to use backend databases to store information and manipulate that data within the database. # Challenges We Faced We had some trouble with understanding and getting used to JavaScript syntax.
## Inspiration Currently the insurance claims process is quite labour intensive. A person has to investigate the car to approve or deny a claim, and so we aim to alleviate this cumbersome process and make it smooth and easy for policyholders. ## What it does Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout. ## How we built it The frontend is built with just static HTML, CSS and Javascript. We used Materialize CSS to achieve some of our UI mocks created in Figma. Conveniently, we have also created our own "state machine" to make our web-app more responsive. ## Challenges we ran into > > I've never done any machine learning before, let alone trying to create a model for a hackathon project. I definitely took quite a bit of time to understand some of the concepts in this field. *-Jerry* > > > ## Accomplishments that we're proud of > > This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry* > > > ## What we learned > > Attempting to do a challenge with very little description of what the challenge actually is asking for is like a toddler a man stranded on an island. *-Jerry* > > > ## What's next for Quick Quote Things that are on our roadmap to improve Quick Quote: * Apply Google Analytics to track users' movement and collect feedback to enhance our UI. * Enhance our neural network model to enrich our knowledge base. * Train our model with more evaluation to give more depth * Include ads (mostly auto company ads).
## Inspiration We were inspired by the sheer amount of data and what mysteries we could find. ## What it does Our project involves a summary of different features we considered during data analysis. These involve interesting discussions of very unusual and unexpected trends in the Life Insurance Quoting data. We used the remaining features that do affect the premiums as features to train our machine learning model. The idea is to be able to predict quotes or choices of profiles. ## How we built it We analyzed the data using Python's pandas library, R, and a couple of JS visualization modules. The machine learning uses TensorFlow. ## Challenges we ran into We were unable to get the full data from the source due to unstable and slow internet until Saturday night. ## Accomplishments that we're proud of We were nonetheless able to inspect the data thoroughly, and produce interesting data visualizations and findings that lead to open-ended discussions. We also have a great UI that can be useful to customers shopping for life insurance. ## What we learned We learned a lot about analyzing data using pandas, Flask, and also TensorFlow. ## What's next for Vitech Data Analysis The data given is very strange. It would be interesting to look into how those data were acquired, dig into the reasons certain unexpected phenomena were happening, and so forth. From that, perhaps a better quote-producing formula/model can be produced that takes into consideration a greater number of more relevant factors.
winning
## Inspiration Our inspiration came from our own experiences, our friends' experiences, and experiences we found online about traveling. We came to the conclusion that young adults of our generation are prioritizing different aspects of their travel experiences than what current tech tools help us find. A few aspects were a preference for authenticity, a dislike of overcrowded tourist spots, and the desire to find hidden gems. ## What it does Wisp is a travel location information exchange and crowdsourcing app. Using a tokenized system, in exchange for sharing your own knowledge about your favorite local hidden gems, you gain access to search results from specific destinations. This limited exchange system aims to maintain the quality of the entries made to the app and foster a sense of community. ## How we built it Wisp is a web app built using React. ## Challenges we ran into We ran into issues in initial ideation that ended up reducing our time to be able to work on the product itself. Wisp started as a very vague and general idea concerning itineraries. Travel recommendations have been done many times, so we really needed to take time to get to the core of what we wanted to accomplish. We ultimately came up with a design we liked though. ## Accomplishments that we're proud of We're proud of how we were able to utilize our skills on this project in ways that complemented each other. We're a team that was formed last minute, with two of our team members participating remotely, and spanning 3 time zones and two continents. Thus, it honestly was a big accomplishment to even be able to put this project together. ## What we learned We learned a lot about what sort of pain points and issues we should consider when coming up with a pitch in general; it definitely exercised our analytical skills. A lot of these lessons came from attending YHack panels. We all also definitely grew technically, growing familiar with React and integrating it with other back-end programs. ## What's next for Wisp We are not sure what is next for Wisp, the project itself, but we are definitely carrying all the skills and assets we built for Wisp into our future projects!
## Inspiration The inspiration for **Voyago** came from the desire to create a personalized travel assistant that makes trip planning easy and fun. We noticed that many people struggle to plan trips that cater to both their budget and preferences, especially when traveling in groups. We wanted to build a tool that simplifies the process by generating itineraries tailored to individual interests, group dynamics, and real-time travel conditions. ## What it does **Voyago** is a travel planning tool that creates customized itineraries for travelers. By taking into account preferences like budget, travel style, food preferences, and transportation options, it suggests activities, restaurants, and attractions that match the group's interests. It also considers the duration of the trip and adjusts the itinerary to fit within the available time, ensuring the best experience for all travelers. ## How we built it We built **Voyago** as an iOS application using **Swift** for the entire development process. Our tech stack includes: * **Frontend**: Swift, leveraging SwiftUI for a responsive and intuitive user interface that allows users to input their preferences easily. * **API Integrations**: We utilized the **Gemini API** for personalized travel recommendations and the **Google Places API** to suggest restaurants, attractions, and activities based on the traveler's location and preferences. * **Backend Services**: We used **Firebase Firestore** for real-time database management, enabling us to store user data, itineraries, and preferences efficiently. This also allows for easy updates and retrieval of information. * **Analytics**: We implemented **Firebase Analytics** to track user engagement and behavior, helping us understand how users interact with the app and identify areas for improvement. ## Challenges we ran into * **Data integration**: Integrating the Gemini API and Google Places API required significant troubleshooting to ensure compatibility and data consistency. * **User experience**: Designing an intuitive user interface that accommodates varying traveler preferences while maintaining a seamless experience was a challenging task. * **Personalization**: Ensuring the generated itineraries were unique to each user’s preferences and accounted for group dynamics required advanced logic and thoughtful design. ## Accomplishments that we're proud of * Successfully developing a fully functional iOS app that delivers personalized itineraries based on user input and real-time data. * Building a responsive and user-friendly interface entirely in Swift, enhancing the overall user experience. * Efficiently integrating external APIs and Firebase services to provide a diverse range of activity and restaurant suggestions that cater to different tastes and budgets. ## What we learned Through this project, we learned: * How to effectively use Swift for iOS development, improving our skills in app design and architecture. * The intricacies of working with multiple APIs, particularly in how to manage and display data dynamically. * The power of Firebase Firestore for real-time data management and how analytics can guide development decisions based on user behavior. ## What's next for Voyago We plan to continue improving **Voyago** by: * Incorporating machine learning to offer smarter recommendations based on past trips and preferences. * Adding more transportation options and considering factors like weather and local events in the itinerary generation. 
* Expanding the app's features to include expense tracking and social sharing capabilities, allowing users to share their plans with friends and document their experiences.
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search for a “hub” using the Google Maps API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and an integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different time zones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
losing
## The problem it solves : During these tough times when humanity is struggling to survive, **it is essential to maintain social distancing and proper hygiene.** As big crowds are now approaching the **vaccination centres**, it is obvious that there will be overcrowding. This project implements virtual queues which will ensure social distancing and allow people to stand apart instead of crowding near the counter or the reception site, which is an evolving necessity in COVID settings! **“With Quelix, you can just scan the QR code, enter the virtual world of queues and wait for your turn to arrive. Timely notifications will keep the user updated about their position in the Queue.”** ## Key-Features * **Just scan the QR code!** * **Enter the virtual world of queues and wait for your turn to arrive.** * **Timely notifications/sound alerts will keep the user updated about their position/time left in the Queue.** * **Automated Check-in Authentication System for Following the Queue.** * **Admin Can Pause the Queue.** * **Admins now have the power to remove anyone from the queue** * Reduces Crowding to a Great Extent. * Efficient Operation with Minimum Cost/No Additional Hardware Required * Completely Contactless ## Challenges we ran into : * Simultaneous synchronisation of admin & queue members with instant updates. * Implementing a queue data structure in MongoDB * Building an OTP API from scratch using Flask.
```
while(quelix.on):
    if covid_cases.slope<0:
        print(True)
>>> True
```
[Github repo](https://github.com/Dart9000/Quelix2.0) [OTP-API-repo](https://github.com/Dart9000/OTP-flask-API) [Deployment](https://quelix.herokuapp.com/)
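As a hedged sketch of the "queue data structure in MongoDB" challenge listed above (not Quelix's actual schema), a FIFO queue could be modelled with pymongo like this; the connection string, collection name and fields are assumptions.

```python
import datetime
from pymongo import MongoClient, ASCENDING

# Assumed local connection and collection names for illustration only.
client = MongoClient("mongodb://localhost:27017")
queue = client["quelix"]["queue"]

def join_queue(phone: str) -> None:
    """Append a visitor to the back of the virtual queue."""
    queue.insert_one({"phone": phone, "joined_at": datetime.datetime.utcnow()})

def position_of(phone: str) -> int:
    """1-based position in line, used for the notification/SMS updates."""
    me = queue.find_one({"phone": phone})
    return queue.count_documents({"joined_at": {"$lte": me["joined_at"]}})

def serve_next():
    """Admin action: pop the person at the front of the queue."""
    return queue.find_one_and_delete({}, sort=[("joined_at", ASCENDING)])

def remove(phone: str) -> None:
    """Admin action: remove a specific person from the queue."""
    queue.delete_one({"phone": phone})
```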
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
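As a rough illustration of the order-extraction step (our real pipeline prompts an AI model on the transcript; the tiny menu and keyword matching below are simplified stand-ins):

```python
# Simplified stand-in for the AI order-extraction step; menu items are illustrative.
MENU = {"burger": 6.99, "fries": 2.99, "milkshake": 4.49}
SIZES = {"small", "medium", "large"}

def extract_order(transcript: str) -> list[dict]:
    """Scan a spoken transcript for menu items and an optional size modifier."""
    words = transcript.lower().split()
    order = []
    for i, word in enumerate(words):
        item = word.rstrip(",.!?")
        if item in MENU:
            size = words[i - 1] if i > 0 and words[i - 1] in SIZES else "medium"
            order.append({"item": item, "size": size, "price": MENU[item]})
    return order

print(extract_order("Can I get a large burger and some fries please"))
# [{'item': 'burger', 'size': 'large', ...}, {'item': 'fries', 'size': 'medium', ...}]
```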
## Inspiration We can make customer support so, so much better, for both customers and organizations alike. This project was inspired by the frankly terrible wait times and customer support information that the DriveTest centres across Ontario have. ## What it does From a high-level, our platform integrates analytics of both in-person and online customer support lines (using computer vision and Genesys's API, respectively), and uses those to provide customers real-time data for which customer support channel to utilize at a given time. It also provides organizations with analytics and metrics to be able to optimize their customer support pipelines. Speaking about the internals, our platform utilizes computer vision to determine the number of people in a line at any given moment, then uses AI to calculate the approximate waiting time for that line. This usecase is meant for in-person customer support interactions. Our platform also uses Genesys's Developer API (EstimateWaitTime) to calculate, for any given organization and queue, the wait time and backlog of customer support cases. It then combines these two forms of customer support, allowing customers to make informed decisions as to where to go for customer support, and giving organizations robust analytics for which customer support channels can be further optimized (such as hiring more people to serve as chat agents). ## How we built it OpenCV along with a custom algorithm for people counting within a certain bounding area was used for the Computer Vision aspect to determine the number of people in a line, in-person. This data is sent to a Flask server. We also used Genesys's API along with simulated Genesys customer-agent interactions to determine how long the wait time is for online customer support. From the Flask server, this data goes to two different front-ends: 1. For customers: customers simply see a dashboard with the wait time for online customer support, as well as the wait times at nearby branches of the company (say, Ontario DriveTest centres) – created using Bulma and Vanilla JS 2. For organizations: organizations see robust analytics regarding wait times at certain intervals, certain points in the day, etc. They can also compare and contrast online and in-person customer support analytics. Organizations can use these metrics to optimize customer support to reduce the load on certain employees, and by making customer support more efficient for customers. ## Challenges we ran into Working with many services (Computer Vision + Python, Flask backend, Vanilla JS frontend, Vue.js frontend) was a challenge, since we had to find a way to pass the data from one service to another, reliably. We decided to fix this by using a key-value store for redundancy to ensure data is not lost through numerous layers of transmission. ## Accomplishments that we're proud of Creating a working product using Genesys's API! ## What we learned The opportunity that lies within the field of customer support and unifying both online and in-person components of it. Also, the opportunities that Genesys's API holds in terms of empowering organizations to make their customer support as efficient as possible. ## What's next for QuicQ We wanted to use infrared sensors instead of cameras to detect people in a line in-person, due to privacy concerns, but we couldn't find infrared sensors for this hackathon! So, we will integrate them in a future version of QuicQ.
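To make the in-person half more concrete, here is a hedged sketch of turning one camera frame into a line count and a wait estimate (the real pipeline uses our custom counting algorithm; the HOG detector, region coordinates and 3-minute service time below are illustrative stand-ins):

```python
# Hedged sketch: count people inside a line region and estimate the wait.
# The region coordinates and per-customer service time are assumptions.
import cv2

LINE_REGION = (100, 200, 500, 650)    # x1, y1, x2, y2 of the queue area in the frame
AVG_SERVICE_MIN = 3.0                 # assumed minutes to serve one customer

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def estimate_wait(frame):
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    x1, y1, x2, y2 = LINE_REGION
    in_line = [
        (x, y, w, h) for (x, y, w, h) in boxes
        if x1 <= x + w / 2 <= x2 and y1 <= y + h / 2 <= y2
    ]
    return len(in_line), len(in_line) * AVG_SERVICE_MIN

frame = cv2.imread("queue_snapshot.jpg")       # placeholder input frame
people, wait_minutes = estimate_wait(frame)
print(f"{people} people in line, ~{wait_minutes:.0f} min wait")
```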
winning
## Inspiration Growing up in the early 2000s, Communiplant's founding team knew what it was like to grow up in vibrant communities, interconnected both interpersonally and with nature. Today's post-COVID fragmented society lacks the community and optimism that kept us going. The lack of optimism is especially evident through our climate crisis: an issue that falls outside most individuals' loci of control. That said, we owe it to ourselves and future generations to keep hope for a better future alive, **and that future starts on the communal level**. Here at Communiplant, we hope to help communities realize the beauty of street-level biodiversity, shepherding the optimism needed for a brighter future. ## What it does Communiplant allows community members to engage with their community while realizing their jurisdiction's potential for sustainable development. Firstly, Communiplant analyzes satellite imagery using machine learning and computer vision models to calculate the community's NDMI vegetation indices. Beyond that, community members can individually contribute to their community on Communiplant by uploading images of various flora and fauna they see daily in their community. Using computer vision models, our system can label the plant life uploaded to the system, serving as a mosaic representing the community's biodiversity. Finally, to further engage with their communities, users can take part in a variety of community events. ## How we built it Communiplant is a full-stack web application developed using React & Vite for the frontend, and Django on the backend. We used AWS's cloud suite for relational data storage: storing user records. Beyond that, however, we used AWS to implement the algorithms necessary for the complex categorizations that we needed to make. Namely, we used AWS S3 object storage to maintain our various clusters. Finally, we used a variety of browser-level APIs, including but not limited to the Google Maps API and the Google Earth Engine API. ## Challenges we ran into While UOttahack6 has been incredibly rewarding, it has not been without its challenges. Namely, we found that attempting to use bleeding-edge new technologies that we had little experience with in conjunction led to a host of technical issues. First and most significantly, we found it difficult to implement cloud-based artificial intelligence workflows for the first time. We also had a lot of issues with some of the browser-level maps APIs, as we found that the documentation for some of those resources was insufficient for our experience level. ## Accomplishments that we're proud of Regardless of the final result, we are happy to have made a final product with a concrete use case that has the potential to become a major player in the sustainability space. All in all, however, we are mainly proud that through it all we were able to show technical resilience. There were many late-night moments where we didn't really see a way out, or where we would have to cut out a significant amount of functionality from our final product. Regardless, we pushed through, and those experiences are what we will end up remembering UOttahack for. ## What's next for Communiplant The future is bright for Communiplant with many features on the way. Of these, the most significant are related to the mapping functionality. Currently, user-inputted flora and fauna live only in a photo album on the community page. Going forward, we hope to have images linked to geographic points, or pins on the map.
Regardless of Communiplant's future actions, however, we will keep our guarantee to support sustainability on all scales.
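As a toy illustration of the NDMI index mentioned above (a minimal NumPy sketch, not our actual Earth Engine pipeline; the band arrays and values are placeholders):

```python
# Minimal NDMI sketch: band arrays are assumed to be pre-loaded NumPy arrays
# (e.g. Sentinel-2 B8A for NIR and B11 for SWIR); the sample values are made up.
import numpy as np

def ndmi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    nir = nir.astype("float32")
    swir = swir.astype("float32")
    denom = nir + swir
    denom[denom == 0] = 1e-6           # avoid division by zero on empty pixels
    return (nir - swir) / denom        # NDMI in [-1, 1]; higher = more moisture

nir = np.array([[0.32, 0.40], [0.28, 0.35]])
swir = np.array([[0.20, 0.22], [0.25, 0.10]])
print(ndmi(nir, swir))
```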
## Inspiration Across the globe, a critical shortage of qualified teachers poses a significant challenge to education. The average student-to-teacher ratio in primary schools worldwide stands at an alarming **23:1!** In some regions of Africa, this ratio skyrockets to an astonishing **40:1**. [Research 1](https://data.worldbank.org/indicator/SE.PRM.ENRL.TC.ZS) and [Research 2](https://read.oecd-ilibrary.org/education/education-at-a-glance-2023_e13bef63-en#page11) As populations continue to explode, the demand for quality education has never been higher, yet the *supply of capable teachers is dwindling*. This results in students receiving neither the attention nor the **personalized support** they desperately need from their educators. Moreover, a staggering **20% of students** experience social anxiety when seeking help from their teachers. This anxiety can severely hinder their educational performance and overall learning experience. [Research 3](https://www.cambridge.org/core/journals/psychological-medicine/article/much-more-than-just-shyness-the-impact-of-social-anxiety-disorder-on-educational-performance-across-the-lifespan/1E0D728FDAF1049CDD77721EB84A8724) While many educational platforms leverage generative AI to offer personalized support, we envision something even more revolutionary. Introducing **TeachXR—a fully voiced, interactive, and hyper-personalized AI** teacher that allows students to engage just like they would with a real educator, all within the immersive realm of extended reality. *Imagine a world where every student has access to a dedicated tutor who can cater to their unique learning styles and needs. With TeachXR, we can transform education, making personalized learning accessible to all. Join us on this journey to revolutionize education and bridge the gap in teacher shortages!* ## What it does **Introducing TeachVR: Your Interactive XR Study Assistant** TeachVR is not just a simple voice-activated Q&A AI; it’s a **fully interactive extended reality study assistant** designed to enhance your learning experience. Here’s what it can do: * **Intuitive Interaction**: Use natural hand gestures to circle the part of a textbook page that confuses you. * **Focused Questions**: Ask specific questions about the selected text for summaries, explanations, or elaborations. * **Human-like Engagement**: Interact with TeachVR just like you would with a real person, enjoying **milliseconds response times** and a human voice powered by **Vapi.ai**. * **Multimodal Learning**: Visualize the concepts you’re asking about, aiding in deeper understanding. * **Personalized and Private**: All interactions are tailored to your unique learning style and remain completely confidential. ### How to Ask Questions: 1. **Circle the Text**: Point your finger and circle the paragraph you want to inquire about. 2. **OK Gesture**: Use the OK gesture to crop the image and submit your question. ### TeachVR's Capabilities: * **Summarization**: Gain a clear understanding of the paragraph's meaning. TeachVR captures both book pages to provide context. * **Examples**: Receive relevant examples related to the paragraph. * **Visualization**: When applicable, TeachVR can present a visual representation of the concepts discussed. * **Unlimited Queries**: Feel free to ask anything! If it’s something your teacher can answer, TeachVR can too! ### Interactive and Dynamic: TeachVR operates just like a human. You can even interrupt the AI if you feel it’s not addressing your needs effectively! 
## How we built it **TeachXR: A Technological Innovation in Education** TeachXR is the culmination of advanced technologies, built on a microservice architecture. Each component focuses on delivering essential functionalities: ### 1. Gesture Detection and Image Cropping We have developed and fine-tuned a **hand gesture detection system** that reliably identifies gestures for cropping based on **MediaPipe gesture detection**. Additionally, we created a custom **bounding box cropping algorithm** to ensure that the desired paragraphs are accurately cropped by users for further Q&A. ### 2. OCR (Word Detection) Utilizing **Google AI OCR service**, we efficiently detect words within the cropped paragraphs, ensuring speed, accuracy, and stability. Given our priority on latency—especially when simulating interactions like pointing at a book—this approach aligns perfectly with our objectives. ### 3. Real-time Data Orchestration Our goal is to replicate the natural interaction between a student and a teacher as closely as possible. As mentioned, latency is critical. To facilitate the transfer of image and text data, as well as real-time streaming from the OCR service to the voiced assistant, we built a robust data flow system using the **SingleStore database**. Its powerful real-time data processing and lightning-fast queries enable us to achieve sub-1-second cropping and assistant understanding for prompt question-and-answer interactions. ### 4. Voiced Assistant To ensure a natural interaction between students and TeachXR, we leverage **Vapi**, a natural voice interaction orchestration service that enhances our feature development. By using **DeepGram** for transcription, **Google Gemini 1.5 flash model** as the AI “brain,” and **Cartesia** for a natural voice, we provide a unique and interactive experience with your virtual teacher—all within TeachXR. ## Challenges we ran into ### Challenges in Developing TeachXR Building the architecture to keep the user-cropped image in sync with the chat on the frontend posed a significant challenge. Due to the limitations of the **Meta Quest 3**, we had to run local gesture detection directly on the headset and stream the detected image to another microservice hosted in the cloud. This required us to carefully adjust the size and details of the images while deploying a hybrid model of microservices. Ultimately, we successfully navigated these challenges. Another difficulty was tuning our voiced assistant. The venue we were working in was quite loud, making background noise inevitable. We had to fine-tune several settings to ensure our assistant provided a smooth and natural interaction experience. ## Accomplishments that we're proud of ### Achievements We are proud to present a complete and functional MVP! The cropped image and all related processes occur in **under 1 second**, significantly enhancing the natural interaction between the student and **TeachVR**. ## What we learned ### Developing a Great AI Application We successfully transformed a solid idea into reality by utilizing the right tools and technologies. There are many excellent pre-built solutions available, such as **Vapi**, which has been invaluable in helping us implement a voice interface. It provides a user-friendly and intuitive experience, complete with numerous settings and plug-and-play options for transcription, models, and voice solutions. ## What's next for TeachXR We’re excited to think of the future of **TeachXR** holds even greater innovations! 
We’ll be considering **adaptive learning algorithms** that tailor content in real time based on each student’s progress and engagement. Additionally, we will work on integrating **multi-language support** to ensure that students from diverse backgrounds can benefit from personalized education. With these enhancements, TeachXR will not only bridge the teacher shortage gap but also empower every student to thrive, no matter where they are in the world!
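For a rough idea of the circle-then-crop step described above (a simplified sketch; in the real system the fingertip points come from MediaPipe hand tracking on the headset, and the file names here are placeholders):

```python
# Hedged sketch of the circle-then-crop step: the fingertip trace is assumed to
# come from MediaPipe hand tracking; here it is just a list of (x, y) pixel tuples.
import numpy as np
import cv2

def crop_circled_region(page_image, fingertip_trace, margin=20):
    """Bound the path the index finger traced and crop that part of the page."""
    pts = np.array(fingertip_trace, dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    h_img, w_img = page_image.shape[:2]
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, w_img), min(y + h + margin, h_img)
    return page_image[y0:y1, x0:x1]

page = cv2.imread("page_snapshot.jpg")                    # placeholder capture
trace = [(220, 340), (400, 330), (410, 460), (230, 470)]  # sample circling path
crop = crop_circled_region(page, trace)
cv2.imwrite("question_crop.jpg", crop)                    # handed on to the OCR step
```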
## Inspiration The inspiration for this project comes from my town back in New York. There was an environmental committee in my town that would create projects and clean-ups, where the community can work together to increase sustainability in the community. An example of this, would be every month EcoPel (the environmental committee) would have a town clean up where people gather and go to different places around town to pick up garbage. Our inspiration for the project came from this committee and kinds of environmental sustainable events that they held. ## What it does Our site increases transparency across a community, and allows for increased collaboration to help the environment at all times. Our site uses google maps, and allows users to drop a pin on the map at a spot in the area where there is a lot of garbage, and needs cleaning up. Dropping a pin then prompts the user to input their name and date of when they plan to clean to help clean it up. This information, as well as the location of the marker will then show up in a data table on the website, so that other users can see when others in the community are going to clean up areas of their town/city. ## How I built it We used the Google Firebase to create a webpage online. We then used the Google Maps API to get google maps on our webpage. We then used HTML, javascript and CSS to change the API to allow popups and contact forms directly on the map. We also changed the formatting of the webpage, and allowed the data inputted by the user to be shown on the website data table for other users to see. ## Challenges I ran into The biggest challenge we ran into was getting a good idea for our project. We originally had a different idea, which we worked on for 5-6 hours, and then had to scratch it because of technical difficulties. So we had a smaller amount of time to implement this project and get it to the level that we wanted it to be at. Other difficulties we faced as a group were changing features of the Google Maps API, as none of us have used it before. ## Accomplishments that I'm proud of As a group I'm proud that we were able to quickly change paths and come up with something new, with time not being on our side. We all stayed up late working really hard, and I'm proud of the amount of work everyone put in, and the amount of passion everyone showed towards the project. I'm also proud of our group's idea, as we think the environment is a huge concern in today's world, and we all want to make a difference. ## What I learned I learned how to use the Google Maps API, as well as the Google Firebase. We all got more comfortable coding in HTML, CSS and javascript, as those were not our strongest languages coming into the project. We also learned how to divvy up the work more efficiently, so that we can achieve goals quicker. ## What's next for WeClean WeClean still has much potential to grow as an idea, and also grow in terms of its development. Some ideas in the future we have as a group would be to make WeClean into an app, so that people on their mobiles can access it quicker. We could also create an account/points system in the future, to incentive people to clean more, as well as have accounts for increased security.
winning
## Inspiration Toronto is famous because it is tied for the second longest average commute time of any city (96 minutes, both ways). People love to complain about the TTC and many people have legitimate reasons for avoiding public transit. With our app, we hope to change this. Our aim is to change the public's perspective of transit in Toronto by creating a more engaging and connected experience. ## What it does We built an iOS app that transforms the subway experience. We display important information to subway riders, such as ETA, current/next station, as well as information about events and points of interest in Toronto. In addition, we allow people to connect by participating in a local chat and multiplayer games. We have small web servers running on ESP8266 micro-controllers that will be implemented in TTC subway cars. These micro-controllers create a LAN (Local Area Network) Intranet and allow commuters to connect with each other on the local network using our app. The ESP8266 micro-controllers also connect to the internet when available and can send data to Microsoft Azure. ## How we built it The front end of our app is built using Swift for iOS devices, however, all devices can connect to the network and an Android app is planned for the future. The live chat section was built with JavaScript. The back end is built using C++ on the ESP8266 micro-controller, while a Python script handles the interactions with Azure. The ESP8266 micro-controller runs in both access point (AP) and station (STA) modes, and is fitted with a button that can push data to Azure. ## Challenges we ran into Getting the WebView to render properly on the iOS app was tricky. There was a good amount of tinkering with configuration due to the page being served over http on a local area network (LAN). Our ESP8266 Micro-controller is a very nifty device, but such a low cost device comes with strict development rules. The RAM and flash size were puny and special care was needed to be taken to ensure a stable foundation. This meant only being able to use vanilla JS (no Jquery, too big) and keeping code as optimized as possible. We built the live chat room with XHR and Ajax, as opposed to using a websocket, which is more ideal. ## Accomplishments that we're proud of We are proud of our UI design. We think that our app looks pretty dope! We're also happy of being able to integrate many different features into our project. We had to learn about communication between many different tech layers. We managed to design a live chat room that can handle multiple users at once and run it on a micro-controller with 80KiB of RAM. All the code on the micro-controller was designed to be as lightweight as possible, as we only had 500KB in total flash storage. ## What we learned We learned how to code as lightly as possible with the tight restrictions of the chip. We also learned how to start and deploy on Azure, as well as how to interface between our micro-controller and the cloud. ## What's next for Commutr There is a lot of additional functionality that we can add, things like: Presto integration, geolocation, and an emergency alert system. In order to host and serve larger images, the ESP8266' measly 500KB of storage is planning on being upgraded with an SD card module that can increase storage into the gigabytes. Using this, we can plan to bring fully fledged WiFi connectivity to Toronto's underground railway.
## Inspiration My inspiration for creating CityBlitz was getting lost in Ottawa TWO SEPARATE TIMES on Friday. Since it was my first time in the city, I honestly didn't know how to use the O-Train or even whether Ottawa had buses in operation or not. I realized that if there existed an engaging game that could map hotspots in Ottawa and ways to get to them, I probably wouldn't have had such a hard time navigating on Friday. Plus, I wanted to actively contribute to sustainability, hence the trophies for climate charities pledge. ## What it does CityBlitz is a top-down pixelated roleplay game that leads players on a journey through Ottawa, Canada. It encourages players to use critical thinking skills to solve problems and to familiarize themselves with navigation in a big city, all while using in-game rewards to make a positive difference in sustainability. ## How I built it * Entirely coded using Javax swing * All 250+ graphics assets are hand-drawn using Adobe Photoshop * All original artwork * In-game map layouts copy real-life street layouts * Buildings like the parliament and the O-Train station are mimicked from real-life * Elements like taxis and street signs also mimic those of Ottawa ## Challenges I ran into Finding the right balance between a puzzle RPG being too difficult/unintuitive for players vs. spoonfeeding the players every solution was the hardest part of this project. This was overcome through trial and error as well as peer testing and feedback. ## Accomplishments that we're proud of Over 250 original graphics, a fully functioning RPG, a sustainability feature, and overall gameplay. ## What I learned I learned how to implement real-world elements like street layouts and transit systems into a game for users to familiarize themselves with the city in question. I also learned how to use GitHub and DevPost, how to create a repository, update git files, create a demo video, participate in a hackathon challenge, submit a hackathon project, and pitch a hackathon project. ## What's next for CityBlitz Though Ottawa was the original map for CityBlitz, the game aims to create versions/maps centering around other major metropolitan areas like Toronto, New York City, Barcelona, Shanghai, and Mexico City. In the future, CityBlitz aims to partner with these municipal governments to be publicly implemented in schools for kids to engage with, around the city for users to discover, and to be displayed on tourism platforms to attract people to the city in question.
## Inspiration Our frustrations with the lack of transparency regarding where a package "in-transit" actually is. ## What it does PackageHound is a device that improves Canada Post's "in-transit" parcel tracking state to show more accurate status as well as allow better delivery time estimates. Our project is a combination of a WiFi-enabled micro-controller (ESP32) and a mobile app. Our IoT device will be attached to the outside of packages like a shipping label, and whenever the package reaches a new destination (e.g. moved from pre-sort to sort) it will connect to a wireless router and send a message to our server changing its transit state. On our mobile app, users are able to enter the tracking number of their parcels and find them displayed on a map, along with their improved tracking state information. ## How we built it Our physical device is an ESP32 micro-controller programmed to connect to Canada Post WiFi nodes and send the currently connected node to our server, through MQTT to the Solace Message Broker software. Our server is written in Python and hosted on Google Cloud. It uses MQTT and the Solace Message Broker to listen for any changes to package states and update the database, and it handles any requests from our app for a package's state. ## Challenges we ran into Originally our back-end was designed using a REST API and had to be rewritten to use MQTT and the Solace Message Broker. This initially created a large challenge of rewriting our embedded code and our server code. Luckily the sponsors at Solace were very helpful, and when we had difficulties using MQTT they walked us through the process of integrating it within our code. ## Accomplishments that we're proud of We had never used an ESP32 or Solace's Message Broker before, so it was quite challenging to develop for them, let alone combine them into a single project, so we are very proud that we were able to successfully develop our project using both of them. ## What we learned We learnt lots about the Publish/Subscribe messaging model, as well as lots about programming with the ESP32 and utilizing its WiFi functionality. ## What's next for PackageHound We hope to develop PackageHound further with InnovaPost and turn our prototype into a reality!
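As an illustration of the server side described above (a minimal paho-mqtt sketch; the topic layout, broker address and payload fields are assumptions, not our production setup):

```python
# Illustrative server-side listener for package state updates over MQTT.
import json
import paho.mqtt.client as mqtt

package_states = {}          # tracking_number -> last reported facility/state

def on_connect(client, userdata, flags, rc):
    client.subscribe("packagehound/+/state")      # one topic per tracking number

def on_message(client, userdata, msg):
    tracking_number = msg.topic.split("/")[1]
    update = json.loads(msg.payload)              # e.g. {"node": "YVR-sort", "ts": ...}
    package_states[tracking_number] = update
    print(f"{tracking_number} is now at {update['node']}")

client = mqtt.Client()                            # paho-mqtt 1.x style client
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)        # placeholder broker address
client.loop_forever()
```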
winning
## our why Dialects, Lingoes, Creoles, and Acrolects are more than just words, more than just languages - they are a means of cultural immersion, intangible pieces of tradition and history passed down through generations. Remarkably, two of the industry giants lag far behind - Google Translate doesn't support translations for the majority of dialects, and ChatGPT's responses can be likened to a dog meowing or a cat barking. Aiden grew up in Trinidad and Tobago as a native creole (patois) speaker; Nuween grew up in Afghanistan making memories with his extended family in Hazaragi; and Halle and Savvy, though Canadian, show their love and appreciation at home, in Cantonese and Mandarin, with their parents, who are both 1st-gen immigrants. How can we bring dialect speakers and even non-dialect speakers alike together? How can we traverse cultures when the infrastructure to do so isn't up to par? ## pitta-patta, our solution Meet Pitta-Patta—an LLM-powered, voice-to-text web app designed to bridge cultural barriers and bring people together through language, no matter where they are. With our innovative dialect translation system for underrepresented minorities, we enable users to seamlessly convert between standard English and dialects. Currently, we support Trinidadian Creole as our proof of concept, with plans to expand further, championing a cause dear to all of us. ## our building journey Model: Our project is built on a Sequence-to-Sequence (Seq2Seq) model, tailored to translate Trinidadian Creole slang to English and back. The encoder compresses the input into a context vector, while the decoder generates the output sequence. We chose Long Short-Term Memory (LSTM) networks to handle the complexity of sequential data. To prepare our data, we clean it by removing unnecessary prefixes and adding start and end tokens to guide the model. We then tokenize the text, converting words to integers and defining an out-of-vocabulary token for unknown words. Finally, we pad the sequences to ensure they're uniform in length. The architecture includes an embedding layer that turns words into dense vectors, capturing their meanings. As the encoder processes each word, it produces hidden states that initialize the decoder, which predicts the next word in the sequence. Our decode_sequence() function takes care of translating Trinidadian Creole into English, generating one word at a time until it reaches the end. This allows us to create meaningful connections through language, one sentence at a time. Frontend: The front end was built using Streamlit. **Challenges we ran into** 1. This was our first time using Databricks and their services - while we did get TensorFlow up, it was pretty painful to utilize Spark, and attempting to run LLM models within the Databricks environment was also difficult - we eventually abandoned that plan. 2. We had a bit of difficulty connecting the LLM to the backend - a small hiccup along the way, where calling the model would always result in retraining - slight tweaks in the logic fixed this. 3. We had a few issues in training the LLM in terms of the data format of the input - this was fixed with the explicit encoder and decoder logic. **Accomplishments that we're proud of** 1. This was our first time using Streamlit to build the front end, and in the end it was done quite smoothly. 2. We trained an LLM to recognise and complete dialect! ## looking far, far, ahead We envision an exciting timeline for Pitta-Patta.
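To make the data-prep steps above concrete, here is a minimal Keras sketch of the tokenizing, start/end tagging and padding (the sample sentence pairs are illustrative, not from our training set):

```python
# Minimal sketch of the Seq2Seq data prep described above (sample pairs are made up).
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

pairs = [("wha yuh doin", "what are you doing"),
         ("ah comin jus now", "I am coming soon")]

creole = [src for src, _ in pairs]
english = [f"<start> {tgt} <end>" for _, tgt in pairs]    # tokens that guide the decoder

def tokenize(texts):
    tok = Tokenizer(oov_token="<oov>")                    # unknown words map to <oov>
    tok.fit_on_texts(texts)
    seqs = tok.texts_to_sequences(texts)
    return tok, pad_sequences(seqs, padding="post")       # uniform lengths for the LSTM

creole_tok, encoder_input = tokenize(creole)
english_tok, decoder_target = tokenize(english)
print(encoder_input.shape, decoder_target.shape)
```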
Our goal is to develop a Software Development Kit (SDK) that small translation companies can utilize, empowering them to integrate our dialect translation capabilities into their platforms. This will not only broaden access to underrepresented dialects but also elevate the importance of cultural nuances in communication. Additionally, we plan to create a consumer-focused web app that makes our translation tools accessible to everyday users. This app will not only facilitate seamless communication but also serve as a cultural exchange platform, allowing users to explore the richness of various dialects and connect with speakers around the world. With these initiatives, we aim to inspire a new wave of cultural understanding and appreciation. Made with coffee, red bull, and pizza.
## What it does Eloquent has two primary functions, both influenced by a connection between speaking and learning. The first is a public speaking coach, to help people practice their speeches. Users can import a speech or opt to ad-lib — the app will then listen to the user speak. When they finish, the app will present a variety of feedback: whether or not the user talked too fast, how many filler words they used, the informality of their language, etc. The user can take this feedback and continue to practice their speech, eventually perfecting it. The second is a study tool, inspired by a philosophy that teaching promotes learning. Users can import Quizlet flashcard sets — the app then uses those flashcards to prompt the user, asking them to explain a topic or idea from the set. The app listens to the user's response, and determines whether or not the answer was satisfactory. If it was, the user can move on to the next question; but if it wasn't, the app will ask clarifying questions, leading the user towards a more complete answer. ## How we built it The main technologies we used were Swift and Houndify. Swift, of course, was used to build our iOS app and code its logic. We used Houndify to transcribe the user's speech into text. We also took advantage of Houndify's "client matches" feature to improve accuracy when listening for keywords. Much of our NLP analysis was custom-built in Swift, without a library. One feature that we used a library for, though, was keyword extraction. For this, we used a library called Reductio, which implements the TextRank algorithm in Swift. Actually, we used a fork of Reductio, since we had to make some small changes to the build-tools version of the library to make it compatible with our app. Finally, we used a lightweight HTML parsing and searching library called Kanna to web-scrape Quizlet data. ## Challenges we ran into I (Charlie) found it quite difficult to work on an iOS app, since I do not have a Mac. Coding in Swift without a Mac proved to be a challenge, since many powerful Swift libraries and tools are exclusive to Apple systems. This issue was partially alleviated by the decision to do most of the NLP analysis from the ground up, without an NLP library — in some cases though, coding without the ability to debug on my own machine was unavoidable. We also had some difficulties with the Houndify API, but the #houndify Slack channel proved very useful. We ended up having to use some custom animations instead of Houndify's built-in one, but in the end, we solved all functionality issues.
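The app itself is written in Swift, but as a language-agnostic illustration of two of the feedback metrics above, here is a small Python sketch (the filler list and pace threshold are assumptions, not the app's tuned values):

```python
# Simple sketch of two speech metrics: speaking pace and filler-word count.
FILLERS = {"um", "uh", "like", "basically", "actually", "literally"}

def speech_feedback(transcript: str, duration_seconds: float) -> dict:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    wpm = len(words) / (duration_seconds / 60)
    fillers = sum(w in FILLERS for w in words)
    return {
        "words_per_minute": round(wpm),
        "filler_words": fillers,
        "too_fast": wpm > 170,        # assumed comfortable ceiling for a speech
    }

print(speech_feedback("So um basically the results were uh really strong", 4.0))
```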
## Inspiration We both are interested in linguistics and learning languages, but one of the biggest roadblocks to regular practice was that we couldn't find enough entertaining media to watch in a given language. We wanted to build software that could automatically translate the audio of a given video into another language so that we could watch whatever content we wanted in whatever language we wanted to learn. Throughout the process, we realized it could make a huge difference to vision-impaired people who need to get information from a video but are unable to use subtitles. It can also be valuable to people who do not understand the language a given video that does not offer subtitles. ## What it does Given the URL of any YouTube video, a destination language the video is to be translated into, and an output filename, our software writes a translated video to the local disk of whatever machine this code is running on. The proof-of-concept backend could easily be served through a website or mobile app for distribution in the future. ## How we built it Our software takes in the URL of any YouTube video, a destination language the video is to be translated into, and an output filename for the final translated video. It then downloads the video, along with any subtitles that are available from YouTube. If subtitles are not available, we use Google's Web Speech API (via Autosub) to auto-generate subtitles. We then programmatically build a new audio file, parsing the subtitle files for timing information to ensure the audio stays synced with the video. Finally, we strip the audio from the video, use sklearn and scipy to tease apart the voice from the rest of the audio, combine the non-voice audio with the translated speech, and combine with the video component. ## Challenges we ran into One of the biggest challenges was keeping the audio in-sync with the video. We wrote our own parser for the .vtt filetype (a subtitle filetype) and then wrote an algorithm that detects pauses in speech and segments the subtitle text into chunks accordingly. It then uses these chunks to add silence to the audio file if the pause is longer than a certain threshold. Another challenge was separating the voice from the rest of the audio in the original video. We wanted all other sounds, i.e. theme songs, sound effects, and general noise to be included in the final video to maximize similarity to dubbing by a human being. We used Independent Component Analysis and then adjusted the volume level of the noise track to accomplish this. ## Accomplishments that we're proud of We're very proud of the overall impact this project could have for disabled people and foreign language learners. We're also happy with the quality of the final translated video we were able to achieve, and have many plans for future optimizations and services. ## What we learned We learned a lot about best software engineering practices and are proud of the overall organization we were able to achieve in the repo. We also learned a lot about audio/video formats, as well as various machine learning topics. ## What's next for a.tv We plan to host the service on a website and/or mobile app in the coming weeks, as well as further improve synchronization, automatically generate male/female voice based on original video, and possibly even use sentiment analysis to create even more realistic dubs.
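As a simplified illustration of the pause-detection step described above (the cue parsing is reduced to the timestamp lines of a .vtt file, and the 0.7-second threshold is an assumption):

```python
# Hedged sketch: find silences between subtitle cues so the dubbed audio stays in sync.
import re

CUE = re.compile(r"(\d+):(\d+):(\d+)\.(\d+) --> (\d+):(\d+):(\d+)\.(\d+)")

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000

def find_pauses(vtt_text: str, threshold: float = 0.7):
    """Return (gap_start, gap_length) for every inter-cue silence worth preserving."""
    cues = [(to_seconds(*m.groups()[:4]), to_seconds(*m.groups()[4:]))
            for m in CUE.finditer(vtt_text)]
    gaps = []
    for (_, prev_end), (next_start, _) in zip(cues, cues[1:]):
        if next_start - prev_end > threshold:
            gaps.append((prev_end, next_start - prev_end))
    return gaps

sample = """00:00:01.000 --> 00:00:03.200
Hello there.

00:00:05.500 --> 00:00:07.000
Long time no see."""
print(find_pauses(sample))   # [(3.2, 2.3)] -> insert ~2.3 s of silence here
```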
partial
## Inspiration This project was inspired by a deep dive into Kanye West's unreleased discography. I discovered two interesting things that would form the main use cases for my project: 1. Your favorite songs' original samples are so far removed from the listening experience that they might as well not exist. 2. Linear playlist-making leaves no room for meta-expression. I'll make this more concrete with specific examples: 1. I loved "Father Stretch My Hands, Pt. 1" from The Life of Pablo and listened to it frequently, but it took niche YouTube videos for me to be exposed to the original sample, which I quickly fell in love with--I was listening to it when I found out I got into Stanford. Why wasn't it easier for me to be exposed to this? 2. I discovered multiple versions of the popular Kanye album My Beautiful Dark Twisted Fantasy, including a studio leak without overcompressed mastering and a fan edit that extended samples and included exclusive content found only in live shows and the short film he made for the album titled ["Runaway"](https://www.youtube.com/watch?v=Jg5wkZ-dJXA). I wanted a button I could click whenever I want to listen to this album that chooses between these versions and the original at random, but it did not exist. Why can't I manage relationships between groups of songs on a higher level? ## What it does You can edit the code to create a nonlinear playlist out of songs on your computer, linking them together and customizing the transition probabilities. This program can navigate through the playlist, playing the files on your local computer. Empty song nodes make it possible to implement the feature I referred to earlier, which would randomly choose between versions of the album for me to listen to. ## How we built it This project uses os.startfile() to run the audio files, and utilizes Python classes to represent songs and playlists. Dicts are used to store playlist transition probabilities for going to the next song, and a last-in-first-out queue is used to keep track of the play history for rewinding. ## Challenges we ran into No one on our team knew JavaScript, so it has no front-end. I tried learning, but due to time constraints, eventually I just built up a back-end in Python and that seemed to suffice. We struggled to constrain the ambitiousness of the project and were only able to build the MVP of something much broader. ## Accomplishments that we're proud of -Creating an MVP that works! -Iterating and exploring a wide variety of interesting use cases ## What we learned -You don't need to know the "right" programming language before you can make a prototype of something that excites you. ## Next Steps -Building a frontend in JS, click-and-drag to create connections between songs -Spotify API integration: Import your Spotify playlists -Utilize [Samplify](https://github.com/qzdl/Samplify) to create a knowledge-graph of samples to browse & explore
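A minimal sketch of the core idea (song titles and probabilities are illustrative; the real program wraps this in Song/Playlist classes and launches the local files with os.startfile()):

```python
# Weighted nonlinear transitions plus a last-in-first-out rewind stack.
import random

transitions = {
    "Father Stretch My Hands, Pt. 1": {"Father I Stretch My Hands (sample)": 0.4,
                                       "Ultralight Beam": 0.6},
    "Father I Stretch My Hands (sample)": {"Ultralight Beam": 1.0},
    "Ultralight Beam": {"Father Stretch My Hands, Pt. 1": 1.0},
}
history = []                        # play history, most recent last

def next_song(current: str) -> str:
    history.append(current)
    options = transitions[current]
    return random.choices(list(options), weights=options.values())[0]

def rewind() -> str:
    return history.pop()            # step back to the previously played song

song = "Father Stretch My Hands, Pt. 1"
for _ in range(3):
    song = next_song(song)
    print("now playing:", song)
```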
## Inspiration By defining Nostalgia 'a sentimental longing or wistful affection for the past, typically for a period or place with happy personal associations', we (former team) believed that by working backwards – we can find a way to trigger these emotions via music. When listening to music, our brains release serotonin (which is what happens when we have a nostalgic feeling), and by realizing that, we targetted trying to trigger those chemicals releases leading a person to get to this 'nostalgic state'. Since nostalgia is subjective and each person feels it differently based on their past and previous associations, we wanted to be able to predict how to curate the perfect nostalgic playlist for users by using data from their past. ## What it does NostalgicLists uses AI and inputs about a human's past to be able to curate a playlist for a user that's meant to be nostalgic when listened to. ## How we built it At first, we tried to use Cohere but then there were some timeout problems that we couldn't control on our side and decided that ChatGPT was more optimal (but we would've used Cohere had it worked at the time). We give ChatGPT the inputs about a user, ask it to generate a list of songs that are most likely to be nostalgic to the user given their past. With that list, we then do a look up on Spotify's side (via API), get the Spotify IDs of all the songs (from Spotify), then create a new playlist and add those songs to it by ID. ## Challenges we ran into Being able to receive the suggested songs as JSON and parse the list into something that's digestible by Spotify's API. We also ran into lag-times between getting suggestions of songs, then doing lookups for each song. ## Accomplishments that we're proud of Being able to talk to Spotify's API super reliably and not have to do any weird work arounds. ## What we learned Figma, React, Next.js, Prisma, REST API calls. ## What's next for NostalgicLists Being able to pull info from third-party oauth providers (Facebook, Google, etc.) to make it even more personalized and accurate. Another feature can be the ability to have AI-generated nostalgic playlist covers.
## Motivation Our motivation was a grand piano that has sat in our project lab at SFU for the past 2 years. The piano belonged to a friend of Richard Kwok's grandfather and was being converted into a piano-scroll-playing piano. We had an excess of piano scrolls that were acting as door stops, and we wanted to hear these songs from the early 20th century. We decided to pursue a method to digitally convert the piano scrolls into a digital copy of the song. The system scrolls through the entire piano scroll and uses OpenCV to convert the scroll markings to individual notes. The array of notes is converted in near real time to a MIDI file that can be played once complete. ## Technology Scrolling through the piano scroll utilized a DC motor controlled by an Arduino via an H-bridge, with the scroll wound around a Microsoft water bottle. The notes were recorded using OpenCV on a Raspberry Pi 3, programmed in Python. The result was a matrix representing each frame of notes from the Raspberry Pi camera. This array was exported to a MIDI file that could then be played. ## Challenges we ran into The OpenCV pipeline required a calibration method to ensure accurate image recognition. The external environment lighting conditions added extra complexity to the image recognition process. The lack of musical background in the members and the necessity to decrypt the piano scroll for the appropriate note keys was an additional challenge. The image recognition of the notes had to be dynamic for different orientations due to variable camera positions. ## Accomplishments that we're proud of The device works and plays back the digitized music. The design process was very fluid with minimal setbacks. The back-end processes were very well-designed with minimal flaws. Richard won best use of a sponsor technology in a technical pickup line. ## What we learned We learned how piano scrolls were designed and how they were written based on the desired tempo of the musician. We picked up beginner musical knowledge relating to notes, keys and pitches. We learned about using OpenCV for image processing, and honed our Python skills while scripting the controller for our hack. As we chose to do a hardware hack, we also learned about the applied use of circuit design, H-bridges (L293D chip), power management, autoCAD tools and rapid prototyping, friction reduction through bearings, and the importance of sheave alignment in belt-drive-like systems. We also were exposed to a variety of sensors for encoding, including laser emitters, infrared pickups, and light sensors, as well as PWM and GPIO control via an embedded system. The environment allowed us to network with and get lots of feedback from sponsors - many were interested to hear about our piano project and wanted to weigh in with advice. ## What's next for Piano Men Live playback of the system
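For a rough idea of the per-frame note detection (a simplified Python/OpenCV sketch; the column positions, threshold and MIDI note numbers are illustrative, not our calibration values):

```python
# Rough sketch: a dark punched hole in a note's column means the note is active
# in this frame. Column x-positions and the sampling row are placeholders.
import cv2
import numpy as np

NOTE_COLUMNS = {60: 112, 62: 131, 64: 150, 65: 169}   # MIDI note -> x pixel of its track

def notes_in_frame(frame_bgr, row=240, window=6, dark_fraction=0.5):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY_INV)  # holes become white
    active = []
    for midi_note, x in NOTE_COLUMNS.items():
        patch = binary[row - window:row + window, x - window:x + window]
        if np.mean(patch > 0) > dark_fraction:
            active.append(midi_note)
    return active

frame = cv2.imread("scroll_frame.jpg")        # placeholder camera frame
print(notes_in_frame(frame))                  # e.g. [60, 64]
```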
losing
# ReactOnFly ## Inspiration Everyone wants to browse through their Facebook newsfeed, reacting to different kinds of posts. Even though clicking may seem easy, wouldn't it be easier if all you had to do was smile? We thought it'd be cool to work with Microsoft's Emotion API, and with a bit of facial detection we could determine a person's happiness level and help make a natural reaction to a particular post for them. ## What it does This Chrome extension takes pictures of the user every 2 seconds, and the Microsoft Emotion API with Microsoft Azure storage determines if the user is smiling at their screen/at their friends' Facebook posts. If the user is recorded to be smiling for a certain duration, the Chrome extension will automatically like the Facebook posts on the screen. -- The Chrome extension will "react on fly!" ## How we built it We used PyGame to take a photo of the user every two seconds. This photo is then uploaded to **Microsoft Azure**, which uses the **Microsoft Cognitive Services API to determine if the user is smiling.** If the user is smiling, it returns a true value which is then picked up by the Chrome extension. The extension keeps polling the Python-Flask application every five seconds to detect any changes in emotion. For happy faces, *the Facebook post is automatically liked on behalf of the user.* ## Challenges we ran into The common programming language among our group members was Python, so we had to implement most of the functionality using this language. We planned to use OpenCV to activate the camera, but we decided not to go with it because it took too much time to build. Therefore we chose PyGame instead. We spent much time looking for the Azure SDK for Python and tried to find ways to put our application on the Azure cloud. When we finally started to consolidate all of the small pieces of our functionality, our PyGame program stopped taking photos, but we fixed it soon after (phew). ## Accomplishments that we're proud of We are glad that we worked together and we had such a great time. There were a lot of problems, but we did not give up and solved them one by one. ## What we learned We learned how to use the Microsoft Cognitive Services API to analyze people's facial expressions. We also learned how to use the Facebook API to like a post automatically and how to develop a Chrome extension. All of us had a fun time working together. ## What's next for ReactOnFly We plan to add more functionality in order to make it more user-friendly, based on the feedback we receive from our users
## Inspiration Our goal was to implement a social feature that would attract students to Radish's services. We were inspired by McGill's Engineering building ice cream store. They give out free ice cream when you fail an exam. ## What it does It asks for the person to submit proof that they failed, checks it, and gives out a discount code for any restaurant affiliated with Radish. ## How we built it Front-end: React Backend: Python ## Challenges we ran into Animations, connecting front-end and backend to make the feature functional. ## Accomplishments that we're proud of Used React for the first time, and general resilience. ## What we learned Image processing, React, and how to search for resources online efficiently. ## What's next for Radishes & Failures Connecting front-end with back-end, connecting to Radish's own platform, bringing comfort in failure :)
## Inspiration In the current media landscape, control over distribution has become almost as important as the actual creation of content, and that has given Facebook a huge amount of power. The impact that the Facebook newsfeed has on the formation of opinions in the real world is so huge that it potentially affected the 2016 election decisions; however, these newsfeeds were not completely accurate. Our solution? FiB, because With 1.5 Billion Users, Every Single Tweak in an Algorithm Can Make a Change, and we don't stop at just one. ## What it does Our algorithm is twofold, as follows: **Content-consumption**: Our Chrome extension goes through your Facebook feed in real time as you browse it and verifies the authenticity of posts. These posts can be status updates, images or links. Our backend AI checks the facts within these posts and verifies them using image recognition, keyword extraction, source verification, and a Twitter search to verify whether a posted screenshot of a Twitter update is authentic. The posts are then visually tagged on the top right corner in accordance with their trust score. If a post is found to be false, the AI tries to find the truth and shows it to you. **Content-creation**: Each time a user posts/shares content, our chatbot uses a webhook to get a call. This chatbot then uses the same backend AI as content consumption to determine if the new post by the user contains any unverified information. If so, the user is notified and can choose to either take it down or let it exist. ## How we built it Our Chrome extension is built using JavaScript and uses advanced web scraping techniques to extract links, posts, and images. This is then sent to an AI. The AI is a collection of API calls that we collectively process to produce a single "trust" factor. The APIs include Microsoft's Cognitive Services such as image analysis, text analysis and Bing web search, Twitter's search API, and Google's Safe Browsing API. The backend is written in Python and hosted on Heroku. The chatbot was built using Facebook's wit.ai. ## Challenges we ran into Web scraping Facebook was one of the earliest challenges we faced. Most DOM elements in Facebook have div IDs that constantly change, making them difficult to keep track of. Another challenge was building an AI that knows the difference between a fact and an opinion so that we do not flag opinions as false, since only facts can be false. Lastly, integrating all these different services, in different languages, together using a single web server was a huge challenge. ## Accomplishments that we're proud of All of us were new to JavaScript, so we all picked up a new language this weekend. We are proud that we could successfully web scrape Facebook, which uses a lot of techniques to prevent people from doing so. Finally, the flawless integration we were able to create between these different services really made us feel accomplished. ## What we learned All concepts used here were new to us. Two people on our team are first-time hackathoners and learned completely new technologies in the span of 36 hours. We learned JavaScript, Python, Flask servers and AI services. ## What's next for FiB Hopefully this can be better integrated with Facebook and then be adopted by other social media platforms to make sure we stop believing in lies.
losing
Handling personal finances can be a challenging task, and there doesn't exist a natural user experience for engaging with your money. Online banking portals and mobile apps are one-off interactions that don't help people manage their money over the long term. We solved this problem with Alex. We built Alex with the goal of making it easier to stay on top of your finances through a conversational user interface. We believe that the chatbot as a layer of abstraction over financial information will make managing budgets and exploring transactions easier tasks for people. Use Alex to look at your bank balances and account summary. See how much you spent on Amazon over the last two months, or take a look at all of your restaurant transactions since you opened your account. You can even send money to your friends. There were a few technically-challenging problems we had to solve while building Alex. We had to handle OAuth2 and other identification tokens through Facebook and bank account information to ensure security. Allowing the user to make queries in natural language required machine learning and training a model to identify different intents and parameters within a sentence. We even attempted to build a custom solution to maintain long-term memory for our bot—a still unsolved problem in natural language processing. Alex is first and foremost a consumer product, but we believe that it provides value beyond the individual. With some additions, banks could use Alex to handle their customer support, saving countless hours of phone calls and wasted time on both ends. In a business setting, banks could learn much more about their customers' behavior through interactions with Alex.
## Inspiration Our inspiration came from seeing how overwhelming managing finances can be, especially for students and young professionals. Many struggle to track spending, stick to budgets, and plan for the future, often due to a lack of accessible tools or financial literacy. So, we decided to build a solution that isn't just another financial app, but a tool that empowers individuals, especially students, to take control of their finances with simplicity, clarity, and efficiency. We believe that managing finances should not be a luxury or a skill learned through trial and error, but something that is accessible and intuitive for everyone ## What it does Sera simplifies financial management by providing users with an intuitive dashboard where they can track their recent transactions, bills, budgets, and overall balances - all in one place. What truly sets it apart is the personalization, AI-powered guidance that goes beyond simple tracking. Users receive actionable recommendations like "manage your budget" or "plan for retirement" based on their financial activity With features like scanning receipts via QR code and automatic budget updates, we ensure users never miss a detail. The AI chatbot, SeraAI, offers tailored financial advice and can even handle tasks like adding transactions or adjusting budgets - making complex financial decisions easy and stress-free. With a focus on accessibility, Sera makes financial literacy approachable and actionable for everyone. ## How we built it We used Next.js with TailwindCSS for a responsive, dynamic UI, leveraging server-side rendering for performance. The backend is powered by Express and Node.js, with MongoDB Atlas for scalable, secure data storage. For advanced functionality, we integrated Roboflow for OCR, enabling users to scan receipts via QR codes, automatically updating their transactions, Cerebras handles AI processing, powering SeraAI, our chatbot that offers personalized financial advice and automates various tasks on our platform. In addition, we used Tune to provide users with customized financial insights, ensuring a proactive and intuitive financial management experience ## Challenges we ran into Integrating OCR with our app posed several challenges, especially when using Cerebras for real-time processing. Achieving high accuracy was tricky due to the varying layouts and qualities of receipts, which often led to misrecognized data. Preprocessing images was essential; we had to adjust brightness and contrast to help the OCR perform better, which took considerable experimentation. Handling edge cases, like crumpled or poorly printed receipts, also required robust error-checking mechanisms to ensure accuracy. While Cerebras provided the speed we needed for real-time data extraction, we had to ensure seamless integration with our user interface. Overall, combining OCR with Cerebras added complexity but ultimately enhanced our app’s functionality and user experience. ## Accomplishments that we're proud of We’re especially proud of developing our QR OCR system, which showcases our resilience and capabilities despite challenges. Integrating OCR for real-time receipt scanning was tough, as we faced issues with accuracy and image preprocessing. By leveraging Cerebras for fast processing, we overcame initial speed limitations while ensuring a responsive user experience. This accomplishment is a testament to our problem-solving skills and teamwork, demonstrating our ability to turn obstacles into opportunities. 
Ultimately, it enhances our app’s functionality and empowers users to manage their finances effectively. ## What we learned We learned that financial education isn’t enough, people need ongoing support to make lasting changes. It’s not just about telling users how to budget; it’s about providing the tools, guidance, and nudges to help them stick to their goals. We also learned the value of making technology feel human and approachable, particularly when dealing with sensitive topics like money. ## What's next for Sera The next steps for Sera include expanding its capabilities to integrate with more financial platforms and further personalizing the user experience to provide everyone with guidance and support that fits their needs. Ultimately, we want Sera to be a trusted financial companion for everyone, from those just starting their financial journey to experienced users looking for better insights.
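As a rough illustration of the brightness/contrast preprocessing described in the challenges above, here is a minimal sketch using OpenCV; Sera's actual pipeline (and its Roboflow/Cerebras integration) is not shown, and the constants are illustrative starting points rather than tuned values.

```python
import cv2

def preprocess_receipt(path, alpha=1.4, beta=20):
    """Boost contrast (alpha) and brightness (beta), then binarize, so faded
    thermal-paper receipts read more reliably in OCR.
    The constants are illustrative starting points, not tuned values."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    adjusted = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    # Adaptive thresholding copes with uneven lighting on crumpled receipts.
    binary = cv2.adaptiveThreshold(
        adjusted, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=10)
    return binary

cv2.imwrite("receipt_clean.png", preprocess_receipt("receipt_raw.jpg"))
```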
## Inspiration Most users do not get optimal use out of their bank account plans, and they are unaware of better choices in the market. For example, a user who does not need more than 35 transactions per month could save up to $200.00 annually without sacrificing anything. We therefore designed BankingCat, which analyzes user behaviour from banking statements, provides an interactive conversation through a chatbot, and outputs the best financial solutions across all Canadian banks. ## How we built it We use Bootstrap and jQuery to style our frontend for the landing page and chatbot window, deploy the chatbot with natural language processing on Google's Dialogflow, and connect it to our backend Node.js server, which we host on Google Cloud Platform with MongoDB. ## Challenges we ran into The main problem arose in data analysis: we had a difficult time figuring out the logic for analyzing banking statements. ## Accomplishments that we're proud of Our MongoDB instance and backend web server are successfully deployed on Google Cloud Platform, and we are able to send messages to the frontend. This excites us since it is the first time we have managed to host a backend on Google Cloud Platform. ## What we learned Not only did we gain experience creating multiple server instances on Google Cloud Platform, we also obtained a deeper understanding of the underlying logic behind chatbots, training an intelligent chatbot by harnessing the power of natural language processing. Moreover, we became more familiar with JavaScript as well as other web development technologies such as HTML, CSS, and Bootstrap. ## What's next for BankingCat We will focus on bringing advanced intelligence into BankingCat's data analysis logic by introducing Azure AI and cognitive services.
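To make the kind of analysis BankingCat performs concrete, here is a minimal sketch of how observed monthly transaction counts could be matched against account plans to estimate annual cost; the plan data and overage fee are hypothetical, not real bank pricing or BankingCat's actual logic.

```python
# Hypothetical plan data -- not real bank pricing.
PLANS = [
    {"bank": "Bank A", "name": "Basic",     "monthly_fee": 3.95,  "included_txns": 12},
    {"bank": "Bank A", "name": "Unlimited", "monthly_fee": 14.95, "included_txns": float("inf")},
    {"bank": "Bank B", "name": "Student",   "monthly_fee": 0.00,  "included_txns": 25},
]

def cheapest_plan(monthly_txn_counts, overage_fee=1.25):
    """Pick the plan with the lowest estimated annual cost for the user's
    observed behaviour (a list of transaction counts, one per month)."""
    def annual_cost(plan):
        cost = 0.0
        for n in monthly_txn_counts:
            extra = max(0, n - plan["included_txns"])
            cost += plan["monthly_fee"] + extra * overage_fee
        # Scale a partial history up to a full year.
        return cost * 12 / len(monthly_txn_counts)
    return min(PLANS, key=annual_cost)

print(cheapest_plan([18, 22, 15]))  # a light user is steered to a cheaper plan
```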
partial
## Inspiration This project came about because I have very limited data on my phone plan, and when I'm outdoors I still need access to transit information and quick searches. ## What it does Guardian SMS allows you to access the powerful Google Assistant through SMS. You can send your searches to a phone number and receive Google Assistant's response. It also allows you to get routing information for public transit: just send it your starting point and destination and it'll get you easy-to-follow instructions through Here's navigation API. ## How we built it We built this tool through STDLIB's serverless API back-end service. The SMS messages are sent using MessageBird's API. Public transit routing information is obtained through Here's routing API. We access Google Assistant through the google-assistant npm package inside an Express.js application. ## Challenges we ran into We had difficulties setting up OAuth correctly within our Node application. This was necessary to access Google Assistant. Through persistent debugging and help from a very kind Top Hat mentor we were able to figure out how to set it up correctly. Another issue was deploying our Node app to Heroku. To execute our app we needed a package that had to be installed through apt-get, and we found out that Heroku doesn't support apt-get commands without third-party helpers; even after we got the helper, the libasound2-dev package was unavailable. With an hour left before submission we decided to use ngrok for tunneling between our STDLIB API and our Node app, which was executing locally. ## Accomplishments that we're proud of We're very proud of how far we got. Every step of the way was a thrill. Getting the navigation service going through SMS was amazing, and then getting Google Assistant to be accessible through text, without internet access, was mind-blowing. ## What we learned We learned that we can accomplish a lot on 4 hours of sleep. We learned how useful STDLIB is and how powerful Node.js can be. ## What's next for Guardian SMS Next we want to create a supporting web app through which other people can use this service by linking their Google accounts. We also want better navigation support so that users can select between public transit, biking, walking, or driving when they are requesting directions.
## Inspiration *"I have an old Nokia mobile phones, that doesn't have internet access nor acess to download & install the Lyft app; How can I still get access to Lyft?"* > > Allow On-Demand Services Like Uber, Lyft to be more mainstream in developing world where there is limitied to no internet access. Lyft-powered SMS. > > > ## What it does > > Have all the functionalities that a Lyft Application would have via SMS only. No wifi or any type of internet access. Functionalities include and are not limited to request a ride, set origin and destination, pay and provide review/feedback. > > > ## How I built it > > Used Google Polymer to build the front end. For the backend we used the Lyft API to take care of rides. The location/address have been sanitize using Google Places API before it gets to the Lyft API. The database is powered by MongoDB, spun off the application using Node.js via Cloud9 cloud IDE. Finally, Twilio API which allow user/client to interface with only SMS. > > > ## Challenges I ran into > > The Lyft API did not have a NodeJS wrapper so we had to create our own such that we were able to perform all the necessary functions needed for our project. > > > ## Accomplishments that I'm proud of > > Our biggest accomplishment has to be that we completed all of our objectives for this project. We completed this project such that it is in a deployable state and anybody can test out the application from their own device. In addition, all of us learned new technologies such as Google Polymer, Twilio API, Lyft API, and NodeJS. > > > ## What I learned > > Emerging markets > > > ## What's next for Lyft Offline > > We plan to polish the application and fix any bugs found as well as get approval from Lyft to launch our application for consumers to use. > > > ## Built With * Google Polymer * Node.js * Express * MongoDB * Mongoose * Passport * Lyft API & Auth * Google API & user end-points * Twilio API
## Inspiration We were inspired to create this project because of our own personal experiences with public transportation. As frequent users, we often missed important announcements or our stop due to unclear audio or distractions. This realization made us imagine how much more challenging this must be for deaf individuals who rely on visual cues to access the same information. This sparked the idea for our project - an application that uses QR codes to provide real-time transcription and categorization of the conductor's words to improve accessibility and inclusivity for deaf individuals on public transportation. We aim to bridge the communication gap and empower marginalized communities through this innovative solution. ## What it does The application works by allowing users to scan a QR code on public transportation, which will then subscribe them to a live transcription and categorization of the conductor's words. The transcription will be in real-time and users will be able to access it through their devices. This will enable individuals that have trouble understanding to have access to the same information as the hearing individuals, such as announcements, stops, and other important information. The application also provides a categorization feature that will help users to understand the nature of the announcements. ## How we built it Our application consists of two major components: ### A Python service * The service is hooked to a laptop microphone stream that is processed in real-time by Rev.ai. * The resulting close caption is then classified by Cohere to identify the type of announcement. + Next-stop announcement (e.g. next stop is Oakville GO) + Assistance announcement (e.g. coaches 26 and 27 will not be opening at Union Station) + Delay announcement (e.g. the train will be arriving 10 minutes late due to a delay at Oshawa GO) * The classification and the announcement itself are then posted to a Firestore collection. ### A React-Native mobile application * Scans a QR code (e.g. a QR code on the back of your seat on the train) * Subscribes to the Firestore collection based on the QR code you scan * Takes the latest logs from the store and displays them to users as a card that is color-coded depending on the type of announcement! 😱 + Due to it being a live data collection, Firebase will update the observer (our app) if there is new data 🎉🎉🎉 * There is also a summary feature where all the recent announcements are summarized using Cohere and displayed in an easy-to-digest format! 🫡 ## Challenges we ran into During the development of our project, we faced several challenges. One of the first challenges we encountered was setting up React Native in our development environment. This required a significant amount of time and effort to configure and troubleshoot. Another challenge we faced was with the Rev.ai API. Initially, we had difficulty figuring out how to live-stream audio, which was crucial for the real-time transcription feature of our application. Additionally, we struggled with designing our system in a way that was both practical and feasible for the demo, while also avoiding over-engineering. This required a lot of thought and experimentation to find the right balance. All in all, these challenges tested our problem-solving skills, resilience, and determination but helped us to improve and refine our project to make it better. ## Accomplishments that we're proud of We are proud of several accomplishments we achieved while working on this project. 
One of our main achievements was successfully implementing the live-stream transcription feature using the Rev.ai API. This was a significant challenge that required a lot of effort and determination to overcome. Additionally, we are proud of the design of our application, which is user-friendly and easy to navigate, making it accessible to all users. We also managed to implement the QR code feature, which allows the user to subscribe to the live transcription and categorization service. Additionally, we're proud of creating something that can make a real impact on accessibility and inclusivity for deaf individuals on public transportation. We believe that our project can help to bridge the communication gap and empower marginalized communities. ## What we learned During this hackathon, we learned several valuable lessons. One of the most significant things we learned was the importance of trying new technologies. We all had the opportunity to work with Firestore, which was a new library for us, and it was an amazing experience to work with it. Rev.ai was also a technology that we never thought would have existed until this hackathon. Furthermore, using Cohere opened our eyes to the never-ending capabilities of large language models. We also learned about the value of **exploring** existing concepts and finding ways to bring innovation to the table. This project allowed us to explore new ways of solving a problem that affects many people and showed us how technology can be used to empower marginalized communities and break down barriers to inclusion. Furthermore, we also learned about the importance of user-centered design and the balance between practicality and feasibility when developing a product. Overall, this hackathon was an enriching learning experience that helped us to develop our skills and gain new insights. ## What's next for Hear.Me The next steps for our project include pitching the idea to government organizations such as the Government of Ontario (GO) and the TTC. We believe that our project can make a real impact on accessibility and inclusivity for deaf individuals on public transportation, and we want to explore opportunities to implement it on a larger scale. We will reach out to these organizations and present our project, highlighting its potential benefits and discussing potential implementation strategies. Additionally, we will continue to improve and refine the application based on feedback and testing. We will also explore other potential partnerships and collaborations that could help us to bring this innovative solution to more people. Overall, our goal is to bring our application to as many public transportation systems as possible, to empower marginalized communities, and break down barriers to inclusion.
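To illustrate the shape of the Python service described above, here is a simplified sketch that classifies a finished caption and appends it to a Firestore collection using the firebase_admin SDK; the keyword rules are a stand-in for the Cohere classification step, and the collection layout and category labels are illustrative assumptions rather than Hear.Me's exact schema.

```python
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("service-account.json"))
db = firestore.client()

def classify(caption: str) -> str:
    """Stand-in for the Cohere classifier: bucket a caption into one of the
    three announcement types listed above."""
    text = caption.lower()
    if "delay" in text or "late" in text:
        return "delay"
    if "not be opening" in text or "assistance" in text:
        return "assistance"
    return "next-stop"

def publish(trip_id: str, caption: str) -> None:
    """Append the classified announcement; apps subscribed to this trip
    (via the QR code they scanned) receive it as a live update."""
    db.collection("trips").document(trip_id).collection("announcements").add({
        "text": caption,
        "category": classify(caption),
        "timestamp": firestore.SERVER_TIMESTAMP,
    })
```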
partial
### Inspiration Quizcribe is inspired by the problem that many students face in retaining the knowledge they learned from class lectures or open-course videos. Through AI-powered summaries and interactive quizzes, we hope to make learning more effective and strengthen self-regulated learning. ### What it does Quizcribe is educational software that supplements students with AI-powered study resources generated from inputted videos or audio. By entering a link to a video, such as one from YouTube, Quizcribe automatically transcribes the video, returns a detailed summary of its content, and generates an interactive quiz game for knowledge testing and review. ### How we built it * Frontend: Next.js/React * Backend: Django/Python * APIs: Deepgram (Speech to Text), Google Gemini 1.5, YouTube DL ### Challenges we ran into * Extracting the video URL from the web URL (Resolved) * Gemini prompt design and output format had to be very precise (Resolved) * Frontend and backend connection issues with generated interactive quizzes (Resolved) ### Accomplishments that we're proud of * The summaries and interactive quizzes generated by Quizcribe are very accurate to the content of the inputted video/audio * Our team was able to extend Quizcribe's video transcription and processing service to 16 different languages ### What we learned * Fast-paced app development * Use of AI and LLMs in video/audio transcription and processing * Using Next.js for frontend development * Developing REST APIs in Django * Version control with Git ### What's next for Quizcribe * Support for video file upload and other URL domains e.g. Coursera, Khan Academy, Zoom * Enable translation to languages other than English * Expanding support for more than 16 languages * Increasing the efficiency and accuracy of video transcription and summary/quiz generation
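As a rough sketch of the summary/quiz generation step, the snippet below prompts Gemini for a strict JSON payload and parses it; it assumes the google-generativeai Python SDK, and the model name, prompt wording, and JSON shape are illustrative rather than Quizcribe's exact implementation.

```python
import json
import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

def make_quiz(transcript: str, n_questions: int = 5):
    """Ask Gemini for a summary plus a quiz in a strict JSON shape so the
    frontend can render it directly; the prompt wording is illustrative."""
    prompt = (
        "Summarize the lecture transcript below, then write "
        f"{n_questions} multiple-choice questions about it. "
        'Reply with JSON only: {"summary": str, "questions": '
        '[{"question": str, "options": [str, str, str, str], "answer": int}]}.\n\n'
        + transcript
    )
    raw = model.generate_content(prompt).text.strip()
    # Tolerate a fenced code block around the JSON, which LLMs often emit.
    raw = raw.removeprefix("```json").removesuffix("```").strip()
    return json.loads(raw)
```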
## Inspiration Through meeting hundreds and hundreds of learners from all over the world in our time as university students, we have slowly realized the variability that exists in education systems around the world. After many conversations, it is apparent that in many areas, teenagers and young adults do not have anywhere near the quality of education that one may find at an accredited four-year university -- an experience that costs thousands of dollars. With such a boom in artificial intelligence and research in educational best practices, we came up with a way to allow all those who yearn for this learning to have it. As such, we sought to build an AI-powered tutor with a human touch to create an engaging, community-based learning experience. The goal is to help students, no matter where they are, get real-time guidance through a platform that is not just instructional, but also one that is engaging and that pushes students to want to learn more. We believed that our combined experience in machine learning and fullstack development made us fit for the job. ## What it does Zina is a **personalized group instructor** that uses video and audio sessions to guide student learning. Students can generate course topics of their choice and be in a classroom with peers of their choice, putting the power of learning into the hands of the students. Our application supports features such as lesson plan generation, live video streaming, and questions and answers. ## How we built it Tools: * Next.js + React.js * Tailwind CSS * Flask * Socket.IO * WebRTC * Deepgram * Agora Video API * Chroma DB (multimodal data) * Langchain ## Challenges we ran into One of the biggest challenges was ensuring speech-to-text and text-to-speech communication was effective and engaging. Handling real-time communication across various network speeds was tricky, and balancing performance with AI processing for live transcription and analysis was another technical hurdle. Integrating the Deepgram API and ensuring the seamless operation of AI services without lag was the main challenge here. Additionally, ensuring the platform worked equally well across different devices and operating systems required extensive testing and optimization. ## Accomplishments that we're proud of We are proud of the seamless integration between AI-powered transcription, real-time feedback, and live tutoring sessions. The combination of voice recognition and real-time analysis to help tutors adjust lessons based on students' needs was a significant accomplishment. ## What we learned The importance of scalability in real-time applications. Building for low latency while incorporating AI services that require heavy computational resources taught us how to optimize both frontend and backend processes in short time frames. We also gained insights into how students and tutors interact in virtual settings, helping us design a more intuitive user experience. This project also reinforced our knowledge of WebRTC and how to leverage AI to enhance human interaction in education. ## What's next for Zina Next, we hope to expand on our philosophy of quality education by engaging in continuous research in education practices to ensure that students' needs are met. Due to the time constraints, we were forced to cut corners, which made us all the more eager to go the full length to make this project complete.
## Inspiration Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist: re-innovating how mobile games are played! ## What it does AR.cade uses a body-part detection model to track movements that correspond to controls for classic games that are run in an online browser. The user can choose from a variety of classic games such as Temple Run and Super Mario, and play them with their body movements. ## How we built it * The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe * Next, based on the positions of and angles between the landmarks, we created classification functions that detected specific movements, such as when an arm or leg was raised or the user jumped * Then we mapped these movement identifications to keybinds on the computer. For example, when the user raises their right arm it corresponds to the right arrow key * We then embedded some online games of our choice into our front end, and when the user makes a certain movement which corresponds to a certain key, the respective action happens * Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play ## Challenges we ran into A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API and it worked with a basic plain video; however, the difficulties arose when we tried to pass the video with the body-tracking model overlay on it. ## Accomplishments that we're proud of We are proud of the fact that we have a functioning product, in the sense that multiple games can be controlled with the body-part commands of our specification. Thanks to threading optimization there is little latency between user input and video output, which was a fear when starting the project. ## What we learned We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites. We learned how to map landmark node positions into meaningful movement classifications considering positions and angles. We learned how to resize, move, and give priority to external windows such as the video output window. We learned how to run Python files from JavaScript to make automated calls to further processes. ## What's next for AR.cade The next steps for AR.cade are to implement a more accurate body tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs, such as Fortnite or Minecraft.
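As a rough sketch of the landmark-to-keybind idea described above, the snippet below presses the right arrow key when the right wrist is detected above the right shoulder; it uses MediaPipe's pose solution and pyautogui, and the gesture threshold and keybind are illustrative rather than AR.cade's actual values.

```python
import cv2
import mediapipe as mp
import pyautogui

mp_pose = mp.solutions.pose
pose = mp_pose.Pose()
cap = cv2.VideoCapture(0)

# Runs until the camera stops delivering frames; each detected "arm raised"
# gesture is translated into the key the embedded browser game expects.
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
        shoulder = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
        # Image y grows downward, so a raised arm has a smaller y value.
        if wrist.y < shoulder.y:
            pyautogui.press("right")
cap.release()
```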
losing
## Inspiration Seeing the increased government surveillance and internet crackdown in Iran, we sought to provide a platform that supports the protestors by granting them increased anonymity and privacy. ## What it does We make use of the unique properties of web3 and blockchain technology to drive a decentralized and government-free platform. ## How we built it We used Flask, HTML and CSS. ## Challenges we ran into Finding an event-sponsored blockchain framework that supports JavaScript. Developing an HTML interface for the first time. ## Accomplishments that we're proud of Creating a running website in just 24 hours, while getting enough sleep for the midterms we have coming up! ## What we learned To use Flask, HTML and CSS. ## What's next for Unrest Implement anonymous chats between protestors and anonymous funding for protests.
## Inspiration Our project is driven by a deep-seated commitment to address the escalating issues of hate speech and crime in the digital realm. We recognized that technology holds immense potential to play a pivotal role in combating these societal challenges and nurturing a sense of community and safety. ## What It Does Our platform serves as a beacon of hope, empowering users to report incidents of hate speech and crime. In doing so, we have created a vibrant community of individuals wholeheartedly devoted to eradicating such toxic behaviors. Users can not only report but also engage with the reported incidents through posts, reactions, and comments, thereby fostering awareness and strengthening the bonds of solidarity among users. Furthermore, our platform features an AI chatbot that simplifies and enhances the reporting process, ensuring accessibility and ease of use. ## How We Built It The foundation of our platform is a fusion of cutting-edge front-end and back-end technologies. The user interface came to life through the MERN stack, ensuring an engaging and user-friendly experience. The backend infrastructure, meanwhile, was meticulously crafted using Node.js, providing robust support for our APIs and server-side operations. To house the wealth of user-generated content, we harnessed the prowess of MongoDB, a NoSQL database. Authentication and user data privacy were fortified through the seamless integration of Auth0, a rock-solid authentication solution. ## Challenges We Ran Into Our journey was not without its trials. Securing the platform, effective content moderation, and the development of a user-friendly AI chatbot presented formidable challenges. However, with unwavering dedication and substantial effort, we overcame these obstacles, emerging stronger and more resilient, ready to tackle any adversity. ## Accomplishments That We're Proud Of Our proudest accomplishment is the creation of a platform that emboldens individuals to stand up against hate speech and crime. Our achievement is rooted in the nurturing of a safe and supportive digital environment where users come together to share their experiences, ultimately challenging and combatting hatred head-on. ## What We Learned The journey was not just about development; it was a profound learning experience. We gained valuable insights into the vast potential of technology as a force for social good. User privacy, effective content moderation, and the vital role of community-building have all come to the forefront of our understanding, enhancing our commitment to addressing these critical issues. ## What's Next for JustIT The future holds exciting prospects for JustIT. We envision expanding our platform's reach and impact. Plans are underway to enhance the AI chatbot's capabilities, streamline the reporting process, and implement more robust content moderation techniques. Our ultimate aspiration is to create a digital space that is inclusive, empathetic, and, above all, safe for everyone.
**Inspiration** We wanted to help the invisible people of Toronto. Many homeless people do not have identification and often have a hard time keeping it because their belongings get stolen. This prevents many homeless people from getting the care that they need and from accessing resources that an ordinary person never needs to think about. **How** Our application would be set up as booths or kiosks within pharmacies or clinics so homeless people can be verified easily. We wanted to keep our patients' information secure and tamper-proof, so we used the Ethereum blockchain and compare the on-chain record with the patient's information in our database to ensure they are the same; otherwise we know there were edits or a breach. **Impact** This would solve problems such as homeless people getting the prescriptions they need at local clinics and pharmacies. Shelters would also benefit, as our application can track a person's age, medical visits, allergies, and past medical history. **Technologies** For our facial recognition we used FaceNet and TensorFlow to train our models. For our back end we used Python/Flask to communicate with FaceNet and Node.js to handle our routes on our site. Ether.js handled most of our back-end code dealing with the smart contract for our blockchain. We used Vue.js for our front end to style our site.
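As a simplified illustration of the tamper check described above (the project itself does this with Ether.js against a smart contract), the sketch below hashes a patient record deterministically and compares it with a previously anchored hash; the function names and record fields are hypothetical.

```python
import hashlib
import json

def record_fingerprint(record: dict) -> str:
    """Deterministic SHA-256 of a patient record; key order and separators
    are fixed so the same data always hashes to the same value."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(db_record: dict, onchain_hash: str) -> bool:
    """True if the database copy still matches the fingerprint that was
    anchored on the blockchain when the record was created."""
    return record_fingerprint(db_record) == onchain_hash

# Example: any edit to the database copy breaks the match.
patient = {"name": "Jane Doe", "allergies": ["penicillin"], "visits": 3}
anchored = record_fingerprint(patient)
patient["visits"] = 4
print(verify(patient, anchored))  # False -> the record was modified
```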
losing
## Inspiration Because many language-learning applications are geared towards a specific kind of user, we wanted to make an application that takes into account people's various affinities for different methods of learning. ## What it does The app asks the user a series of questions to discern their learning type, then recommends them language-learning exercises based on their learning type. The user is free to switch between these types to find the one which best suits them. ## How we built it We built this app using Flutter, written in Dart. Original art and other assets were all created using Clip Studio Paint. Prototyping was done in Figma, while testing was done using Android Studio emulators. A few free icons created by Kiranshastry, Roundicons, and Freepik were used for the learner quiz result pages. ## Challenges we ran into Setting up Flutter was challenging, and Kacey's computer had issues with memory and Android Studio. The learning curve for Dart and Flutter was quite steep. We also lacked experience in handling user data. ## Accomplishments that we're proud of Getting Flutter to work, and getting the application to look mostly the way we want it to. ## What we learned How to use Flutter & Dart, how to prototype (on Figma and on paper), how to set up an emulator using Android Studio. ## What's next for Language Like You Continuing development and furthering our skills in Dart & Flutter.
## Inspiration for Green Been A Yale research team found that organic waste occupies the largest percentage of what's being dumped into landfills. Organic waste in landfills is not only harmful environmentally, but also economically. For example, it costs New York City over $400 million to collect and dispose of its trash, but the city could be saving millions since the majority of that trash is actually organic matter. Organic material brought to landfills ends up rotting and releasing methane, which is not ideal as methane is more than 25 times as potent as carbon dioxide (epa.gov). On the other hand, composting brings carbon to the soil and limits the amount of methane entering the air. Getting into composting can be daunting, and part of the process is knowing what to compost - Green Been was created to solve this problem. ## What it does Green Been uses a custom-trained TensorFlow deep learning model to predict whether or not you should compost your item. The user simply uploads an image and Green Been predicts (with nearly perfect accuracy) whether the user should compost their item. The model takes into account many factors; for example, paper can be both composted and recycled, but it is overall more economically/environmentally efficient to recycle it. ## How I built it I built Green Been using * TensorFlow to predict whether the user should compost their item * HTML, CSS, and JavaScript for the frontend of the page * Python for the backend * Flask and Heroku for deployment ## Challenges I ran into I faced many challenges creating this project, some of which include finding good data to train the model, researching how to use Flask to deploy a TensorFlow model, and tuning the model to accurately predict compost/trash. ## Accomplishments that I'm proud of I am proud to have created a fully functioning web application with an everyday use case that could help both environmentally and economically. I am also proud to have successfully deployed TensorFlow through Flask for the first time. ## What I learned I learned a lot about Flask, and how to deploy machine learning models using it. Additionally, I learned how to better use HTML forms and CSS. ## The future of Green Been Although Green Been can already be used on iPhones/Androids (via the browser), I plan to create an app so that Green Been will be more easily usable. Additionally, I plan on further improving the model's accuracy with more fine-tuning/data.
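Here is a minimal sketch of what serving a TensorFlow image classifier behind a Flask endpoint can look like; the model path, input size, and label order are hypothetical stand-ins rather than Green Been's actual artifacts.

```python
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("compost_model.h5")  # hypothetical path
LABELS = ["compost", "trash"]                            # hypothetical label order

@app.route("/predict", methods=["POST"])
def predict():
    # The uploaded form field is assumed to be named "image".
    file = request.files["image"]
    file.save("upload.jpg")
    img = tf.keras.preprocessing.image.load_img("upload.jpg", target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis] / 255.0
    probs = model.predict(x)[0]
    return jsonify({"label": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run()
```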
## Inspiration Conventional language learning apps like Duolingo don't offer the ability to have freeform and dynamic conversations. Additionally, finding a language partner can be difficult and costly. Lingua Franca tackles this head-on by offering intermediate to advanced language learners an immersive, interactive experience. Although other apps exist that try to do the same thing, their interaction topics are hard-coded, meaning that you find yourself in the same dialogue over and over again. By leveraging LLMs, we're able to ensure that no two experiences are the same! ## What it does You stumble into a foreign land and must communicate with the townsfolk in order to get by. As you talk with them, you must reply by recording yourself speaking in their language. Aided by LLMs, their responses dynamically change depending on what you say. Additionally, at some points in the conversation, they will give you checkpoints that you must accomplish, which encourages you to talk to other villagers. After each of your responses, you can also see alternative phrases you could've said in response to the villager. Seeing these alternative responses can aid in learning vocabulary and grammar, and can help the user branch outside of their usual go-to phrases in the language they are learning. Not only can you guide the conversation to whatever topic you'd like to practice, but to keep the user engaged, we've also added backstory to the characters in the village. Each time you talk with them, you can learn something more about their relationship with others in the village! ## How we built it Development was done in Unity3D. We used Wit.ai to capture and transcribe the user's recorded responses. Those transcribed responses were then fed into an LLM from Together.ai, along with extra information to give context and guide the LLM to prompt the user to complete checkpoints. The response from the LLM becomes the villager's response to the player. We created the world using assets from the Unity Asset Store, and the character models are from Mixamo. ## What we learned Developing in VR was new to all team members, so developing for the Oculus Quest and using Unity3D was a great learning experience. LLMs aren't perfect, and working to mitigate poor, harmful, or unproductive responses is difficult. However, we took this challenge seriously while working on this app and carefully tuned our prompts to give the model the context it needed to avoid these situations. ## What's next for Lingua Franca The next steps for this app include: adding more languages, adding audio feedback from the villagers in addition to text responses, and adding new locations, characters, and worlds for more variation in the experience.
losing
## Inspiration Social media today is focused on connecting users who share similar friend circles and interests. The recommendations that show up on users' feeds are geared towards their current beliefs, leading to increasingly polarized iterations of the information in the hopes of keeping them online for extended periods of time. This way of connecting users results in "echo chambers" of experiences, ideologies, and culture. Common Grounds is fundamentally built with the purpose of connecting users who might not share the traits that traditional social media looks for in potential connections, but who could still be good friends given the opportunity. ## Additional Background Info Traditional social media is designed around the idea of a centralized network. Users accrue an increasing number of followers and likes. In recent years this has led to the rise of influencers, people who amass large followings on social media and thus have a disproportionate influence on others despite their knowledge and credibility on any given topic. This means that a retweet or post by a prominent figure on a supposed rumor could lead to its rapid spread into a common misperception. Common Grounds is aimed at building an egalitarian network of connections where one accrues "friends" not because they already have a large following or because they're a model, but because of the merit of their ideas. Further, because there are no followers or likes, users can focus on building meaningful connections in a stress-free environment. ## What it does A social platform that uses OpenAI’s GPT-3 language prediction model to generate prompts designed to **spark conversation**, and to form connections between people with seemingly **differing** opinions. * ML-generated questions and follow-up prompts * Smart matching to pair users with differing opinions * Video calling + option to mute/unmute * Option to add & remove friends * Dashboard to view weekly stats Because there's no search feature for friends, no publicly-viewable number of followers, and therefore an absence of influencers, users build authentic relationships in an environment where there isn't pressure to increase their numbers of followers or likes. ## How we built it Common Grounds is composed of two main components: a React frontend and a Python backend server. On the frontend, we use Firebase Auth for login, Twilio Video for video calling, and WebSockets for live, bidirectional client-server communication. Our frontend uses the NextJS React framework and is deployed to Vercel. On the backend, we used the AIOHTTP Python library to serve HTTP and Websocket requests, Firestore for data persistence, Twilio Video for video calling, and OpenAI GPT-3 for intelligent discussion prompt generation. Our backend is deployed to Azure web apps. ## Challenges we ran into * Designing a cohesive user experience * Deploying the backend server and setting up SSL * Complex state management and WebSocket connection issues on the frontend ## What's next for Common Grounds * Closed Captioning: for increased accessibility for those who may be deaf or hearing-impaired could be extended to live language translation to increase diversity of users * Direct Messaging: allow users to message their connections, to plan times to continue their conversations * More Sophisticated Matching & Prompts: over time, learn what type of matches yield the most meaningful discussion based on statistics such as duration of call and friend rate
## Inspiration Sustainability is one of the core pillars of modern progress. We wanted to address this challenge by thinking about how we could allow for substantial improvement in sustainability by optimizing an existing system. That's why we landed on LLMs: their **meteoric rise in popularity** has changed the way millions of people search for and learn information. That being said, LLMs are **extremely inefficient** when it comes to the compute required for inference. With hundreds of millions of people relying on them for day-to-day searches, it is evident that we have reached a scale where **sustainability needs to be carefully considered**. We asked ourselves, how can we make LLMs more sustainable? Can we quantify that cost so users can understand how many resources they use/save? The key to the idea is that we wanted to propose a way to **dramatically improve sustainability with almost zero effort required** from the user's side. These are the principles that make our proposal both practical and impactful. ## What it does In essence, we leverage **vector embeddings to make LLMs more sustainable**. Every day, on ChatGPT alone, over 10 million queries are made. Even over a small period of time, query overlap is inevitable. Currently, LLMs run inference on every single query. This is unnecessary, especially when it comes to objective queries that are similar to one another. Instead of relying on inference by default, we **rely on vector-based similarity search** first. This takes approximately **1/15 of the compute** that a normal ChatGPT query would require. Now, what makes LLMs desirable is their customization of responses. We didn't want to lose this vital component by solely relying on embedded vector search. Thus, we give the user an option to request more information, which defaults to a traditional LLM query. Our approach therefore allows for sustainability that is orders of magnitude higher than before, **without compromising what people like most about LLMs**. ## How we built it For embeddings and our vector database, we used Pinecone. Our app is created with NextJS (ReactJS, TailwindCSS, NodeJS). We utilize the OpenAI API for traditional query requests. For our similarity search, we used cosine similarity, and when a query crosses our significance threshold, we return the top 0-3 such queries for any given user input. ## Challenges we ran into This was our first time working with embeddings and a vector database. Thus, we had some issues with setup and adding a new embedding to the overall vector space. We wanted the space to be dynamic so that answers generated for users can be shared by all users if someone were to ask a similar query in the future. Other than that, integrating all the required APIs was a challenge, as some functions were async while others weren't, which caused state-update issues. Luckily, with some debugging, we were able to sort it out. ## Accomplishments that we're proud of Our final version is a **fully-functional prototype** of our idea. We are also astonished by the real statistics behind the resources our system can potentially save. Additionally, we took UI extremely seriously because we wanted a system that was **intuitive and appealing** for users to use. We also wanted a clear way for them to see the benefit of using our platform. We believe we have accomplished this in a simple, yet capable UI experience. ## What we learned We learned about how to use vector embeddings for similarity search.
We also learned how to tweak the confidence threshold such that the relevant responses actually match the queries we are looking for. Above all else, we learned just how many resources are used in day-to-day usage of ChatGPT. When starting this project, we had a prediction about LLM resource consumption, but we completely underestimated just how large it would be. These learnings made us realize that **our project can have even more impact** than we had anticipated. ## What's next for SustainLLM We want to take the same processes and **apply them to other modalities** like audio and image generation. These modalities require significantly more compute than text generation, and if we could save even a small percentage of that compute, it could lead to drastic results. We are aware that creativity is a pivotal part of audio and image generation, and so we would use embeddings for lower-level things such as different pixel patterns or phonetics. That way, each generation can still be unique while consuming fewer resources. **Let’s save the environment, one LLM query at a time :)**
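As a rough sketch of the cache-then-infer flow described above, the snippet below embeds a query, checks Pinecone for a close-enough prior answer, and only falls back to a chat completion on a miss; exact call shapes vary across Pinecone/OpenAI client versions, and the index name, model names, and 0.92 threshold are illustrative assumptions rather than SustainLLM's real configuration.

```python
from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()
index = Pinecone(api_key="PINECONE_API_KEY").Index("query-cache")  # hypothetical index
THRESHOLD = 0.92  # illustrative cosine-similarity cutoff

def answer(query: str) -> str:
    vec = client.embeddings.create(model="text-embedding-ada-002",
                                   input=query).data[0].embedding
    hits = index.query(vector=vec, top_k=3, include_metadata=True).matches
    good = [h for h in hits if h.score >= THRESHOLD]
    if good:
        # Cheap path: reuse a previously generated answer.
        return good[0].metadata["answer"]
    # Expensive path: run inference once, then cache it for future users.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": query}],
    ).choices[0].message.content
    index.upsert([(str(hash(query)), vec, {"answer": reply})])
    return reply
```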
**Inspiration** The inspiration behind Block Touch comes from the desire to create an interactive and immersive experience for users by leveraging the power of Python computer vision. We aim to provide a unique and intuitive way for users to navigate and interact with a simulated world, using their hand movements to place blocks dynamically. **What it does** Block Touch utilizes Python computer vision to detect and interpret hand movements, allowing users to navigate within a simulated environment and place blocks in a virtual space. The application transforms real-world hand gestures into actions within the simulated world, offering a novel and engaging user experience. **How we built it** We built Block Touch by combining our expertise in Python programming and computer vision. The application uses computer vision algorithms to analyze and interpret the user's hand movements, translating them into commands that control the virtual world. We integrated libraries and frameworks to create a seamless and responsive interaction between the user and the simulated environment. **Challenges we ran into** While developing Block Touch, we encountered several challenges. Fine-tuning the computer vision algorithms to accurately recognize and interpret a variety of hand movements posed a significant challenge. Additionally, optimizing the application for real-time responsiveness and ensuring a smooth user experience posed technical hurdles that we had to overcome during the development process. **Accomplishments that we're proud of** We are proud to have successfully implemented a Python computer vision system that enables users to control and interact with a simulated world using their hand movements. Overcoming the challenges of accurately detecting and responding to various hand gestures represents a significant achievement for our team. The creation of an immersive and enjoyable user experience is a source of pride for us. **What we learned** During the development of Block Touch, we gained valuable insights into the complexities of integrating computer vision into interactive applications. We learned how to optimize algorithms for real-time performance, enhance gesture recognition accuracy, and create a seamless connection between the physical and virtual worlds. **What's next for Block Touch** In the future, we plan to expand the capabilities of Block Touch by incorporating more advanced features and functionalities. This includes refining the hand gesture recognition system, adding new interactions, and potentially integrating it with virtual reality (VR) environments. We aim to continue enhancing the user experience and exploring innovative ways to leverage computer vision for interactive applications.
winning
## Inspiration We really are passionate about hardware; however, many hackers in the community, especially those studying software-focused degrees, miss out on the experience of working on projects involving hardware and on experience in vertical integration. To remedy this, we came up with modware. Modware provides the toolkit for software-focused developers to branch out into hardware and/or to add some verticality to their current software stack with easy-to-integrate hardware interactions and displays. ## What it does The modware toolkit is a baseboard that interfaces with different common hardware modules through magnetic power and data connection lines as they are placed onto the baseboard. Once modules are placed on the board and are detected, the user then has three options with the modules: to create a "wired" connection between an input type module (LCD screen) and an output type module (knob), to push a POST request to any user-provided URL, or to make a GET request to pull information from any user-provided URL. These three functionalities together allow a software-focused developer to create their own hardware interactions without ever touching the tedious aspects of hardware (easy hardware prototyping), to use different modules to interact with software applications they have already built (easy hardware interface prototyping), and to use different modules to create a physical representation of events/data from software applications they have already built (easy hardware interface prototyping). ## How we built it Modware is a very large project with a very big stack: ranging from a fullstack web application with a server and database, to a desktop application performing graph traversal optimization algorithms, all the way down to sending I2C signals and reading analog voltage. We had to handle the communication protocols between all the levels of modware very carefully. One of the interesting points of communication is using neodymium magnets to conduct power and data for all of the modules to a central microcontroller. Location data is also tracked using a 9-stage voltage divider, a series circuit going through all 9 locations on the modware baseboard. All of the data gathered at the central microcontroller is then sent to a local database over WiFi to be accessed by the desktop application. Here the desktop application uses case analysis to solve the NP-hard problem of creating optimal wire connections, with proper geometry and distance rendering, as new connections are created, destroyed, and modified by the user. The desktop application also handles all of the API communications logic. The local database is also synced with a database up in the cloud on Heroku, which uses the gathered information to wrap APIs so that the modware hardware can communicate with any software that a user may write, both providing and receiving data. ## Challenges we ran into The neodymium magnets that we used were plated in nickel, a highly conductive material. However, magnets lose their magnetism when exposed to high heat, and neodymium magnets are no different. So we had to be extremely careful to solder everything correctly on the first try so as not to waste the magnetism in our magnets. The magnets also proved very difficult to get solid data, power, and voltage readings across, due to minute differences in the laser-cut holes, glue residue, etc.
We had to make both hardware and software changes to make sure that the connections behaved ideally. ## Accomplishments that we're proud of We are proud that we were able to build and integrate such a huge end-to-end project. We also ended up with a fairly robust magnetic interface system by the end of the project, allowing for single or double sized modules of both input and output types to easily interact with the central microcontroller. ## What's next for ModWare More modules!
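As a rough illustration of the 9-stage voltage-divider idea described in "How we built it", the helper below maps a raw ADC reading to one of the nine board positions; the 10-bit ADC range and evenly spaced thresholds are assumptions for the sketch, not ModWare's calibrated values.

```python
ADC_MAX = 1023   # assumed 10-bit reading from the microcontroller's analog pin
NUM_SLOTS = 9    # one tap per board position on the series divider

def slot_from_reading(adc_value: int) -> int:
    """Map a raw ADC reading from the series voltage divider to one of the
    9 board positions. Each slot taps the divider one resistor further along,
    so the expected voltages are treated here as evenly spaced."""
    step = ADC_MAX / NUM_SLOTS
    slot = int(adc_value // step) + 1
    return min(slot, NUM_SLOTS)

assert slot_from_reading(0) == 1      # lowest tap -> first position
assert slot_from_reading(1023) == 9   # full-scale reading -> last position
```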
## Inspiration We're avid hackers, and every hack we've done thus far has involved hardware. The hardest part is always setting up communication between the various hardware components -- it's like reinventing the internet protocol every time you make an IoT device. Except the internet protocol is beautiful, and your code is jank. So we decided we'd settle that problem once and for all, both for our future hacks and for hackers in general. ## What it does Now, all the code needed for an IoT device is Python. You write some Python code for your computer, some for the microcontroller, and we seamlessly integrate between them. And we help predict how much better your code base is as a result. ## How we built it Microcontrollers, because they are bare-metal, don't actually run Python. So we wrote a Python transpiler that automatically converts Python code into bare-metal-compliant C code. Then we seamlessly, securely, and efficiently transfer data between the various hardware components using our own channels protocol. The end result is that you only ever need to look at Python. Based on that and certain assumptions of usage, we model how much we are able to improve your coding experience. ## Challenges we ran into We attempted to implement a full lexical analyzer in its complex, abstract glory. That was a mistake. ## Accomplishments that we're proud of Regular expressions turned out to be expressive enough to describe the token patterns we needed, so we used regex in place of the full lexical analyzer, which was pretty interesting. More generally, however, this was a big project with many moving parts, and a large code base. The fact that our team was able to put everything together, get things done, and come up with creative solutions on the fly was fantastic. ## What we learned Organization with tools like Trello is important. Compilers are complex. Merging disparate pieces of interlocking code is a difficult but rewarding process. And many miscellaneous Python tips and tricks. ## What's next for Kharon We intend to keep updating the project to make it more robust, general, and powerful. Potential routes for this include more depth in the theory of the field, integrating more AI, or just commenting our code more thoroughly so others can understand it. This project will be useful to us and other hardware hackers in the future! -- that's why we'll keep working on this :)
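For a sense of what a regex-in-place-of-a-lexer approach can look like, here is a minimal tokenizer sketch using the standard named-group trick; the token set is illustrative and is not Kharon's actual grammar.

```python
import re

# A tiny regex-based tokenizer of the kind described above -- a stand-in for
# a full lexical analyzer; the token categories are illustrative only.
TOKEN_SPEC = [
    ("NUMBER",  r"\d+(\.\d+)?"),
    ("NAME",    r"[A-Za-z_]\w*"),
    ("OP",      r"[+\-*/=<>():,]"),
    ("NEWLINE", r"\n"),
    ("SKIP",    r"[ \t]+"),
]
TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs, dropping insignificant whitespace."""
    for m in TOKEN_RE.finditer(source):
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()

print(list(tokenize("led = read_pin(3) + 1")))
```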
## Inspiration As students with busy lives, it's difficult to remember to water your plants, especially when you're constantly thinking about more important matters. So as a solution, we thought it would be best to have an app that centralizes, monitors, and notifies users about the health of their plants. ## What it does The system is set up with 2 main components: hardware and software. On the hardware side, we have multiple sensors placed around the plant that provide input on various parameters (i.e. moisture, temperature, etc.). Once collected, the data is relayed to an online database (in our case Google Firebase), where it is then read by our front-end system: an Android app. The app currently allows user authentication and the ability to add and delete plants. ## How we built it **The Hardware**: The hardware setup for this hack was iterated multiple times through the hacking phase due to setbacks with the hardware we were given. Originally we planned on using the DragonBoard 410c as a central hub for all the sensory input before transmitting it via WiFi. However, the DragonBoard we got from the hardware lab had a corrupted version of Windows IoT, which meant we had to flash the entire device before starting. After flashing, we learned that DragonBoards (and Raspberry Pis) lack support for analog input, meaning the circuit required some sort of ADC (analog-to-digital converter). Afterwards, we decided to use ESP-8266 WiFi boards to send data, as they better reflected the form factor of a realistic prototype and because the board itself supports analog input. In addition, we used an Arduino UNO to power the moisture sensor because it required 5V and the ESP outputs 3.3V (the Arduino acts as a 5V regulator). **The Software**: The app was made in Android Studio and was built with user interaction in mind: users authenticate themselves and add their corresponding plants, which in the future would each have sensors. The app is built with scalability in mind as it uses Google Firebase for user authentication and sensor data logging. ## Challenges we ran into The lack of support for the DragonBoard left us with many setbacks: endless boot cycles, lack of IO support, and flashing multiple OSs on the device. What put us off the most was having people tell us not to use it because of its difficulty. However, we still wanted to incorporate it in some way. ## Accomplishments that we're proud of * Flashing the DragonBoard and booting it with Windows IoT Core * A working hardware/software setup that tracks the life of a plant using sensory input ## What we learned * How to program the DragonBoard (in both Linux and Windows) * How to incorporate Firebase into our hack ## What's next for Dew Drop * Take it to the garden world where users can track multiple plants at once and even support a self-watering system
winning
## Inspiration With COVID-19 forcing many public spaces and recreational facilities to close, people have been spending time outdoors more than ever. It can be boring, though, to visit the same places in your neighbourhood all the time. We created Explore MyCity to generate trails and paths for London, Ontario locals to explore areas of their city they may not have visited otherwise. Using machine learning, we wanted to creatively improve people's lives. ## Benefits to Community This application brings many benefits to the London community. Firstly, the web app encourages people to explore London, which can lead to them accessing city resources and visiting small businesses. It also motivates the community to be physically active and improve their physical and mental health. ## What it does The user visits the web page and starts the application by picking their criteria for what kind of path they would like to walk on. The two criteria are 1) types of attractions, and 2) distance of the desired path. For the types of attractions, users can select whether they would like to visit trails, parks, public art, and/or trees. For the distance of their desired path, users can pick between ranges of 1-3, 3-5, 5-7, and 7-10 kilometres. Once users have specified their criteria, they click the Submit button and the application uses the GoogleMaps API to display a trail calculated from those criteria. The trail will be close in length to the input number of kilometres. Users can also report a maintenance issue they notice on a path by using the dropdown menu on the home page to report an issue. These buttons lead to Service London's page where issues are reported. ## How we built it The program uses data from opendata.london.ca for the types of attractions and their addresses or coordinates, and uses them as .csv files. A Python file reads the .csv files, parses each line for the coordinates or address of each attraction in each file, and stores it in a list. The front-end of the web app was made using HTML pages. To connect the front and back ends of the web app, we created a Flask web framework. We also connected the GoogleMaps API through Flask. To get user input, we requested data through Flask and stored it in variables, which were then used as inputs to a Python function. When the app loads, the user's current location is retrieved through the browser using geolocation. Using these inputs, the Python file checks which criteria the user selected and calls the appropriate functions. These functions calculate the distance between the user's location and the nearest attraction; if the distance is within the distance input given, the attraction is added as a stop on the user's path, until no more attractions can be added. This list of addresses and coordinates is given to the API, which displays the GoogleMaps route to users. ## Challenges we ran into A challenge we ran into was creating the Flask web framework; our team had never done this before so it took some time to learn how to implement it properly. We also ran into challenges with pulling Python variables into JavaScript. Using the API to display information was also a challenge because we were unfamiliar with the APIs and had to spend more time learning about them. ## Accomplishments that we're proud of An accomplishment we're proud of is implementing the GoogleMaps API and GeocodingAPI to create a user-friendly display like that of GoogleMaps.
This brought our app to another level in terms of display and was an exciting feature to include. We were also proud of the large amounts of data parsing we did on the government’s data and how we were able to use it so well in combination with the two APIs mentioned above. A big accomplishment was also getting the Flask web framework to work properly, as it was a new skill and was a challenge to complete. ## What we learned In this hack, we learned how to successfully create a Flask web framework, implement APIs, link python files between each other, and use GitHub correctly. We learned that time management should be a larger priority when it comes to timed hackathons and that ideas should be chosen earlier, even if the idea chosen is not the best. Lastly, we learned that collaboration is very important between team members and between mentors and volunteers. ## What's next for Explore MyCity Explore MyCity’s next steps are to grow its features and expand its reach to other cities in Ontario and Canada. Some features that we would add include: * An option to include recreational (ie. reaching a tennis court, soccer field, etc.) and errand-related layovers; this would encourage users to take a walk and reach the desired target * Compiling a database of paths generated by the program and keeping a count of how many users have gone on each of these paths. The municipality can know which paths are most used and which attractions are most visited and can allocate more resources to those areas. * An interactive, social media-like feature that gives users a profile; users can take and upload pictures of the paths they walk on, share these with other local users, add their friends and family on the web app, etc.
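As a rough sketch of the distance check and greedy stop-adding logic described in "How we built it", here is a small Python example using the haversine formula; the function names and nearest-first strategy are illustrative assumptions, not necessarily Explore MyCity's exact algorithm.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def build_path(start, attractions, max_km):
    """Greedily add the nearest unused attraction until the next hop would
    push the walk past the requested distance. `attractions` is a list of
    (lat, lon) pairs parsed from the open-data CSVs."""
    path, here, total = [start], start, 0.0
    remaining = list(attractions)
    while remaining:
        nxt = min(remaining, key=lambda p: haversine_km(here, p))
        hop = haversine_km(here, nxt)
        if total + hop > max_km:
            break
        path.append(nxt)
        total += hop
        here = nxt
        remaining.remove(nxt)
    return path, total
```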
## Inspiration I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, having a home to feel safe in during this scary time. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD, and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate tragic repercussions caused by the pandemic. ## What it does * You can log in with your email and generate a personalised interface with your and your family's (or whoever you're travelling with's) vaccine data * Universally generated QR code after the input of information * To-do list prior to travel to increase comfort and organisation * Travel itinerary and calendar synced onto the app * Country-specific COVID-related information (quarantine measures, mask mandates, etc.) all consolidated in one destination * Tourism section with activities to do in a city ## How we built it The project was built using Google QR-code APIs and Glideapps. ## Challenges we ran into I first proposed this idea to my first team, and it was very well received. I was excited for the project; however, little did I know that many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon, as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding, and visualising the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work, but in the end, I learned an incredible amount with the help of Google. ## Accomplishments that we're proud of I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features in the app as I had hoped, I believe that the execution was solid and I'm proud of the purpose my application held and conveyed. ## What we learned I learned that a trio of resilience, working hard, and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you're made of and what you're capable of achieving. This is definitely the first of many hackathons I hope to attend, and I'm thankful for all the technical as well as soft skills I have acquired from this experience. ## What's next for FlightBAE Utilising Geotab or other geographic software to create a logistical approach to solving the distribution of oxygen in India, as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution regarding vaccine inequity as it is a current reality for too many.
## Inspiration We all moved to Hamilton from around Ontario and were looking for things to do when we first started at McMaster. Unfortunately, whenever we found fun activities, they were often too far away (for students without cars) or the weather was terrible and we were unable to attend. Hamilton Tourism cites a need for 'needs-based usage', where under-utilised facilities are better advertised and then used by tourists and locals. ## CHALLENGES TD Connected Communities Challenge - Innovation Factory Challenge Defasco - Internet of Things ## What it does Our app works by providing users with a personalised list of activities based on the current weather and their location. After a user selects an activity and location, they are directed to the location via Google Maps. ## How we built it At first, we wanted to build something with React, because that's what we knew as front-end developers; however, we decided that a mobile app would be a better choice. We learned React Native while building the front end, and included libraries to help with functionality, such as axios for reaching out to the web and React Navigation for navigating the app. ## Challenges we ran into The biggest challenge for our project was connecting the front and back end of our project. We had difficulty using Amazon Web Services and getting certification for our domain. ## Accomplishments that we are proud of We learned a lot from this project. On the back end we learned how to work with Google's Maps API as well as the Dark Sky and Hamilton Data APIs. On the front end we learned about React Native and how to optimise the usage of libraries. Overcoming the challenge of connecting the front and back end together was our greatest accomplishment. ## What's next for PARCC We are looking to provide even more personalisation as we continue to develop: using Beautiful Soup to web-scrape for current events, and allowing users to filter the activities based on their interests. As we collect more data we will be able to offer a better experience to users and give tourism information to the city of Hamilton.
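As a rough sketch of the weather-and-location filtering described above (the activity entries, field names, and weather categories here are assumptions rather than PARCC's real data model, and the API calls are stubbed out entirely):

```
ACTIVITIES = [
    # Placeholder entries standing in for the Hamilton open-data feed.
    {"name": "Bayfront Park Trail", "outdoor": True, "lat": 43.271, "lon": -79.873},
    {"name": "Art Gallery of Hamilton", "outdoor": False, "lat": 43.257, "lon": -79.871},
    {"name": "Gage Park", "outdoor": True, "lat": 43.243, "lon": -79.830},
]

BAD_WEATHER = {"rain", "snow", "sleet", "thunderstorm"}

def recommend(user_lat, user_lon, weather, limit=5):
    # Keep indoor options when the forecast is bad, everything otherwise,
    # then order by a rough squared-degree distance (fine at city scale).
    ok = [a for a in ACTIVITIES if not (weather in BAD_WEATHER and a["outdoor"])]
    ok.sort(key=lambda a: (a["lat"] - user_lat) ** 2 + (a["lon"] - user_lon) ** 2)
    return ok[:limit]

print(recommend(43.2609, -79.9192, weather="rain"))
```

The chosen activity's coordinates are then what gets handed to Google Maps for directions.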
partial
## Overview We made a gorgeous website to plan flights with Jet Blue's data sets. Come check us out!
## Inspiration As Engineering students, we tend to spend a lot of our time outside classes in the library working away on assignments. In the days following, we might even brag about how much time we spent working, and how much we were able to get done. However, up until now, there wasn't any concrete way to show this and compare with one another in this pseudo-competition. ## What it does We created a web app that automatically tracks how long you spend studying in the library so that you can compete with friends and even make it to the top of the Study Spot leaderboard! ## How we built it Your location is automatically detected using the Google Nearby Places API, which biases its results to detect if a library is within a set radius of you. Then, once your timer has started, it tracks your score and how long you've been studying for. Once you stop studying (by clicking the "stop studying" button or simply leaving the location), the score is sent to the back-end database. These scores are compared and ultimately culminate in a leaderboard screen. ## Challenges we ran into A lot of the team did not have previous experience with APIs. So naturally, we ran into a couple of issues when working with the Verbwire API. We were, however, able to find a workaround to ensure that our code works how we hoped! ## What we learned Colin - Learning the basics of backend development and API calls Jinwoo - Getting familiar with React.js Jingyue - Front-end design (tailwind.css), React.js Yax - How to use Firebase and Next.js authentication ## What's next for Study Spot We have a couple of ideas for the development of Study Spot. The first is that, instead of using and displaying the NFT mints individually, we hope to mint unique NFTs for each individual user. Another idea we hope to implement is a "search" feature, which can help the user navigate to the nearest library; this is particularly useful when working in a new location or area you might not know very well.
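The library-detection step could be approximated with a request like the one below. The endpoint and parameters follow Google's Places Nearby Search REST API, but the key, radius, and response handling are simplified placeholders rather than the app's exact implementation.

```
import requests

PLACES_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_library(lat, lng, api_key, radius_m=150):
    # Ask the Places API for libraries within radius_m of the user.
    params = {
        "location": f"{lat},{lng}",
        "radius": radius_m,
        "type": "library",
        "key": api_key,
    }
    results = requests.get(PLACES_URL, params=params).json().get("results", [])
    return results[0]["name"] if results else None

# Start or skip the timer based on whether a library comes back.
library = nearby_library(43.4723, -80.5449, api_key="YOUR_KEY")
if library:
    print("Start studying at", library)
else:
    print("No library nearby - timer stays off")
```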
## Inspiration [Nam's jetBlue Flight Price Tracker](http://flights.kevinnam.me/jetblue/) is the perfect website for users who want to regularly check up on deals on flights in one easy and simple page. Other sites are often too hectic and messy, pushing users and potential customers away. It is usually by word of mouth or a stroke of luck in social media content that users are exposed to good flight deals. We wanted to consolidate information onto one easy and slick page where potential customers can see the lowest prices to various destinations at a glance. ## What it does **Nam's jetBlue Flight Price Tracker** is a niche website that displays flight ticket prices for various airport destinations from various airport origins as offered by jetBlue. The goal of the site is to keep users up to date with potential deals on cheap flight tickets and lead users straight to jetBlue's checkout page. Cards are shown for a given destination airport with a list of budget prices for a given date range. The prices in green indicate the cheapest flight tickets. Users can also scroll horizontally to view more prices (if there are any). ## How we built it The web application was built using **Node.js** with the **Express Framework**. It utilizes the given data set to generate prices from origin to destination as offered by jetBlue. All pages are dynamically created with **Hogan.js**, alongside dynamic URLs, allowing users to search for flight deals from almost any jetBlue-supported origin airport. The page itself is fully responsive for desktop, tablet, and phone users. ## Challenges we ran into The biggest challenge was dealing with the given data set. Designing the structure of the map that organizes all our data was the key to succeeding in this project. ## Accomplishments that we're proud of The clean, slick look of it and the clean, slick back-end map behind it. We even implemented when the International Space Station would be over a certain destination! ## What we learned An organized and thoughtful architecture is what makes or breaks an application. ## What's next for Nam's jetBlue Flight Price Tracker Future iterations of the web application would include adding email subscriptions/notifications for deals, getting the most up-to-date flight information straight from the jetBlue API, and including relevant hotel/lodging prices and deals for the desired destination.
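The back-end "map" is the interesting part. The real site runs on Node.js/Express, but the same idea reduces to a nested structure like this Python sketch, where the CSV column names are invented for illustration:

```
import csv
from collections import defaultdict

def load_prices(path):
    # fares[origin][destination][date] -> lowest fare seen for that day.
    fares = defaultdict(lambda: defaultdict(dict))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: origin, destination, date, fare
            o, d, day = row["origin"], row["destination"], row["date"]
            fare = float(row["fare"])
            prev = fares[o][d].get(day)
            fares[o][d][day] = fare if prev is None else min(prev, fare)
    return fares

def cheapest(fares, origin, destination, n=3):
    # Return the n cheapest (date, fare) pairs for one route; these are the
    # green prices on a destination card.
    days = fares[origin][destination]
    return sorted(days.items(), key=lambda kv: kv[1])[:n]

# fares = load_prices("jetblue_fares.csv")
# print(cheapest(fares, "BOS", "FLL"))
```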
partial
## Inspiration 1. Affordable pet doors with simple "flap" mechanisms are not secure 2. Potty-trained pets require the door to be manually opened (e.g. ring a bell, scratch the door) ## What it does The puppy *(or cat, we don't discriminate)* can exit without approval as soon as the sensor detects an object within the threshold distance. When entering back in, the ultrasonic sensor triggers a signal that something is at the door, and the camera takes a picture and sends it to the owner's phone through a web app. The owner may approve or deny the request depending on the photo. If the owner approves the request, the door opens automatically. ## How we built it Ultrasonic sensors relay the distance from the sensor to an object to the Arduino, which sends this signal to the Raspberry Pi. The Raspberry Pi program handles the stepper motor movement (rotate ~90 degrees CW and CCW) to open and close the door and relays information to the Flask server to take a picture using the Kinect camera. This photo is displayed on the web application, where approval of the request opens the door. ## Challenges we ran into 1. Connecting everything together (Arduino, Raspberry Pi, frontend, backend, Kinect camera) despite each component working well individually 2. Building a cardboard prototype with limited resources = lots of tape & poor wire management 3. Using multiple different streams of I/O and interfacing with each concurrently ## Accomplishments that we're proud of This was super rewarding as it was our first hardware hack! The majority of our challenges lay in the camera component, as we were unfamiliar with Kinect, but we came up with a hack-y solution and nothing had to be hardcoded. ## What we learned Hardware projects require a lot of troubleshooting because the sensors will sometimes interfere with each other or the signals are not processed properly when there is too much noise. Additionally, with multiple different pieces of hardware, we learned how to connect all the subsystems together and interact with the software components. ## What's next for PetAlert 1. Better & more consistent photo quality 2. Improve the frontend notification system (consider push notifications) 3. Customize 3D prints to secure components 4. Use thermal instead of ultrasound 5. Add sound detection
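A rough sketch of the Raspberry Pi side of this flow is below. It assumes the Arduino prints a distance in centimetres over USB serial and that the Flask server exposes `/photo` and `/approval` endpoints; those names, the port, and the threshold are placeholders, not the project's actual interface.

```
import time
import requests
import serial  # pyserial

THRESHOLD_CM = 30
SERVER = "http://localhost:5000"

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def read_distance_cm():
    # The Arduino is assumed to print one integer distance per line.
    line = arduino.readline().decode(errors="ignore").strip()
    return int(line) if line.isdigit() else None

while True:
    distance = read_distance_cm()
    if distance is not None and distance < THRESHOLD_CM:
        # Something is at the door: ask the server to snap a Kinect photo,
        # then check whether the owner has approved it from the web app.
        requests.post(f"{SERVER}/photo")
        status = requests.get(f"{SERVER}/approval").json().get("status")
        if status == "approved":
            print("Opening door (stepper ~90 degrees CW)")   # stepper call goes here
            time.sleep(5)
            print("Closing door (stepper ~90 degrees CCW)")
    time.sleep(0.2)
```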
## Inspiration We as a team shared the same interest in knowing more about Machine Learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information available from the camera data, we landed on the idea of using the raw footage itself and looking for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We have set up a signal that, if performed in front of the camera, a machine learning algorithm can detect; it then notifies authorities that they should check out this location, whether for the possibility of catching a potentially suspicious individual or simply to be present to keep civilians safe. ## How we built it First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting pieces together, we were able to extract video footage from the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project Donya: Getting to know the basics of how machine learning works Alok: How to deal with unexpected challenges and look at them as a positive change Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information. ## What's next for Smart City SOS Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
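A stripped-down sketch of the detection loop might look like the following. The pre-trained pose model is hidden behind a stub (`detect_keypoints` returns made-up coordinates here), and the "distress signal" check simply looks for both wrists held above the head; neither detail is taken from the actual implementation.

```
import cv2

def detect_keypoints(frame):
    # Stand-in for the pre-trained pose model: returns pixel coordinates
    # for a few named joints. Replace with a real model's output.
    h, w = frame.shape[:2]
    return {"nose": (w // 2, h // 3),
            "left_wrist": (w // 3, h // 5),
            "right_wrist": (2 * w // 3, h // 5)}

def is_distress(kp):
    # Both wrists above the head (smaller y means higher in the image).
    return (kp["left_wrist"][1] < kp["nose"][1]
            and kp["right_wrist"][1] < kp["nose"][1])

cap = cv2.VideoCapture("camera_feed.mp4")  # or the city camera stream URL
frame_id = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frame_id % 15 == 0:  # only analyze every 15th frame to keep playback smooth
        if is_distress(detect_keypoints(frame)):
            print(f"Distress signal detected at frame {frame_id} - notify authorities")
    frame_id += 1
cap.release()
```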
## Inspiration We wanted to create a website that provides both entertainment for people and contributes to social good. We were inspired by both the Jackbox game Tee K.O. and the website freerice.com in that every time a game is completed an item of clothing is donated to people who cannot afford clothing. ## What it does Our overall idea for the project was that the website would host a game room for people to compete in T-shirt designing. The player would be able to submit multiple drawings using our whiteboard and a few slogans to be randomly matched with the drawings. The T-shirt designs from the participants, complete with the design and the slogan, would be randomly matched up against each other in tournament style. The participants except the designers themselves would then vote for the best drawing in each round. The winner then has the opportunity to donate an item of clothing under their name to a person in need free of charge. The cost would be covered by our possible sponsors and/or ad revenue. Currently, the features that we can showcase are: * The website and its design, completed with HTML/CSS and JavaScript using Bootstrap. * The whiteboard that the users can use to submit two drawings and a form that can be used to send five slogans, created with HTML/CSS, JavaScript, and Express.js. ## What's next for ChampionShirt We hope to complete our proposed features and make the website functional, where users can expect to participate in a fun game against other people in designing silly T-shirts and also bring good in our society. Once our website is complete, we're hoping to obtain sponsors and insert ads to be able to provide clothing for the people who cannot afford them.
winning
## Inspiration Lots of applications require you to visit their website or application for initial tasks such as signing up on a waitlist to be seen. What if these initial tasks could be performed at the convenience of the user on whatever platform they want to use (text, slack, facebook messenger, twitter, webapp)? ## What it does In a medical setting, allows patients to sign up using platforms such as SMS or Slack to be enrolled on the waitlist. The medical advisor can go through this list one by one and have a video conference with the patient. When the medical advisor is ready to chat, a notification is sent out to the respective platform the patient signed up on. ## How I built it I set up this whole app by running microservices on StdLib. There are multiple microservices responsible for different activities such as sms interaction, database interaction, and slack interaction. The two frontend Vue websites also run as microservices on StdLib. The endpoints on the database microservice connect to a MongoDB instance running on mLab. The endpoints on the sms microservice connect to the MessageBird microservice. The video chat was implemented using TokBox. Each microservice was developed one by one and then also connected one by one like building blocks. ## Challenges I ran into Initially, getting the microservices to connect to each other, and then debugging microservices remotely. ## Accomplishments that I'm proud of Connecting multiple endpoints to create a complex system more easily using microservice architecture. ## What's next for Please Health Me Developing more features such as position in the queue and integrating with more communication channels such as Facebook Messenger. This idea can also be expanded into different scenarios, such as business partners signing up for advice from a busy advisor, or fans being able to sign up and be able to connect with a social media influencer based on their message.
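The real endpoints run as microservices on StdLib and talk to MongoDB on mLab, but the waitlist logic boils down to something like this Flask sketch; the route names and the in-memory list are simplifications for illustration.

```
from flask import Flask, jsonify, request

app = Flask(__name__)
waitlist = []  # the real service stores this in MongoDB

@app.route("/enroll", methods=["POST"])
def enroll():
    # Called by the SMS/Slack microservice when a patient signs up.
    patient = {"name": request.json["name"], "channel": request.json["channel"]}
    waitlist.append(patient)
    return jsonify(position=len(waitlist))

@app.route("/next", methods=["POST"])
def next_patient():
    # Called when the medical advisor is ready; the caller then notifies the
    # patient on whichever platform they signed up from and starts the video chat.
    if not waitlist:
        return jsonify(error="waitlist empty"), 404
    return jsonify(waitlist.pop(0))

if __name__ == "__main__":
    app.run(port=8080)
```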
## Inspiration Rates of patient nonadherence to therapies average around 50%, particularly among those with chronic diseases. One of my closest friends has Crohn's disease, and I wanted to create something that would help with the challenges of managing a chronic illness. I built this app to provide an on-demand, supportive system for patients to manage their symptoms and find a sense of community. ## What it does The app allows users to have on-demand check-ins with a chatbot. The chatbot provides fast inference, classifies actions and information related to the patient's condition, and flags when the patient’s health metrics fall below certain thresholds. The app also offers a community aspect, enabling users to connect with others who have chronic illnesses, helping to reduce the feelings of isolation. ## How we built it We used Cerebras for the chatbot to ensure fast and efficient inference. The chatbot is integrated into the app for real-time check-ins. Roboflow was used for image processing and emotion detection, which aids in assessing patient well-being through facial recognition. We also used Next.js as the framework for building the app, with additional integrations for real-time community features. ## Challenges we ran into One of the main challenges was ensuring the chatbot could provide real-time, accurate classifications and flagging low patient metrics in a timely manner. Managing the emotional detection accuracy using Roboflow's emotion model was also complex. Additionally, creating a supportive community environment without overwhelming the user with too much data posed a UX challenge. ## Accomplishments that we're proud of ✅deployed on defang ✅integrated roboflow ✅integrated cerebras We’re proud of the fast inference times with the chatbot, ensuring that users get near-instant responses. We also managed to integrate an emotion detection feature that accurately tracks patient well-being. Finally, we’ve built a community aspect that feels genuine and supportive, which was crucial to the app's success. ## What we learned We learned a lot about balancing fast inference with accuracy, especially when dealing with healthcare data and emotionally sensitive situations. The importance of providing users with a supportive, not overwhelming, environment was also a major takeaway. ## What's next for Muni Next, we aim to improve the accuracy of the metrics classification, expand the community features to include more resources, and integrate personalized treatment plans with healthcare providers. We also want to enhance the emotion detection model for more nuanced assessments of patients' well-being.
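The flagging step is conceptually simple. A hedged sketch is below; the metric names and thresholds are illustrative, not the values the app actually uses.

```
# Minimum acceptable values for a few self-reported check-in metrics.
THRESHOLDS = {"mood": 4, "pain_tolerance": 3, "sleep_hours": 5}

def flag_low_metrics(checkin):
    # Return the metrics that fall below their thresholds so the chatbot can
    # escalate or surface support resources during the check-in.
    return [name for name, minimum in THRESHOLDS.items()
            if checkin.get(name, minimum) < minimum]

print(flag_low_metrics({"mood": 2, "pain_tolerance": 5, "sleep_hours": 4}))
# -> ['mood', 'sleep_hours']
```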
## ⭐ Inspiration We've all been there—sitting in a quiet room with a mountain of notes, textbooks sprawled open, and suddenly a nagging question pops into your mind. You're tempted to pick up your phone, but you know one search might lead to an hour on social media. Then there's that longing for a little treat after a focused study session. Cue the birth idea of the "Study BUDD-E", a result of collective student experiences, caffeine highs, and a dash of techy magic. ## 🍭 What it does Introducing "Study BUDD-E", not just a study companion, but your own personal Q&A machine: 1. **Concentration Tracking**: Through its advanced sensors, Study BUDD-E is in sync with your study dynamics. Detecting your reading, typing, or pondering moments, it differentiates between genuine focus and those wandering-mind intervals. 2. **Question & Answer Buddy:** Hit a snag? Unsure about a concept? Just ask! With a vast database and smart processing, "Study BUDD-E" provides you with answers to your academic queries. No need to browse the web and risk distractions. Your BUDD-E's got your back. 3. **Study Stats:** After wrapping up your study session, prepare for some insights! "Study BUDD-E" showcases stats on your concentration levels, time well spent, and moments of diversion, helping you understand your study patterns and where you can improve. 4. **Reward System:** All work and no play makes Jack a dull boy. For every successful, focused study session, "Study BUDD-E" cheers you on by dispensing a sweet candy treat. Your academic achievements, no matter how small, deserve a sweet celebration. In a digital age where every beep and buzz can sidetrack our study mojo, "Study BUDD-E" stands as a beacon of focus, ensuring you stay on track, get answers in real-time, and celebrate the small victories. ## 🔧 How we built it Building the "Study BUDD-E" was a blend of robotics, software development, and a sprinkle of sweet creativity. Here's a behind-the-scenes look at our construction journey: 1. **Robot Base:** At the heart of "Study BUDD-E" is the Viam rover platform. We chose it for its robustness and flexibility. This provided a solid foundation upon which we could customize and bring our sweet-treat-giving academic aide to life. 2. **The Add-ons:** 3. **Camera:** We equipped BUDD-E with a camera to better understand and respond to the user's study behaviors, ensuring that the rewards and stats provided were in sync with real-time engagement. 4. **Candy Dispenser:** No reward system is complete without the rewards! Our candy dispenser is strategically integrated to give out those well-deserved treats after fruitful study sessions. 5. **Speaker:** To make the Q&A experience more interactive, we added a speaker. This allows BUDD-E to vocalize answers to any academic queries, making the study experience more engaging. 6. **The Brain - Raspberry Pi:** Orchestrating the movements, rewards, and interactions is the Raspberry Pi. This mini-computer takes care of processing, managing the camera feed, controlling the candy dispenser, and handling speaker outputs. All this while seamlessly integrating with the Viam Platform. 7. **Frontend Magic:** To offer a user-friendly interface, we developed a study page using React. This not only tracks study progress but also facilitates smooth communication between the student and "Study BUDD-E". It's sleek, intuitive, and keeps you connected with your robotic study companion. 
By amalgamating a strong robotic base with added functionalities and a dynamic front-end interface, we've aimed to make "Study BUDD-E" an essential part of every student's study routine. ## 🚧 Challenges we ran into Creating "Study BUDD-E" was an enlightening journey, but like every innovation story, it wasn’t without its hurdles. Here are some challenges that kept us on our toes: 1. **Connectivity Conundrums:** One of the main challenges was maintaining a stable connection between our systems and the robot. There were times when the connection was as elusive as the solution to a tricky math problem! Ensuring a consistent and robust link was crucial, as it affected the real-time feedback and control of the robot. After some deep troubleshooting and testing, we managed to create a more stable bridge of communication. 2. **Hardware Hurdles:** Securing all the desired hardware components wasn't a walk in the park. Due to various constraints, some components were out of reach, which led us to think on our feet. It was a masterclass in improvisation as we figured out alternative solutions that would still align with our vision for "Study BUDD-E". 3. **Balancing Acts:** Building a robot is part science and part art. One unexpected challenge was ensuring that "Study BUDD-E" remained stable and balanced, especially with all the new additions. There were a few tumbles and wobbles, but with some recalibrations and tweaks, we managed to get our BUDD-E to stand tall and steady. Each challenge presented a learning opportunity. They pushed us to refine our ideas, think outside the box, and come together as a team to bring our vision of the perfect study companion to life. ## 🏆 Accomplishments that we're proud of Building "Study BUDD-E" was no simple task, but looking back, there are several accomplishments that make us beam with pride: 1. **Resilient Connectivity:** Overcoming the connectivity challenge was a significant win. We not only managed to establish a stable connection between our systems and the robot but ensured that it remained consistent even under varying conditions. This means that students can rely on "Study BUDD-E" without worrying about sudden disruptions. 2. **Innovative Problem-Solving:** When faced with hardware shortages, instead of being deterred, we embraced the art of improvisation. Finding alternative solutions that aligned with our initial vision showcased our team's adaptability and innovation. The end result? A robot that, while slightly different from our first blueprint, embodies the essence of "Study BUDD-E" even better. 3. **Achieving Balance:** Literally and metaphorically! We not only solved the physical balancing issues of the robot but also struck a balance between user-friendly design, functionality, and entertainment. "Study BUDD-E" is stable, efficient, and fun—attributes that we believe will resonate with every student. 4. **Intuitive User Interface:** Our React-based study page is something we take immense pride in. It’s sleek, user-friendly, and bridges the gap between the student and the robot seamlessly. Seeing our users navigate it with ease and finding it genuinely beneficial makes all the coding hours worth it. 5. **Team Synergy:** Last but not least, the way our team collaborated, shared ideas, tackled challenges head-on, and remained committed to the goal is an accomplishment in itself. "Study BUDD-E" is a testament to our collective spirit, determination, and passion for making study sessions sweeter and smarter. 
In a nutshell, the journey of creating "Study BUDD-E" has been filled with challenges, but the accomplishments along the way have made it an unforgettable experience. ## 🎓 What we learned The process of creating "Study BUDD-E" has been as enlightening as a dense textbook, but way more fun! Here's a glimpse of our takeaways: 1. **The Importance of Resilience:** In the face of connectivity challenges and other unexpected hitches, we discovered the true value of resilience. Keeping the bigger picture in mind, tweaking, adjusting, and not being afraid to iterate were crucial lessons. Just as in studying, sometimes you have to revisit a problem multiple times before finding the right solution. 2. **Adaptability is Key:** When our ideal hardware was out of reach, we learned that sometimes the best solutions come from thinking on the fly. Embracing improvisation not only led to effective outcomes but also made us more versatile as innovators. 3. **Balancing Theory and Practicality:** Just like in academics, where theoretical knowledge needs practical application, building the "Study BUDD-E" taught us the significance of balancing design ideas with real-world functionality. It's one thing to imagine a feature, but another to make it work seamlessly in practice. 4. **Collaboration Overcomes Challenges:** Diverse perspectives lead to comprehensive solutions. By pooling our skills, sharing ideas, and being open to feedback, we were able to address challenges more holistically. The camaraderie we built is a reminder that teamwork amplifies results. 5. **User-Centric Design:** Through developing our React-based interface and integrating the robot's functionalities, we've come to appreciate the importance of a user-centric approach. Building something technically impressive is one thing, but ensuring it's intuitive and caters to the user's needs is what truly makes a product stand out. 6. **Continuous Learning:** Just as "Study BUDD-E" aims to enhance study sessions, the process of building it reiterated the essence of continuous learning. Whether it was diving deep into robotics, exploring new software nuances, or mastering the art of problem-solving, every step was a learning curve. In essence, the path to creating "Study BUDD-E" reinforced that the journey is as valuable as the destination, packed with insights, challenges, and growth at every turn. ## 🚀 What's next for Study BUDD-E The journey with "Study BUDD-E" has only just begun! While we're incredibly proud of what we've accomplished so far, the horizon is brimming with exciting possibilities: 1. **Version 2.0:** Building on the feedback and experiences from this prototype, we're gearing up for a more polished and refined "Study BUDD-E 2.0". This next iteration will not only enhance usability but will also showcase a sleeker design. 2. **From Hacky to High-Tech:** While hackathons are about innovation at lightning speed, which sometimes means opting for quick-fixes, our vision for "Study BUDD-E" is far grander. We aim to revisit every aspect we rushed or improvised, ensuring that the robot’s performance, durability, and user experience are top-notch. 3. **Enhanced Features:** As we refine, we're also looking to innovate! There might be new features in the pipeline that further boost the study experience. Whether it’s advanced analytics, broader question-answer capabilities, or even gamifying the study process, the possibilities are limitless. 4. **Community Engagement:** We believe in evolving with feedback. 
Engaging with students and users to understand their needs, preferences, and recommendations will play a significant role in shaping the next version of "Study BUDD-E". 5. **Scalability:** Once the refined version is out, we're also looking at potential scalability options. Can "Study BUDD-E" cater to group study sessions? Could it be used in libraries or study halls? We're excited to explore how our little robot can make a larger impact! In essence, the future for "Study BUDD-E" is all about evolution, enhancement, and expansion. We're committed to making study sessions not just rewarding, but also revolutionarily efficient and enjoyable.
winning
## Inspiration We are all software/game devs excited by new and unexplored game experiences. We originally came to PennApps thinking of building an Amazon shopping experience in VR, but eventually pivoted to Project Em - a concept we all found more engaging. Our switch was motivated by the same force that is driving us to create and improve Project Em - the desire to venture into unexplored territory, and combine technologies not often used together. ## What it does Project Em is a puzzle exploration game driven by Amazon's Alexa API - players control their character with the canonical keyboard and mouse controls, but cannot accomplish anything relevant in the game without talking to a mysterious, unknown benefactor who calls out at the beginning of the game. ## How we built it We used a combination of C++, Python, and lots of shell scripting to create our project. The client-side game code runs on Unreal Engine 4, and is a combination of C++ classes and Blueprint (Epic's visual programming language) scripts. Those scripts and classes communicate with an intermediary server running Python/Flask, which in turn communicates with the Alexa API. There were many challenges in communicating RESTfully out of a game engine (see below for more on this), so the two-legged approach lent itself well to focusing on game logic as much as possible. Sacha and Akshay worked mostly on the Python, TCP socket, and REST communication platform, while Max and Trung worked mainly on the game, assets, and scripts. The biggest challenge we faced was networking. Unreal Engine doesn't natively support running a webserver inside a game, so we had to think outside of the box when it came to networked communication. The first major hurdle was to find a way to communicate from Alexa to Unreal - we needed to be able to communicate the natural language parsing abilities of the Amazon API back to the game. So, we created a complex system of runnable threads and sockets inside of UE4 to pipe in data (see the challenges section for more info on the difficulties here). Next, we created a corresponding client socket creation mechanism on the intermediary Python server to connect into the game engine. Finally, we created a basic registration system where game clients can register their publicly exposed IPs and ports with Python. The second step was to communicate between Alexa and Python. We utilized [Flask-Ask](https://flask-ask.readthedocs.io/en/latest/) to abstract away most of the communication difficulties. Next, we used [VaRest](https://github.com/ufna/VaRest), a plugin for handling JSON inside of Unreal, to communicate from the game directly to Alexa. The third and final step was to create a compelling and visually telling narrative for the player to follow. Though we can't describe too much of that in text, we'd love for you to give the game a try :) ## Challenges we ran into The challenges we ran into divided roughly into three sections: * **Threading**: This was an obvious problem from the start. Game engines rely on a single main "UI" thread to be unblocked and free to process for the entirety of the game's life-cycle. Running a socket that blocks for input is a concept in direct conflict with that idiom. So, we dove into the FSocket documentation in UE4 (which, according to Trung, hasn't been touched since Unreal Tournament 2...) - needless to say it was difficult. The end solution was a combination of both FSocket and FRunnable that could block at certain steps in the socket process without interrupting the game's main thread. 
Lots of stuff like this happened:

```
while (StopTaskCounter.GetValue() == 0) {
    socket->HasPendingConnection(foo);
    while (!foo && StopTaskCounter.GetValue() == 0) {
        Sleep(1);
        socket->HasPendingConnection(foo);
    }

    // at this point there is a client waiting
    clientSocket = socket->Accept(TEXT("Connected to client.:"));
    if (clientSocket == NULL)
        continue;

    while (StopTaskCounter.GetValue() == 0) {
        Sleep(1);
        if (!clientSocket->HasPendingData(pendingDataSize))
            continue;

        buf.Init(0, pendingDataSize);
        clientSocket->Recv(buf.GetData(), buf.Num(), bytesRead);
        if (bytesRead < 1) {
            UE_LOG(LogTemp, Error, TEXT("Socket did not receive enough data: %d"), bytesRead);
            return 1;
        }

        int32 command = (buf[0] - '0');
        // call custom event with number here
        alexaEvent->Broadcast(command);

        clientSocket->Close();
        break; // go back to wait state
    }
}
```

Notice a few things here: we are constantly checking for a stop call from the main thread so we can terminate safely, we are sleeping so as not to block on Accept and Recv, and we are calling a custom event broadcast so that the actual game logic can run on the main thread when it needs to. The second point of contention in threading was the Python server. Flask doesn't natively support any kind of global-to-request variables. So, the canonical approach of opening a socket once and sending info through it over time would not work, regardless of how hard we tried. The solution, as you can see from the above C++ snippet, was to repeatedly open and close a socket to the game on each Alexa call. This ended up causing a TON of problems in debugging (see below for difficulties there) and lost us a bit of time. * **Network Protocols**: Of all the things to deal with in terms of networks, we spent the largest amount of time solving the problems over which we had the least control. Two bad things happened: Heroku rate limited us pretty early on with the most heavily used URLs (i.e. the Alexa responders). This prompted two possible solutions: migrate to DigitalOcean, or constantly remake Heroku dynos. We did both :). DigitalOcean proved to be more difficult than normal because the Alexa API only works with HTTPS addresses, and we didn't want to go through the hassle of using LetsEncrypt with Flask/Gunicorn/Nginx. Yikes. Switching Heroku dynos it was. The other problem we had was with timeouts. Depending on how we scheduled socket commands relative to REST requests, we would occasionally time out on Alexa's end. This was easier to solve than the rate limiting. * **Level Design**: Our levels were carefully crafted to cater to the dual-player relationship. Each room and lighting balance was tailored so that the player wouldn't feel totally lost, but at the same time would need to rely heavily on Em for guidance and path planning. ## Accomplishments that we're proud of The single largest problem we've come together to solve has been the integration of standard web protocols into a game engine. Apart from matchmaking and data transmission between players (which are both handled internally by the engine), most HTTP-based communication is undocumented or simply not implemented in engines. We are very proud of the solution we've come up with to accomplish true bidirectional communication, and can't wait to see it implemented in other projects. We see a lot of potential for other AAA games to use voice control as not only an additional input method for players, but a way to catalyze gameplay with a personal connection. On a more technical note, we are all so happy that... 
THE DAMN SOCKETS ACTUALLY WORK YO ## Future Plans We hope to release the toolchain we've created for Project Em as a public GitHub repo and Unreal plugin for other game devs to use. We can't wait to see what other creative minds will come up with! ### Thanks Much <3 from all of us, Sacha (CIS '17), Akshay (CGGT '17), Trung (CGGT '17), and Max (ROBO '16). Find us on GitHub and say hello anytime.
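For completeness, the Python half of the socket dance described above looks roughly like this Flask-Ask sketch. The intent name, the single-digit command scheme, and the in-memory registration dict are simplified stand-ins for what actually ran on Heroku.

```
import socket

from flask import Flask, request, jsonify
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, "/alexa")
clients = {}  # game clients register their public IP/port here

@app.route("/register", methods=["POST"])
def register():
    clients["game"] = (request.json["ip"], int(request.json["port"]))
    return jsonify(ok=True)

def send_command(number):
    # Open a fresh TCP connection per Alexa call (see the threading notes above),
    # push a single-digit command, and close immediately.
    with socket.create_connection(clients["game"], timeout=2) as s:
        s.sendall(str(number).encode())

@ask.intent("OpenDoorIntent")
def open_door():
    send_command(1)  # the UE4 FRunnable parses this into a Blueprint event
    return statement("Opening the door for you.")

if __name__ == "__main__":
    app.run(port=5000)
```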
## Inspiration <https://www.youtube.com/watch?v=lxuOxQzDN3Y> Robbie's story stuck out to me as an example of the endless possibilities of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie. ## What it does We use a Google Cloud-based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script runs in the terminal, it can be used across the computer and all its applications. ## How I built it The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30-function) library that could be used to control almost anything on the computer. ## Challenges I ran into Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key. ## Accomplishments that I'm proud of We are proud of the fact that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API. ## What I learned We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more of them and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in a certain context, how to change a "one" to a "1", or how to reduce ambient noise. ## What's next for Speech Computer Control At the moment we are manually running this script through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we had developed a Chrome extension that numbers off each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
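A small sketch of the phrase-to-action idea is below, using the `speech_recognition` and `pyautogui` packages as stand-ins for the actual ~30-function library; the phrases and actions shown are examples, not the real command set.

```
import pyautogui
import speech_recognition as sr

def scroll_down():
    pyautogui.scroll(-500)

def new_tab():
    pyautogui.hotkey("ctrl", "t")  # "command" instead of "ctrl" on macOS

COMMANDS = {
    "scroll down": scroll_down,
    "new tab": new_tab,
    "type hello": lambda: pyautogui.write("hello"),
}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    print("Listening...")
    audio = recognizer.listen(mic)

phrase = recognizer.recognize_google(audio).lower()  # Google speech API under the hood
action = COMMANDS.get(phrase)
if action:
    action()
else:
    print(f"No command mapped to: {phrase!r}")
```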
# The Word on the Streets ## Installation ###### pip install -r requirements.txt ## Run ###### python Guardian\_API.py ###### Fill in the entire form, select Show and then click Next Sentiment Analysis of Twitter and Guardian News. Used Natural Language Processing (Vader Sentiment Analysis techniques) and Microsoft Cognitive Services to predict sentiment of topics from Twitter and Guardian News. Visualization of Data. Of interest in various fields in FinTech as flag indicators for Economic Catastrophies as a result of radical Political Change / Natural Disasters. It provides a quantitative estimate of the impact of Social, Political, Natural and Economic events. ## Results (Between 2016-06-01 and 2016-07-01) ### Guardian News: ![Alt text](https://github.com/ronakice/HackPrinceton/blob/master/news.png) ### Twitter: ![Alt text](https://github.com/ronakice/HackPrinceton/blob/master/twitter.png)
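For reference, the core scoring step uses VADER; a minimal standalone example (the headlines below are made up, not pulled from the Guardian or Twitter) looks like this:

```
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

headlines = [
    "Markets rally as trade talks resume",      # made-up examples
    "Severe flooding displaces thousands",
]

for text in headlines:
    scores = analyzer.polarity_scores(text)
    # 'compound' is a normalized score in [-1, 1]; averaging these per topic
    # gives the values plotted in the charts above.
    print(f"{scores['compound']:+.3f}  {text}")
```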
winning
## Inspiration Self-motivation is hard. It's time for a social media platform that is meaningful and brings a sense of achievement instead of frustration. While various pro-exercise campaigns and apps have tried to inspire people, it is difficult to stay motivated with so many other more comfortable distractions around us. Surge is a social media platform that helps solve this problem by empowering people to exercise. Users compete against themselves or new friends to unlock content that is important to them through physical activity. True friendships are formed through adversity, and we believe that users will form more authentic, lasting relationships as they compete side-by-side in fitness challenges tailored to their ability levels. ## What it does When you register for Surge, you take an initial survey about your overall fitness, preferred exercises, and the websites you are most addicted to. This survey serves as the starting point from which Surge creates your own personalized challenges: run 1 mile to watch Netflix, for example. Surge links to your phone or IoT wrist device (Fitbit, Apple Watch, etc.) and, using its own Chrome browser extension, 'releases' content that is important to the user when they complete the challenges. The platform is a 'mixed bag': sometimes users will unlock rewards such as vouchers or coupons, and sometimes they will need to complete the challenge to unlock their favorite streaming or gaming platforms. ## How we built it Back-end: We used Python Flask to run our webserver locally, as we were familiar with it and it was easy to use it to communicate with our Chrome extension's AJAX. Our Chrome extension checks the URL of whatever webpage you are on against the URLs of sites for a given user. If the user has a URL locked, the Chrome extension will display their challenge instead of the original site at that URL. We used an ESP8266 (onboard Arduino) with an accelerometer in lieu of an IoT wrist device, as none of our team members own those devices. We don't want an expensive wearable to be a barrier to our platform, so we might explore providing a low-cost fitness tracker to our users as well. We chose to use Google's Firebase as our database for this project as it supports calls from many different endpoints. We integrated it with our Python and Arduino code and intended to integrate it with our Chrome extension; however, we ran into trouble doing that, so we used AJAX to send a request to our Flask server, which then acts as a middleman between the Firebase database and our Chrome extension. Front-end: We used Figma to prototype our layout, and then converted it to a mix of HTML/CSS and React.js. ## Challenges we ran into Connecting all the moving parts: the IoT device to the database to the Flask server to both the Chrome extension and the app front end. ## Accomplishments that we're proud of Please see above :) ## What we learned Working with Firebase and Chrome extensions. ## What's next for SURGE Continue to improve our front end. Incorporate analytics to accurately identify the type of physical activity the user is doing. We would also eventually like to include analytics that gauge how easily a person is completing a task, to ensure the fitness level that they have been assigned is accurate.
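A pared-down sketch of the Flask endpoint the extension's AJAX could hit is shown below; the route name, the user store, and the challenge format are simplified placeholders for the Firebase-backed version.

```
from flask import Flask, jsonify, request

app = Flask(__name__)

# In the real app this lives in Firebase; keyed by user, then by hostname.
LOCKS = {
    "alice": {"netflix.com": {"challenge": "Run 1 mile", "done": False}},
}

@app.route("/check")
def check():
    # The Chrome extension sends the current hostname and the user id.
    user, host = request.args.get("user"), request.args.get("host")
    lock = LOCKS.get(user, {}).get(host)
    if lock and not lock["done"]:
        return jsonify(locked=True, challenge=lock["challenge"])
    return jsonify(locked=False)

if __name__ == "__main__":
    app.run(port=5000)
```

If the response says the site is locked, the extension swaps the page for the challenge banner instead.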
## Inspiration Being sports and fitness buffs, we understand the importance of correct form. Incidentally, suffering from a wrist injury himself, Mayank thought of this idea while in a gym where he could see almost everyone using the wrong form for a wide variety of exercises. He knew that it couldn't be impossible to make something easily accessible yet accurate in recognizing wrong exercise form and, most of all, free. He was sick of watching YouTube videos and just trying to emulate the guys in them with no real guidance. That's when the idea for Fi(t)nesse was born, and luckily, he met an equally health-passionate group of people at PennApps, which led to this hack: an entirely functional prototype that provides real-time feedback on push-up form. It also lays down an API that allows expansion to a whole array of exercises or even sports movements. ## What it does A user is recorded doing the push-up twice, from two different angles. Any phone with a camera can fulfill this task. The data is then analyzed and, within a minute, the user has access to detailed feedback pertaining to the 4 most common push-up mistakes. The application uses custom algorithms to detect these mistakes and their extent, and uses this information to provide a custom numerical score to the user for each category. ## How we built it Human pose detection with a simple camera was achieved with OpenCV and deep neural nets. We tried using both the COCO and the MPI datasets for training data and ultimately went with COCO. We then set up an Apache server running Flask on Google Compute Engine to serve as an endpoint for the input videos. Due to lack of access to GPUs, a 24-core machine on the Google Cloud Platform was used to run the neural nets and generate pose estimations. The Fi(t)nesse website was coded in HTML+CSS while all the backend was written in Python. ## Challenges we ran into Getting the pose detection right and consistent was a huge challenge. After a lot of tries, we ended up with a model that works surprisingly accurately. Combating the computing power requirements of a large neural network was also a big challenge. We were initially planning to do the entire project on our local machines, but when they kept slowing down to a crawl, we decided to shift everything to a VM. The algorithms to detect form mistakes and generate scores for them were also a challenge, since we could find no mathematical information about the right form for push-ups, or any of the other popular exercises for that matter. We had to come up with the algorithms and tweak them ourselves, which meant we had to do a LOT of push-ups. But to our pleasant surprise, the application worked better than we expected. Getting a reliable data pipeline set up was also a challenge since everyone on our team was new to deployed systems. A lot of hours and countless tutorials later, even though we couldn't reach exactly the level of integration we were hoping for, we were able to create something fairly streamlined. Every hour of the struggle taught us new things, though, so it was all worth it. ## Accomplishments that we're proud of -- Achieving accurate single-body human pose detection, with support for multiple bodies as well, from a simple camera feed. -- Detecting the right frames to analyze from the video, since running every frame through our processing pipeline was too resource intensive -- Developing algorithms that can detect the most common push-up mistakes. 
-- Deploying a functioning app ## What we learned Almost every part of this project involved a massive amount of learning for all of us. Right from deep neural networks to using huge datasets like COCO and MPI, to learning how deployed app systems work and learning the ins and outs of the Google Cloud Service. ## What's next for Fi(t)nesse There is an immense amount of expandability to this project. Adding more exercises/movements is definitely an obvious next step. Also interesting to consider is the 'gameability' of an app like this. By giving you a score and sassy feedback on your exercises, it has the potential to turn exercise into a fun activity where people want to exercise not just with higher weights but also with just as good form. We also see this as being able to be turned into a full-fledged phone app with the right optimizations done to the neural nets.
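Much of the form checking reduces to joint angles computed from detected keypoints. A simplified example is below; the keypoint coordinates are fabricated and the 90-degree cutoff is just an illustrative threshold, not a tuned value from the app.

```
import numpy as np

def joint_angle(a, b, c):
    # Angle at point b (in degrees) formed by segments b->a and b->c.
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Example keypoints (x, y) in image coordinates at the bottom of a push-up rep.
shoulder, elbow, wrist = (300, 300), (360, 310), (330, 370)

angle = joint_angle(shoulder, elbow, wrist)
depth_ok = angle <= 90  # a smaller elbow angle means a deeper rep
print(f"Elbow angle: {angle:.1f} degrees, depth OK: {depth_ok}")
```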
## Inspiration **Read something, do something.** We constantly encounter articles about social and political problems affecting communities all over the world – mass incarceration, the climate emergency, attacks on women's reproductive rights, and countless others. Many people are concerned or outraged by reading about these problems, but don't know how to directly take action to reduce harm and fight for systemic change. **We want to connect users to events, organizations, and communities so they may take action on the issues they care about, based on the articles they are viewing** ## What it does The Act Now Chrome extension analyzes articles the user is reading. If the article is relevant to a social or political issue or cause, it will display a banner linking the user to an opportunity to directly take action or connect with an organization working to address the issue. For example, someone reading an article about the climate crisis might be prompted with a link to information about the Sunrise Movement's efforts to fight for political action to address the emergency. Someone reading about laws restricting women's reproductive rights might be linked to opportunities to volunteer for Planned Parenthood. ## How we built it We built the Chrome extension by using a background.js and content.js file to dynamically render a ReactJS app onto any webpage the API identified to contain topics of interest. We built the REST API back end in Python using Django and Django REST Framework. Our API is hosted on Heroku, the chrome app is published in "developer mode" on the chrome app store and consumes this API. We used bitbucket to collaborate with one another and held meetings every 2 - 3 hours to reconvene and engage in discourse about progress or challenges we encountered to keep our team productive. ## Challenges we ran into Our initial attempts to use sophisticated NLP methods to measure the relevance of an article to a given organization or opportunity for action were not very successful. A simpler method based on keywords turned out to be much more accurate. Passing messages to the back-end REST API from the Chrome extension was somewhat tedious as well, especially because the API had to be consumed before the react app was initialized. This resulted in the use of chrome's messaging system and numerous javascript promises. ## Accomplishments that we're proud of In just one weekend, **we've prototyped a versatile platform that could help motivate and connect thousands of people to take action toward positive social change**. We hope that by connecting people to relevant communities and organizations, based off their viewing of various social topics, that the anxiety, outrage or even mere preoccupation cultivated by such readings may manifest into productive action and encourage people to be better allies and advocates for communities experiencing harm and oppression. ## What we learned Although some of us had basic experience with Django and building simple Chrome extensions, this project provided new challenges and technologies to learn for all of us. Integrating the Django backend with Heroku and the ReactJS frontend was challenging, along with writing a versatile web scraper to extract article content from any site. ## What's next for Act Now We plan to create a web interface where organizations and communities can post events, meetups, and actions to our database so that they may be suggested to Act Now users. 
This update will not only make our application more dynamic but will further stimulate connection by introducing a completely new group of people to the application: the event hosts. This update would also include spatial and temporal information, making it easier for users to connect with local organizations and communities.
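The "simpler method based on keywords" mentioned above amounts to counting matches per cause. A small sketch of that kind of scoring is below; the keyword lists and cutoff are illustrative, not the ones Act Now ships with.

```
import re

CAUSES = {
    # Illustrative keyword lists; the real mapping links causes to actions/orgs.
    "climate": {"climate", "emissions", "wildfire", "carbon", "warming"},
    "reproductive_rights": {"abortion", "reproductive", "planned parenthood"},
    "mass_incarceration": {"incarceration", "prison", "sentencing", "bail"},
}

def score_article(text, threshold=2):
    # Count keyword hits per cause and return the best match above the cutoff.
    joined = " ".join(re.findall(r"[a-z']+", text.lower()))
    scores = {cause: sum(joined.count(k) for k in keywords)
              for cause, keywords in CAUSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

article = "New report warns rising carbon emissions will worsen wildfire seasons..."
print(score_article(article))  # -> 'climate'
```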
winning
Beverly is a dynamic, real-time sentiment analysis tool that links the Twitter world's emotions with current events. Any keyword can be analyzed and visualized upon a map of the United States to gain insight into how the population is feeling. ## Built With * [stdlib](https://stdlib.com/) - The 'serverless' web functions we built * [Vue](https://vuejs.org/) - Front-end framework * [Bing News Search API](https://azure.microsoft.com/en-us/services/cognitive-services/bing-news-search-api/) - Finding relevant current events * [Azure Text Analytics API](https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/) - Sentiment analysis of all text * [Twitter API](https://developer.twitter.com/en/docs) - Gathering relevant tweets from the US * [Bulma](https://bulma.io/) - CSS styling framework ## Authors * **Andrew McCann** - *Back-end development and ideation* * **Justin Koh** - *Front-end development and design* * **Lincoln Berkley** - *Back-end development and API hacker* * **Victor Shi** - *Front-end development and middleware connection* ## Acknowledgments * Debugging and support from Jacob at stdlib * Microsoft for tossing us an extra $50 Azure cred
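As a rough illustration of the sentiment step in the pipeline above: each tweet or headline is sent to the Text Analytics API and comes back with a score. The region, API version, and response handling below are assumptions (the production functions run on stdlib), so treat this purely as a sketch.

```
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_AZURE_KEY"}

def score_texts(texts):
    # Each document gets a sentiment score in [0, 1]; higher is more positive.
    body = {"documents": [
        {"id": str(i), "language": "en", "text": text}
        for i, text in enumerate(texts)
    ]}
    response = requests.post(ENDPOINT, headers=HEADERS, json=body).json()
    return {doc["id"]: doc["score"] for doc in response.get("documents", [])}

sample = ["Loving the new park downtown!", "Traffic on I-95 has been a nightmare."]
print(score_texts(sample))  # scores like these are averaged per state to shade the map
```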
## Inspiration As teammates, we're doing off-season internships, and one thing we agreed on is that it's **sooooo** tough to both break the ice and find engaging activities during team meetings. And what we got out of that was... **27 hours of planning**, **96 different ideas from ChatGPT** (*yeah, even this failed*), and **one incredibly fun all-nighter**. Inspired by Pictogram, Hangman, and even tools like Figma and Excalidraw, we knew something fun and interactive that could be a go-to for “water-cooler” moments would be awesome to try out during a hackathon. We also saw the need for this in our school clubs, so we decided to build a game that brings people together with a fun and competitive edge! ## What it does Skribbl.ai is a competitive drawing game where two players compete to replicate an image given through a prompt, all on a shared virtual whiteboard. Elevated with real-time video, voice, and chat, players can communicate and collaborate while racing against time to impress the AI judge, which scores the drawings based on accuracy. ## How we built it We used several key technologies: * **100ms** for real-time video and voice features. **Tribute to [Nexus](https://devpost.com/software/nexus-27zakp)** * **ChromaDB** to handle data storage for user interactions. * **tldraw** and **tldraw sync** for real-time collaborative whiteboarding primitives. * **React** and **TypeScript** to power the frontend. * **NextAuth** for user authentication and session management. Despite starting at 9 pm on the Saturday before the hackathon's end, we managed to pivot from our original idea (voice-powered music production) and complete this project in record time. ## Challenges we ran into * Our original idea of voice-powered music production wasn't compatible with the sponsor *Hume's* technology, forcing us to pivot. * The tight deadline, constant pivoting, and **beyond** late start added additional pressure, but we powered through to deliver a fully functional app by the end of the hackathon. * The Metreon WiFi, especially when building a network-heavy application, led to a lot of hotspot use and remote work. ## Accomplishments that we're proud of * We're incredibly proud of how quickly we pivoted and built a polished app with video, voice, chat, and whiteboard integration in a matter of hours. Finishing the project under such time constraints felt like a huge accomplishment. Yeah, we may not have a crazy large feature set, but we do the one thing we planned to do really well, at least we think so. ## What we learned * We learned how to adapt quickly when things don't go as planned, and we gained valuable experience integrating real-time video and collaboration features with technologies like **100ms**, **ChromaDB**, and **tldraw**. * We also gained experience in perseverance and pushing through idea droughts. Since we're working, adjusting "back" to the hackathon mindset definitely takes time. ## What's next for Skribbl.ai We're super stoked to continue improving Skribbl.ai after CalHacks. Surprisingly, especially given our execution, we see potential for the app to be used in virtual team-building exercises, school clubs, and social hangouts. Stuff like: * **Multiplayer modes**: Expand to support larger groups and team-based drawing challenges. * **Advanced AI judging**: Improve the AI to evaluate drawings based on creativity, style, and time taken, not just accuracy. * **Custom game modes**: Allow users to create custom challenges, themes, and rules for personalized gameplay. 
* **Leaderboard and achievements**: Introduce a ranking system, badges, and awards for top players. * **Mobile app**: Develop a mobile-friendly version to make the game accessible across different devices. * **Interactive spectators**: Let spectators participate in the game through voting or live commenting during matches. * **Real-time drawing hints**: Implement features where players can give or receive subtle hints during gameplay without breaking the challenge. * **Custom avatars and themes**: Offer players options to personalize their in-game experience with unique avatars, themes, and board designs. All this stuff seems super exciting to build, and we're glad to have a baseline to expand off of. Well, that's it for skribbl.ai, thanks for reading! **Note for GoDaddy:** The promo-code we tried to apply `MLHCAL24` was not working on the website. We tried the second best thing, in Vercel. ; )
## Inspiration COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought -** we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse. Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea to use computer vision to help sensory impaired users detect emotion, we generalized our use-case to emotional analytics and real-time emotion identification for video conferencing in general. ## What it does In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed. After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion **Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities/emotion recognition difficulties.** **Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.** ## How we built it Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and Agora SDK, and imported the ML model using Tensorflow.JS. ## Challenges we ran into Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle - the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available on our repository, and with access to a PC, we're confident we would have been able to use it. ## Accomplishments that we're proud of We overcame our ML roadblock and managed to produce a fully-functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility. And our two last minute accessibility add-ons! ## What we learned It's okay - and, in fact, in the spirit of hacking - to innovate and leverage on pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective. Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important - that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees. ## What's next for empath.ly From short-term to long-term stretch goals: * We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience. 
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions. * We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user.
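The 'How we built it' section above can be made a little more concrete. Below is a minimal Python sketch, not the team's actual code, of the kind of per-frame emotion detection loop it describes: grab a frame, find faces, classify each crop with a pretrained model, and append the result to the history that later feeds the analytics dashboard. The model file name, the 48x48 grayscale input shape, and the label list are illustrative assumptions; the real app runs a TensorFlow.js model in the browser at roughly sixty detections per second.

```python
# Minimal emotion-detection loop (illustrative, not the project's actual code).
# Assumes a pretrained Keras classifier saved as "emotion_model.h5" that takes
# 48x48 grayscale face crops and outputs one of the labels below.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = load_model("emotion_model.h5")
face_finder = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

history = []  # per-frame emotion records, summarized later on the dashboard
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_finder.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        history.append(LABELS[int(np.argmax(probs))])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```

Aggregating `history` per call is enough to derive the dashboard insights described above: most common emotions, changes over time, and sudden contrasts.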
losing
## 💡 Inspiration The objective of our application is to devise an effective and efficient written transmission optimization scheme, by converting esoteric text into an exoteric format. If you read the above sentence more than once and the word ‘huh?’ came to mind, then you got my point. Jargon causes a problem when you are talking to someone who doesn't understand it. Yet, we face obscure, vague texts every day - from ['text speak'](https://www.goodnewsnetwork.org/dad-admits-hilarious-texting-blunder-on-the-moth/) to T&C agreements. The most notoriously difficult to understand texts are legal documents, such as contracts or deeds. However, making legal language more straightforward would help people understand their rights better and be less susceptible to being punished or missing out on the rights they are entitled to. Introducing simpl.ai - a web application that uses NLP and artificial intelligence to recognize difficult-to-understand text and rephrase it with easy-to-understand language! ## 🔍 What it does simpl.ai intelligently simplifies difficult text for faster comprehension. Users can send a PDF file of the document they are struggling to understand. They can select the exact sentences that are hard to read, and our NLP model recognizes what elements make them tough. You'll love simpl.ai's clear, straightforward restatements - they change to match the original word or phrase's part of speech/verb tense/form, so they make sense! ## ⚙️ Our Tech Stack [![Tech-Diagram-drawio.png](https://i.postimg.cc/1RprSfYf/Tech-Diagram-drawio.png)](https://postimg.cc/gr2ZqkpW) **Frontend:** We created the client side of our web app using React.js and JSX based on a high-fidelity prototype we created using Figma. Our components are styled using the MaterialUI library, and we use Intelllex's react-pdf package for rendering PDF documents within the app. **Backend:** Python! The magic behind the scenes is powered by a combination of FastAPI, TensorFlow (TF), Torch and Cohere. Although we are newbies to the world of AI (NLP), we used a BART model and TF to create a working model that detects difficult-to-understand text! We used the following [dataset](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/complex-word-identification-dataset/cwishareddataset.zip) from Stanford University to train our [model](http://nlp.stanford.edu/data/glove.6B.zip) - it's based on several interviews conducted with non-native English speakers, where they were tasked to identify difficult words and simpler synonyms for them. Finally, we used Cohere to rephrase the sentence and ensure it makes sense! ## 🚧 Challenges we ran into This hackathon was filled with many challenges - but here are some of the most notable ones: * We purposely chose an AI area that we didn't know too much about (NLP, TensorFlow, the Cohere API), which was a challenging and humbling experience. We faced several compatibility issues with TensorFlow when trying to deploy the server. We decided to go with the AWS platform after a couple of hours of trying to figure out Kubernetes 😅 * Finding a dataset that suited our needs! If there were no time constraints, we would have loved to develop a dataset that is more focused on addressing tricky legal and technical language. Since that was not the case, we made do with a dataset that enabled us to produce a proof-of-concept. ## ✔️ Accomplishments that we're proud of * Creating a fully-functioning app with bi-directional communication between the AI server and the client. 
* Working with NLP, despite having no prior experience or knowledge. The learning curve was immense! * Able to come together as a team and move forward, despite all the challenges we faced together! ## 📚 What we learned We learned so much in terms of the technical areas; using machine learning and having to pivot from one software to the other, state management and PDF rendering in React. ## 🔭 What's next for simpl.ai! **1. Support Multilingual Documents.** The ability to translate documents and provide a simplified version in their desired language. We would use [IBM Watson's Language Translator API](https://cloud.ibm.com/apidocs/language-translator?code=node) **2. URL Parameter** Currently, we are able to simplify text from a PDF, but we would like to be able to do the same for websites. * Simplify legal jargon in T&C agreements to better understand what permissions and rights they are giving an application! * We hope to extend this service as a Chrome Extension for easier access to the users. **3. Relevant Datasets** We would like to expand our current model's capabilities to better understand legal jargon, technical documentation etc. by feeding it keywords in these areas.
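To illustrate the two-step flow described in the tech stack section above (flag hard words, then rephrase), here is a small Python sketch. The frequency heuristic stands in for the team's trained BART/TensorFlow complexity model, and `rephrase_with_cohere` is only a placeholder for their actual Cohere call and prompt; none of the names below come from the project itself.

```python
# Sketch of the simplification flow: (1) flag difficult words, (2) ask an LLM to
# restate the sentence in plain language. The frequency cutoff is a stand-in for
# the trained complexity model; the Cohere call is left as a placeholder.
import re
from collections import Counter

def load_frequencies(corpus_path: str) -> Counter:
    """Count word occurrences in a large plain-text corpus (assumed file)."""
    with open(corpus_path, encoding="utf-8") as f:
        return Counter(re.findall(r"[a-z']+", f.read().lower()))

def difficult_words(sentence: str, freqs: Counter, cutoff: int = 50) -> list[str]:
    """Words seen fewer than `cutoff` times in the corpus are flagged as hard."""
    return [w for w in re.findall(r"[a-z']+", sentence.lower()) if freqs[w] < cutoff]

def rephrase_with_cohere(sentence: str, hard: list[str]) -> str:
    """Placeholder: build a prompt asking for a plain-language restatement that
    keeps the original tense and part of speech, then send it to the Cohere API."""
    prompt = (f"Rewrite in plain language, replacing these words: {', '.join(hard)}.\n"
              f"Sentence: {sentence}")
    # `prompt` would be sent to Cohere here in the real application.
    return sentence  # placeholder result until the Cohere call is wired in
```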
## Overview We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would on a set of Smart Glasses. ## Inspiration Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others’ emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let’s see if we could use our weekend to build something that can help out! ## What it does SmartEQ determines the mood of the person in frame, based off of their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with. ## How we built it The lovely front end you are seeing on screen is built with React.js and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech to text from the microphone, conducting sentiment analysis based off of this text, and the Face API to predict the emotion of the person in frame. ## Challenges we ran into Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon and they both came solo. Together, we ended up forming a pretty awesome team :D. But that’s not to say everything went as perfectly as our newfound friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests that our free accounts permitted, encountered very weird socket-io bugs that made our entire hack break, and had to make sure Max didn’t drink less than 5 Red Bulls per hour. ## Accomplishments that we're proud of We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just using their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment analysis to develop a holistic approach to interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency. ## What we learned We learnt about web sockets, how to develop a web app using web sockets and how to debug web socket errors. 
We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral score, which makes no sense whatsoever because dogs are definitely way cuter than cats. (Zarif strongly disagrees) ## What's next for SmartEQ We would deploy our hack onto real Smart Glasses :D. This would allow us to deploy our tech in real life, first with small sample groups to figure out what works and what doesn't work, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma and can leave them unable to process facial expressions. In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees’ emotional well-being at work and implement initiatives to help if their employees are not happy.
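The write-up above says SmartEQ picks the most probable emotion by combining the Face API's facial-expression result with the sentiment of the transcribed speech, and reports a confidence percentage. The actual weighting isn't described, so the small Python sketch below uses an assumed 70/30 split purely to show the shape of that fusion step.

```python
# Illustrative fusion of the two signals SmartEQ collects: emotion probabilities
# from the Face API and a 0-1 sentiment score for the spoken words. The 70/30
# weighting and the sentiment-to-emotion mapping are assumptions, not the
# project's actual formula.
def fuse(face_probs: dict[str, float], sentiment: float,
         face_weight: float = 0.7) -> tuple[str, float]:
    """face_probs: e.g. {"happiness": 0.6, "sadness": 0.1, ...} summing to 1.
    sentiment: 0.0 (very negative) .. 1.0 (very positive)."""
    # Map the scalar sentiment onto coarse emotion buckets.
    text_probs = {
        "happiness": sentiment,
        "neutral": 1.0 - abs(2 * sentiment - 1.0),
        "sadness": 1.0 - sentiment,
    }
    combined = {}
    for emotion in set(face_probs) | set(text_probs):
        combined[emotion] = (face_weight * face_probs.get(emotion, 0.0)
                             + (1 - face_weight) * text_probs.get(emotion, 0.0))
    best = max(combined, key=combined.get)
    total = sum(combined.values()) or 1.0
    return best, round(100 * combined[best] / total, 1)


# Example: a smiling face paired with mildly positive speech.
print(fuse({"happiness": 0.55, "neutral": 0.35, "sadness": 0.10}, sentiment=0.7))
```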
## Inspiration We were motivated to tackle linguistic challenges in the educational sector after juxtaposing our personal experience with current news. There are currently over 70 million asylum seekers, refugees, or internally-displaced people around the globe, and this statistic highlights the problem of individuals from different linguistic backgrounds being forced to assimilate into a culture and language different than theirs. As one of our teammates was an individual seeking a new home in a new country, we had first hand perspective at how difficult this transition was. In addition, our other team members had volunteered extensively within the educational system in developing communities, both locally and globally, and saw a similar need with individuals being unable to meet the community’s linguistics standards. We also iterated upon our idea to ensure that we are holistically supporting our communities by making sure we consider the financial implications of taking the time to refine your language skills instead of working. ## What it does Fluently’s main purpose is to provide equitable education worldwide. By providing a user customized curriculum and linguistic practice, students can further develop their understanding of their language. It can help students focus on areas where they need the most improvement. This can help them make progress at their own pace and feel more confident in their language skills while also practicing comprehension skills. By using artificial intelligence to analyze pronunciation, our site provides feedback that is both personalized and objective. ## How we built it Developing the web application was no easy feat. As we were searching for an AI model to help us through our journey we stumbled upon OpenAI, specifically Microsoft Azure’s cognitive systems that utilize OpenAI’s comprehensive abilities in language processing. This API gave us the ability to analyze voice patterns and fluency and transcribe passages that are mentioned in the application. Figuring out the documentation as well as how the AI will be interacting with the user was most important for us to execute properly since the AI would be acting as the tutor/mentor for the students in these cases. We developed a diagram that would break down the passages read to the student phonetically and give them a score of 100 for how well each word was pronounced based on the API’s internal grading system. As it is our first iteration of the web app, we wanted to explore how much information we could extract from the user to see what is most valuable to display to them in the future. Integrating the API with the web host was a new feat for us as a young team. We were confident in our python abilities to host the AI services and found a library by the name of Flask that would help us write html and javascript code to help support the front end of the application through python. By using Flask, we were able to host our AI services with python while also continuously managing our front end through python scripts. This gave room for the development of our backend systems which are Convex and Auth0. Auth0 was utilized to give members coming into the application a unique experience by having them sign into a personalized account. The account is then sent into the Convex database to be used as a storage base for their progress in learning and their development of skills over time. 
All in all, each component of the application from the AI learning models, generating custom passages for the user, to the backend that communicated between the Javascript and Python server host that streamlines the process of storing user data, came with its own challenges but came together seamlessly as we guide the user from our simple login system to the passage generator and speech analyzer to give the audience constructive feedback on their fluency and pronunciation. ## Challenges we ran into As a majority beginning team, this was our first time working with many of the different technologies, especially with AI APIs. We need to be patient working with key codes and going through an experiment process of trying different mini tests out to then head to the major goal that we were headed towards. One major issue that we faced was the visualization of data to the user. We found it hard to synthesize the analysis that was done by the AI to translate to the user to make sure they are confident in what they need to improve on. To solve this problem we first sought out how much information we could extract from the AI and then in future iterations we would simply display the output of feedback. Another issue we ran into was the application of convex into the application. The major difficulty came from developing javascript functions that would communicate back to the python server hosting the site. This was resolved thankfully; we are grateful for the Convex mentors at the conference that helped us develop personalized javascript functions that work seamlessly with our Auth0 authentication and the rest of the application to record users that come and go. ## Accomplishments that we're proud of: One accomplishment that we are proud of was the implementation of Convex and Auth0 with Flask and Python. As python is a rare language to host web servers in and isn't the primary target language for either service, we managed to piece together a way to fit both services into our project by collaboration with the team at Convex to help us out. This gave way to a strong authentication platform for our web application and for helping us start a database to store user data onto. Another accomplishment was the transition of using a React Native application to using Flask with Python. As none of the group has seen Flask before or worked for it for that matter, we really had to hone in our abilities to learn on the fly and apply what we knew prior about python to make the web app work with this system. Additionally, we take pride in our work with OpenAI, specifically Azure. We researched our roadblocks in finding a voice recognition AI to implement our natural language processing vision. We are proud of how we were able to display resilience and conviction to our overall mission for education to use new technology to build a better tool. ## What we learned As beginners at our first hackathon, not only did we learn about the technical side of building a project, we were also able to hone our teamwork skills as we dove headfirst into a project with individuals we had never worked with before. As a group, we collectively learned about every aspect of coding a project, from refining our terminal skills to working with unique technology like Microsoft Azure Cognitive Services. We also were able to better our skillset with new cutting edge technologies like Convex and OpenAI. 
We were able to come out of this experience not only growing as programmers but also as individuals who are confident they can take on the real-world challenges of today to build a better tomorrow. ## What's next? We hope to continue to build out the natural language processing applications to offer the technology in other languages. In addition, we hope to integrate other educational resources, such as videos or quizzes, to continue building other linguistic and reading skill sets. We would also love to explore the intersection of gaming and natural language processing to see if we can make it a more engaging experience for the user. In addition, we hope to expand the ethical considerations by building a donation platform that allows users to donate money to the developing community and pay forward the generosity to ensure that others are able to benefit from refining their linguistic abilities. The money would then go to a prominent community in need that uses our platform to fund further educational resources in their community. ## Bibliography United Nations High Commissioner for Refugees. “Global Forced Displacement Tops 70 Million.” UNHCR, The UN Refugee Agency, <https://www.unhcr.org/en-us/news/stories/2019/6/5d08b6614/global-forced-displacement-tops-70-million.html>.
winning
## Inspiration Around 40% of the lakes in America are too polluted for aquatic life, swimming or fishing. Although children make up 10% of the world’s population, over 40% of the global burden of disease falls on them. Environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually. Recycling and composting alone prevented 85 million tons of waste from being dumped in 2010. Currently there are over 500 million cars in the world; by 2030 the number will rise to 1 billion, thereby doubling pollution levels. High-traffic roads have more concentrated levels of air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma and bronchitis. Inhaling air pollution takes away at least 1-2 years of a typical human life. 25% of deaths in India and 65% of deaths in Asia are a result of air pollution. Over 80 billion aluminium cans are used every year around the world. If you throw away aluminium cans, they can stay in that can form for up to 500 years or more. People aren’t recycling as much as they should; as a result, the rainforests are being cut down at approximately 100 acres per minute. On top of this, with me being near the Great Lakes and Neeral being in the Bay Area, we have both seen not only tremendous amounts of air pollution, but marine pollution as well as pollution in the great freshwater lakes around us. As a result, this inspired us to create this project. ## What it does The React Native app connects with the website Neeral made in order to create a comprehensive solution to this problem. There are five main sections in the React Native app: The first section is an area where users can collaborate by creating posts in order to reach out to others to meet up and organize events to reduce pollution. One example of this could be a passionate environmentalist who is organizing a beach trash pick-up and wishes to bring along more people. With the help of this feature, more people would be able to learn about this and participate. The second section is a petitions section where users have the ability to support local groups or sign a petition in order to enforce change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively. The third section is the forecasts tab, where users are able to retrieve data regarding various pollution data points. This includes the ability for the user to obtain heat maps of air quality, pollution and pollen levels, and to retrieve recommended procedures for not only the general public but also for special-case scenarios, using APIs. The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches theirs in order to find the appropriate action to take. This helps the end user stay calm during situations such as the one happening in California with dangerously high levels of carbon. The fifth section is an area where users are able to use machine learning in order to figure out whether they are in a place of trouble. In many instances, people don't know exactly where they are, especially when travelling or going somewhere unknown. 
With the help of machine learning, the user is able to enter certain information regarding their surroundings and the algorithm is able to decide whether they are in trouble. The algorithm has 90% accuracy and is quite efficient. ## How I built it For the React Native part of the application, I will break it down section by section. For the first section, I simply used Firebase as a backend, which allowed a simple, easy and fast way of retrieving and pushing data to cloud storage. This allowed me to spend time on other features, and due to my ever-growing experience with Firebase, this did not take too much time. I simply added a form which pushed data to Firebase, and when you go to the home page it refreshes and shows that the cloud was updated in real time. For the second section, I used NativeBase in order to create my UI and found an assortment of petitions, which I then linked and added images from their websites in order to create the petitions tab. I then used expo-web-browser to deep link the websites and open each link within the app. For the third section, I used breezometer.com’s pollution API, air quality API, pollen API and heat map APIs in order to create an assortment of data points, health recommendations and visual graphics to represent pollution in several ways. The APIs also provided information such as the most common pollutant and the protocols that different age groups and people with certain conditions should follow. With this extensive API, there were many endpoints I wanted to add in, but not all were added due to lack of time. The fourth section is very similar to the second section, as it is an assortment of links, proofread and verified to be truthful sources, so that the end user has a procedure to turn to for extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many, and as a result these procedures help the user stay calm and knowledgeable. ## Challenges I ran into API query bugs were a big issue, both in formatting the query and in mapping the data back into the UI. It took some time and made us run until the end, but we were still able to complete our project and goals. ## What's next for PRE-LUTE We hope to use this in areas where there is a lot of suffering due to extravagantly high amounts of pollution, such as in Delhi, where seeing is practically difficult due to the amount of pollution. We hope to create a finished product and release it to the App and Play Stores respectively.
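The fifth section above relies on a model that takes a few facts about the user's surroundings and decides whether they are in a polluted "trouble" zone. The write-up does not say which algorithm was used, so the sketch below is only an illustration with a random forest, made-up feature columns, and toy data; the real model reportedly reaches about 90% accuracy on its own dataset.

```python
# Minimal "am I in a trouble zone" classifier sketch. Feature names, data, and
# the choice of RandomForest are illustrative assumptions, not the project's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Columns: [AQI, PM2.5 ug/m3, nearby traffic density 0-10, visibility km]
X = np.array([
    [42, 10, 2, 12], [180, 95, 8, 3], [55, 18, 4, 9],
    [230, 140, 9, 1], [70, 25, 3, 8], [160, 80, 7, 4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = hazardous surroundings

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("in trouble?", bool(clf.predict([[150, 75, 6, 4]])[0]))
```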
## Inspiration How many times have you forgotten to take your medication and damned yourself for it? It has happened to us all, with different consequences. Indeed, missing out on a single pill can, for some of us, throw an entire treatment process out the window. Being able to keep track of our prescriptions is key in healthcare, which is why we decided to create PillsOnTime. ## What it does PillsOnTime allows you to load your prescription information, including the daily dosage and refills, as well as reminders in your local phone calendar, simply by taking a quick photo or uploading one from the library. The app takes care of the rest! ## How we built it We built the app with React Native and Expo, using Firebase for authentication. We used the built-in Expo module to access the device's camera and store the image locally. We then used the Google Cloud Vision API to extract the text from the photo. We used this data to create a (semi-accurate) algorithm which can identify key information about the prescription/medication to be added to your calendar. Finally, the event was added to the phone's calendar with the built-in Expo module. ## Challenges we ran into As our team has a diverse array of experiences, the same can be said about the challenges that each of us encountered. Some had to get accustomed to new platforms in order to design an application in less than a day, while figuring out how to build an algorithm that would efficiently analyze data from prescription labels. None of us had worked with machine learning before, and it took a while for us to process the incredibly large amount of data that the API gives back to you. Working with the permissions for writing to someone's calendar was also time-consuming. ## Accomplishments that we're proud of Going into this challenge, we faced a lot of problems that we managed to overcome, whether it was getting used to unfamiliar platforms or figuring out the design of our app. We ended up with a rather satisfying result given the time constraints, and we learned quite a lot. ## What we learned None of us had worked with ML before but we all realized that it isn't as hard as we thought!! We will definitely be exploring more of the similar APIs that Google has to offer. ## What's next for PillsOnTime We would like to refine the algorithm to create calendar events with more accuracy.
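As a rough illustration of the OCR step described above, the sketch below sends a photo to the Google Cloud Vision API and then applies a naive regex to pull a daily dose count out of the returned label text. The regex and field names are assumptions standing in for the team's "(semi-accurate)" parsing algorithm, and real prescription labels vary far more than this.

```python
# Extract label text from a prescription photo with Google Cloud Vision, then
# pull a daily dosage with a naive regex. The regex is a stand-in for the
# project's actual parsing algorithm.
import re
from google.cloud import vision

def label_text(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    # The first annotation is the full block of detected text.
    return response.text_annotations[0].description if response.text_annotations else ""

def daily_doses(text: str):
    m = re.search(r"take\s+(\d+)\s+(?:tablet|capsule|pill)s?.*?(\d+)\s+times?\s+daily",
                  text, re.IGNORECASE | re.DOTALL)
    return int(m.group(1)) * int(m.group(2)) if m else None

if __name__ == "__main__":
    text = label_text("prescription.jpg")
    print(text, daily_doses(text))
```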
## See our pitch for the app here: [Pitch](https://pitch.com/v/audiodine_pitch_deck-4rww5h)! ## 💡 Inspiration Imagine you're heading home after a long, exhausting day. You’re hungry, craving something quick and easy. A fast food drive-thru sounds perfect. You pull up, but the speaker crackles. The employee struggles to hear your order over the noise of traffic and the sound of the fryer. You repeat yourself, growing more frustrated. When you finally pull around to the window, you find the order is wrong—again. What should’ve been a quick and convenient stop becomes an experience full of delays and confusion. This isn't just an occasional hiccup; it’s a systemic issue fast-food chains face every day. McDonald's recently tried automating their drive-thru system with IBM to tackle this, but the results were far from perfect. The very technology that was supposed to speed things up became a roadblock. So, when we heard about the partnership falling apart, we knew there was an opportunity to do better—to reimagine the entire experience. And that’s how Autodine was born. We didn’t want to simply fix the problem. We wanted to elevate the drive-thru experience into something truly futuristic, part of a bigger vision for the smart cities of tomorrow. Picture this: a world where ordering at a drive-thru is not just efficient, but seamless and even enjoyable. No more misheard orders, no more repeating yourself over a crackling speaker. Instead, you enter a digital space where you can see your order in real-time, communicated clearly and accurately. Whether it’s background noise, heavy traffic, or the busiest lunch rush, Autodine handles it all with ease. Our goal is simple but powerful: to rebuild trust in fast food service by tackling the inefficiencies that frustrate customers and hurt businesses. Through Autodine, we're offering a smarter, more reliable way to handle orders that enhances the customer experience while reducing operational stress for the chains. ## ❓ What it does **Welcome to the future**—a future where cities are smarter, services are seamless, and technology helps us, not hinders us. Imagine stepping into a world where everything just works, where you no longer have to worry about miscommunication, long waits, or order mix-ups at a drive-thru. This is the world **Autodine** offers—a glimpse into the smart cities of tomorrow, where ordering food is as effortless as it should be. At Autodine, we **bring you into our vision** of a smart city. Using the latest cutting-edge technology, including **Unreal Engine 5** for immersive visualizations, we allow you to see what ordering food in the future will look like. Gone are the days of shouting your order through a crackling speaker. Instead, you enter a sleek, virtual space where you **interact directly with our smart systems** and **drive-thru agents** in real time, viewing your order with clarity. Our app isn’t just about making orders easier—it’s about **bringing you into the future** of urban living. Through Auto Dine, you experience how **automated systems** will transform smart cities, making services faster, smoother, and more reliable. We’re not just offering a solution to current problems—we’re **painting a picture** of how **technology will work for us** in tomorrow’s cities. ## 🛠️ How we built it At its core, our solution has three critical components, working together seamlessly: 1. 
**AI-Powered Large Language Model (LLM)** We designed an almost real-time system where speech-to-text conversion allows drivers to place their orders naturally and quickly. Using clever prompts and API calls to **OpenAI** and the restaurant’s web server, the system processes each order with precision, ensuring that there’s no room for error. It creates a fluid interaction between the customer and the ordering system, making communication fast and effective. 2. **Web Server Built on Django** To manage the restaurant's operations, we built a robust **Django**-based web server, modeling a database that reflects real-time restaurant activity. This server handles everything from creating orders to updating the database with incoming information, providing a structured backbone for AutoDine. It’s the beating heart of the system, ensuring that everything runs smoothly behind the scenes. 3. **Dynamic Front-End Dashboard** The customer-facing side is a beautiful, interactive dashboard that reflects real-time updates from the web server. Using **Server-Sent Events (SSE)**, the web server continuously feeds the front end with the latest status of orders, so customers always know what’s happening with their food. It’s an interface designed for clarity and simplicity, making sure users have a seamless and engaging experience. Each of these components plays a vital role in delivering a futuristic solution to an age-old problem—improving the drive-thru experience. But AutoDine goes beyond just food orders: it demonstrates the power of automation, AI, and real-time systems to shape the smart cities of tomorrow. Autodine isn’t just solving today’s problems—it’s paving the way for a smarter, more connected future. ## 🧗‍♂️ Challenges we ran into 1) Getting the real-time speech-to-text working was challenging because we were trying to achieve the most human-like conversation possible, so figuring out when to stop recording and process the audio asynchronously was difficult. We were fortunate enough to find the RealtimeSTT library, which uses different techniques to achieve almost real-time speech-to-text. 2) Having both speech-to-text and text-to-speech operating simultaneously was tricky, since both access the operating system and sometimes interfere destructively. 3) The prompt engineering turned out to be a little harder than expected. We used an LLM to parse the user input into a particular format and also interact with the web server, but it wouldn't always follow the format. So we had to chain prompts and insert system messages in between user messages. ## 🏆 Accomplishments that we're proud of We are proud of our execution speed. We came up with this idea, then checked the internet to see whether people had done it before, and apparently McDonald's and IBM failed at it. But the uncertainty of whether it would even work didn't stop us from trying, and in two days, voilà. We are proud of the scalability of our system; for example, we could use it at a hospital. Why? Our product is NOT an LLM, i.e., a text generator (though one is crucial for parsing information). Our product is a system that can interact with people and push updates to other services. So we can imagine a device in a consultation room that listens to diagnoses and conversations, or even a video camera that watches people and regularly updates information of interest. 
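For a sense of what the SSE link between the Django server and the dashboard can look like, here is a minimal sketch of a streaming view. The `orders` app, the `Order` model fields, and the one-second poll are assumptions for illustration, not AutoDine's actual code.

```python
# Minimal Django Server-Sent Events view: the dashboard opens
# EventSource("/orders/stream/") and re-renders whenever a line arrives.
# The "orders" app, Order fields, and 1-second poll are illustrative assumptions.
import json
import time

from django.http import StreamingHttpResponse

from orders.models import Order  # assumed app and model name


def order_stream(request):
    def events():
        last_seen = 0
        while True:
            # Reflect rows the ordering agent has just written to the database.
            for order in Order.objects.filter(id__gt=last_seen).order_by("id"):
                last_seen = order.id
                payload = {"id": order.id, "item": order.item, "status": order.status}
                yield f"data: {json.dumps(payload)}\n\n"
            time.sleep(1)

    response = StreamingHttpResponse(events(), content_type="text/event-stream")
    response["Cache-Control"] = "no-cache"
    return response
```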
## 📚 What we learned We learned that though accredited professionals may take months to develop products and fail sometimes, it doesn't mean that a group of students that don't even know each other are guaranteed to fail. Is that not AWESOME?! ## 🚀 What's next for Autodine 1. **User Feedback and Iteration**: * Collect feedback from initial users to identify pain points and areas for improvement. * Iterate on the app's design and functionality based on user insights to enhance user experience. 2. **Expand Use Cases**: * Explore additional industries beyond fast food, such as healthcare, retail, and hospitality. * Develop tailored features for each industry to meet specific needs and challenges. 3. **Integration with More Services**: * Integrate with various payment systems and loyalty programs to streamline transactions. * Consider partnerships with major food delivery platforms to expand reach. 4. **Enhancing AI Capabilities**: * Improve the large language model (LLM) for better natural language understanding and response generation. * Implement machine learning algorithms to predict user preferences and personalize the ordering experience. 5. **Mobile App Development**: * Create a dedicated mobile app to enhance accessibility and user engagement. * Incorporate features like geolocation to guide users to the nearest participating locations. 6. **Marketing and Awareness**: * Launch targeted marketing campaigns to raise awareness about Autodine’s benefits and features. * Engage in community outreach to demonstrate the app’s capabilities at local events or food festivals. 7. **Partnerships and Collaborations**: * Seek partnerships with restaurants and fast-food chains to pilot Autodine in real-world environments. * Collaborate with tech companies to leverage advanced technologies, such as computer vision for order verification. 8. **Focus on Data Security and Privacy**: * Ensure robust security measures are in place to protect user data and comply with privacy regulations. * Educate users about data security features to build trust. 9. **Scalability Enhancements**: * Optimize the backend infrastructure to support increased user load and data processing as the app scales. * Plan for geographical expansion, targeting regions with high drive-thru usage. 10. **Long-Term Vision**: * Continue exploring the intersection of automation and customer service to innovate beyond the initial concept. * Envision Autodine as part of a broader smart city ecosystem, contributing to seamless urban living.
winning
Team channel #43 Team Discord users - Sarim Zia #0673, Elly #2476, (ASK), rusticolus #4817, Names - Vamiq, Elly, Sarim, Shahbaaz ## Inspiration When brainstorming an idea, we concentrated on problems that affected a large population and that mattered to us. Topics such as homelessness, food waste and a clean environment came up while in discussion. FULLER was able to incorporate all our ideas and ended up being a multifaceted solution that was able to help support the community. ## What it does FULLER connects charities and shelters to local restaurants with uneaten food and unused groceries. As food prices begin to increase along with homelessness and unemployment, we decided to create FULLER. Our website serves as a communication platform between both parties. A scheduled pick-up time is inputted by restaurants, and charities are able to easily access a listing of restaurants with available food or groceries for contactless pick-up later in the week. ## How we built it We used React.js to create our website, coding in HTML, CSS and JS, and using MongoDB, bcrypt, Node.js and Express.js (the MERN stack). We also used a backend database. ## Challenges we ran into A challenge that we ran into was communication about how the code was organized. This led to setbacks as we had to fix up the code, which sometimes required us to rewrite lines. ## Accomplishments that we're proud of We are proud that we were able to finish the website. Half our team had no prior experience with HTML, CSS or React; despite this, we were able to create a fair outline of our website. We are also proud that we were able to come up with a viable solution to help out our community that is potentially implementable. ## What we learned We learned that when collaborating on a project it is important to communicate, more specifically about how the code is organized. As previously mentioned, we had trouble editing and running the code, which caused major setbacks. In addition to this, two team members were able to learn HTML, CSS, and JS over the weekend. ## What's next for us We would want to create more pages on the website to have it fully functional, as well as clean up the front end of our project. Moreover, we would also like to look into how to implement the project to help out those in need in our community.
## Inspiration While looking for genuine problems that we could solve, it came to our attention that recycling is actually much harder than it should be. For example, when you go to a place like Starbucks and are presented with the options of composting, recycling, or throwing away your empty coffee, it can be confusing, and for many people it can lead to selecting the wrong option. ## What it does EcoLens uses a cloud-based machine learning webstream to scan an item and tell the user which category the scanned item belongs to, providing them with a short description of the object and updating both their overall count of recyclable vs. unrecyclable items consumed and the number of items they consumed in that specific category (e.g. number of water bottles consumed). ## How we built it This project consists of both a front end and a back end. The backend of this project was created using Java Spring and JavaScript. JavaScript was used in the backend in order to utilize Roboflow and Ultralytics, which allowed us to display the visuals from Roboflow on the website for the user to see. Java Spring was used in the backend for creating a database that consisted of all of the scanned items and tracked them as they were altered (i.e. another item was scanned or the user decided to dump the data). The front end of this project was built entirely with HTML, CSS, and JavaScript. HTML and CSS were used in the front end to display text in a format specific to the user interface, and JavaScript was used to implement the functions (buttons) displayed in the user interface. ## Challenges we ran into This project was particularly difficult for all of us because most of our team consists of beginners, and there were multiple parts of the implementation of our application that no one was truly comfortable with. For example, integrating camera support into our website was particularly difficult, as none of our members had experience with JavaScript, and none of us had fully-fledged web development experience. Another notable challenge came up in the backend of our project when attempting to delete the user history of items used while simultaneously adding them to a larger, "trash can"-like database. From a non-technical perspective, our group also struggled to come to an agreement on how to make our implementation truly useful and practical. Originally we thought to have hardware that would physically sort the items, but we concluded that this was out of our skill range and also potentially less sustainable than simply telling the user what to do with their item digitally. ## Accomplishments that we're proud of Although we can acknowledge that there are many improvements that could be made, such as having a cleaner UI, optimized (fast) usage of the camera scanner, or even better responses for when an item is accidentally scanned, we’re all collectively proud that we came together to find an idea that allowed each of us to not only have a positive impact on something we cared about but to also learn and practice things that we actually enjoy doing. 
## What we learned Building EcoLens let each of us practice technologies we genuinely enjoy while having a positive impact on something we care about, and it showed us how much room there still is to improve things like the UI, the speed of the camera scanner, and the handling of accidentally scanned items. ## What's next for Eco Lens The most effective next course of action for EcoLens is to assess if there really is a demand for this product and what people think about it. Would most people genuinely use this if it was fully shipped? Answering these questions would provide us with grounds to move forward with our project.
## Inspiration Our team wanted to build an application with social impact. We believe that change starts within a community; therefore, we brainstormed ways that people can make a change within their local community. We realized that homelessness and food insecurity are large issues within the Hamilton community, and we thought of ways that people could help reduce this problem. One of our members recalled videos and documentaries online showing supermarkets and restaurants throwing out perfectly good food. Sometimes it would be food which they prepared by mistake, food close to its expiration date, or simply just food with damaged packaging. 1,098+ people in Hamilton are registered as ‘experiencing homelessness and accessing services’. 3,000+ people experience food insecurity per day in Hamilton. 77% of those experiencing homelessness have smartphones. ## What it does We came up with a creative solution, where local businesses (i.e. supermarkets, restaurants, etc.) can post perfectly good food or beverages that are still fit for consumption and that would otherwise be thrown out. Those in the local community who need it can then find these places and pick up items. As a result, FoodCycle provides a green solution to reducing food waste while helping those within the local community. ## How I built it * ReactJS * JavaScript * HTML * Node.js * Google Maps API ## Challenges I ran into We ran into challenges with some of the ReactJS libraries and with styling of some components. However, we were able to overcome many of these problems by finding creative alternatives and solutions. ## Accomplishments that I'm proud of We are proud of our application and the idea. We are happy to create a hack that encourages social change and environmental sustainability. ## What I learned Google Maps API and ReactJS (many of our members are inexperienced in ReactJS). In fact, this is one of our members' first time coding and hacking - needless to say, he was able to learn a lot! ## What's next for FoodCycle We would like to improve this application by adding more advanced features, such as the ability to find providers in our network by searching for a specific food. We would also like to implement a more friendly dashboard for users, and possibly create a native mobile application. Ideally, we would like those in the local community to participate in actively reducing food waste and helping those in need.
partial
View presentation at the following link: <https://youtu.be/Iw4qVYG9r40> ## Inspiration During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group took medication for health-related reasons, and as a result, there are certain external medications that cause negative drug interactions for them. More often than not, one of us is unable to have certain other medications (e.g. Advil, Tylenol) and even certain foods. Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take. ## What it does Our mobile application scans the barcodes of medication and tells the user what the medication is, along with any negative interactions associated with it, to ensure that users don't experience negative side effects of drug mixing. ## How we built it Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done through Java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a Python backend and Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request; the API then scans the barcode and sends the drug information back to the Flutter mobile application. ## Challenges we ran into The consistent challenge that we seemed to run into was the integration between our parts. Another challenge was that one group member's laptop just imploded (and stopped working) halfway through the competition; Windows recovery did not pull through, and the member had to grab a backup laptop and set the entire thing up for smooth coding. ## Accomplishments that we're proud of During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end. Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative. ## What we learned Throughout the project, we gained valuable experience working with various technologies such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All of these were things we had only seen or heard of elsewhere, but learning and subsequently applying them was a new experience altogether. Additionally, throughout the project we encountered various challenges, and each one taught us a new outlook on software development. Overall, it was a great learning experience for us and we are grateful for the opportunity to work with such a diverse set of technologies. ## What's next for Medisafe Medisafe has all three dimensions to expand in, being the baby app that it is. 
Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd also like to have the possibility to take an image from the gallery instead of fresh from the camera. Lastly, we hope to be able to implement settings like a default drug to compare to, dosage dependency, etc.
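To make the scan flow described above more concrete, here is a hedged Python sketch of a Flask endpoint that accepts the photo POST, decodes the barcode, and looks the product up in the CSV produced by the Java scraper. `pyzbar` stands in for whatever decoder the team actually used, and the CSV column names are assumptions.

```python
# Sketch of the Medisafe backend flow: the app POSTs a photo, the server decodes
# the barcode and returns the drug plus known interactions from the CSV the Java
# scraper produced. pyzbar and the CSV column names are illustrative assumptions.
import csv
import io
from flask import Flask, request, jsonify
from PIL import Image
from pyzbar.pyzbar import decode

app = Flask(__name__)

def load_interactions(path="interactions.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        return {row["barcode"]: row for row in csv.DictReader(f)}

DRUGS = load_interactions()

@app.route("/scan", methods=["POST"])
def scan():
    image = Image.open(io.BytesIO(request.files["photo"].read()))
    codes = decode(image)
    if not codes:
        return jsonify({"error": "no barcode found"}), 400
    row = DRUGS.get(codes[0].data.decode("utf-8"))
    if row is None:
        return jsonify({"error": "unknown product"}), 404
    return jsonify({"name": row["name"],
                    "interactions": row["interactions"].split(";")})
```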
## Inspiration Imagine you broke your EpiPen but you need it immediately for an allergic reaction. Imagine being lost in the forest with cut wounds and bleeding from a fall, but having no first aid kit. How will you take care of your health without nearby hospitals or pharmacies? Well, good thing for you, we have **MediFly**!! MediFly is inspired by how emergency vehicles such as ambulances take too long to get to the person in need of aid because of other cars on the road and traffic. Every second spent waiting is risking someone's life. So, in order to combat that issue, we use **drones** as the first emergency responders to send medicine to save people's lives or keep them in a stable condition before human responders arrive. ## What it does MediFly allows the user to request emergency help or medication such as an EpiPen or epinephrine. First you download the MediFly app and create a personal account. Then you can log into your account and use the features when necessary. If you are in an emergency, press the "EMERGENCY" button and a list of common medication options will appear for the person to pick from. There is also an option to search for your needed medication. Once a choice is selected, the local hospital will see the request and send a drone to deliver the medication to the person. Human first responders will also be called. The drone has a GPS tracker and the GPS location of the person it needs to send the medication to. When the drone is within close distance of the person, a message is sent telling them to go outside to where the drone can see them. The camera uses facial recognition to confirm the person is indeed the registered user who ordered the medication. This level of security is important to ensure that the medication is delivered to the correct person. When the person is confirmed, the lid of the medication-holding compartment is opened so the person can take their medication. ## How we built it On the software side, the front end of the app was made with React, coded in JavaScript, and the back end was made with Django in Python. The text messages work through Twilio. Twilio is used to tell the user that the drone is nearby with the medication ready to hand over. It sends a message telling the person to go outdoors where the drone will be able to find them. On the hardware side, there are many different components that make up the drone. There are four motors, four propeller blades, an electronic speed controller, a flight controller, and 3D-printed parts such as the camera mount, medication box holder, and some components of the drone frame. Besides this, there is also a Raspberry Pi SBC attached to the drone for controlling the on-board systems, such as the door that unloads the cargo bay, and for streaming the video to a server to be processed by the face recognition algorithm. ## Challenges we ran into Building the drone from scratch was a lot harder than we anticipated. There was a lot of setting up that needed to be done for the hardware, and the building aspect was not easy. It consisted of a lot of taking apart, rebuilding, soldering, cutting, hot gluing, and rebuilding. Some of the video streaming systems did not work well at first due to CORS blocking the requests, given that we were using two different computers to run two different servers. Traditional geolocation techniques often take too long - as such, we needed to build a scheme to cache a user's location before they decided to send a request, to prevent lag. 
Additionally, the number of pages required to build, stylize, and connect together made building the site a notable challenge of scale. ## Accomplishments that we're proud of We are extremely proud of the way the drone works and how it's able to move at quick, steady speeds while carrying the medication compartment and battery. On the software side, we are super proud of the facial recognition code and how it's able to tell the difference between different peoples' faces. The front and back end of the website/app is also really well done. We first made the front end UI design on Figma and then implemented the design on our final website. ## What we learned For software we learned how to use React, as well as various user authorization and authentication techniques. We also learned how to use Django. We learnt how to build an accurate, efficient and resilient face detection recognition and tracking system to make sure the package is always delivered to the correct person. We experimented with and learned various ways to stream real-time video over a network, also over longer ranges for the drone. For hardware we learned how to set up and construct a drone from scratch! ## What's next for MediFly In the future we hope to add a GPS tracker to the drone so that the person who orders the medication can see where the drone is on its path. We would also add Twilio text messages so that when the drone is within a close radius to the user, it will send a message notifying the person to go outside and wait for the drone to deliver the medication.
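The proximity text described above is straightforward to sketch: compare the drone's position against the cached user location and, within some radius, fire an SMS through Twilio. The 100 m radius, credentials, and phone numbers below are placeholders, not values from the project.

```python
# When the drone gets within an assumed 100 m of the cached user location,
# send the "come outside" text via Twilio. Credentials and numbers are
# placeholders; the haversine math is standard.
import math
from twilio.rest import Client

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def maybe_notify(drone_pos, user_pos, user_phone, radius_m=100):
    if distance_m(*drone_pos, *user_pos) > radius_m:
        return False
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
    client.messages.create(
        to=user_phone,
        from_="+15550001111",  # placeholder Twilio number
        body="Your MediFly drone is arriving - please step outside so it can find you.")
    return True
```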
## Inspiration This project was inspired by a team member's family: his grandparents always have to take medicine but often forget about it, and so does his mom. Although his mom is very young, in today's fast-paced society people often forget to do small things like taking their pills. Because of this, we decided to develop a pill reminder, but then we were also inspired by a TikTok video about a person with Parkinson's disease who couldn't pick up an individual pill from the container. In the end, we decided to create a project that solves the problem of people forgetting to take their pills while also helping people easily take individual pills. ## What it does Our project, the Delta Dispenser, uses an app that communicates with a database to set up specific times to alert users to take their pills, as well as to track their pill information in the app. The Delta Dispenser hardware alerts the user when the scheduled time is reached and automatically dispenses the correct amount of pills into the container. ## How we built it The front end of the app is made with **Flutter**, and the app communicates with a **Firebase Realtime Database** to store medicinal and scheduling information for the user. The physical component uses an **embedded microcontroller**, an ESP32, which we chose for its ability to connect to WiFi and sync with the Firebase database to know when to dispense the pills. ## Challenges we ran into The time constraint was definitely a big challenge, and we accounted for it by deciding which features were most important in emphasizing our main idea for this project. These parts include the mechanical indexer for the pills, the interface the user would interact with, and how the database would look for communication between the app and the embedded device. ## Accomplishments that we're proud of We are most proud of how this project utilized many different aspects of engineering, from mechanical to electrical and software. Our team did a really good job of communicating throughout the design process, which made integration at the end much easier. ## What we learned During this project, we learned how to use Flutter to create a mobile app as well as how Firebase works. Although we only picked up a few new skills, they will be very useful in the future, and most importantly we were able to build upon the skills we already had. For example, we are now able to develop hardware that can communicate through Firebase. ## What's next for Delta Dispenser The next steps for the Delta Dispenser include building a fully 3D printed prototype, along with the control box and hopper as shown in the CAD renders. On the software side, we would also like to add the ability for more complicated drug scheduling, while keeping the UI easy enough for anyone to set up. Having another portal that allows a doctor to directly input the information themselves is also a feature we are interested in having.
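The real app writes its schedule from Flutter, but as an illustrative sketch of the data flow (app writes a schedule to the Firebase Realtime Database, the ESP32 syncs with it to know when to dispense), here is what that write could look like from Python. The database URL, paths, key names, and credentials file are assumptions, not the team's actual schema.

```python
# Illustrative only: writing a medication schedule to the Firebase Realtime
# Database so an embedded device can read it and dispense at the right time.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")           # placeholder key file
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://delta-dispenser.firebaseio.com"     # placeholder URL
})

schedule_ref = db.reference("users/demo_user/schedule")         # assumed path
schedule_ref.set({
    "morning": {"time": "08:00", "pill": "Vitamin D", "count": 1},
    "evening": {"time": "20:00", "pill": "Metformin", "count": 2},
})
```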
winning
## Inspiration The pandemic has ruined our collective health. Since its onset, rates of depression amongst teens and adults alike have risen to unprecedented levels. In a world where we are told, for our own safety, to keep distance from those who we cherish, how do we maintain these relationships and, with them, our wellbeing? These problems were the inspiration for our solution – Dance Party. What better way to connect with loved ones and promote physical and mental health than with a little dance and song? Join us as we use cutting edge technologies to build a brighter future, together. ## What it does Dance Party is a web application for anyone who is looking to have some fun dancing with friends. It is a cause for good times and good laughs. There is also a little bit of competition for when things get serious. The application is super user-friendly. Once you launch Dance Party and join with the same meeting ID, you can immediately pick a "Dance Leader" to lead the dance and choose your favorite songs, and then the fun begins. The "Dance Leader" leads the choreography and the rest of the group has to match what the "Dance Leader" is doing. You accumulate points based on how closely you are following the routine in real time. At the end of the time limit, you get to see how you placed amongst your friends, and do it all over again! ## How we built it ### PoseNet Training Model We utilized a TensorFlow.js machine learning extension, called PoseNet, to generate the rigid body skeletons of the people in Dance Party. PoseNet can take a picture of individuals and return data points in the form of x and y coordinates of the 17 major body parts. Using this data, we ran our scoring algorithm to measure the closeness of each user's pose to the host's pose. Moreover, to ensure the best performance of PoseNet, we performed cross-validation by running the model through various simulated trials and fine-tuned the hyperparameters to ensure the highest average confidence scores. ### Similarity Score and Scoring Algorithm For the scoring algorithm, we had to compare the rigid body skeletons of the client and the host. We superimposed and normalized each of the body parts through linear algebra, more specifically, through a linear transformation and a favorable change of basis, ensuring that we accounted for potential inversion of the camera and varying lengths of body parts. Once we had each body part superimposed, we simply compared the difference in degrees of each of the skeleton lines and generated a similarity score from 0 to 1. This frame-by-frame aggregate score is then used to find the closest match within a time range, and a score is then given to users who were the closest to the host's dance moves. To factor in the issue of time lag between the host performing a move and the client copying the move, we cached the last 25 frames of data and used them to get a max score across the different poses. This allowed us to still credit the client, even if they were half a second behind in trying to copy the dance move. ### Frontend To build the frontend of our app we used React, as several of our group members had heard a lot of the hype around it but had yet to try it. We used React to build the room selection page, where users can enter a room ID and username, as well as the Dance Room layout.
We utilized WebRTC for the group streaming of video, assisted by the Agora API, which handled a lot of the low-level work necessary for group calling, including built-in functionality such as video routing between different networks. We then intercepted the video stream and passed it to the PoseNet TensorFlow model, which handled single-pose detection. In every Dance Room, there is one host and many participants. Hosts have access to additional settings: the control to start and pause the Dance Game and the ability to reset the score of all members. If we had more time, we would have liked to flesh this out and create mini games that the host could choose for their room. ## Challenges we ran into One of the biggest challenges we ran into was in our implementation of sockets. Sockets are a really powerful way to provide real-time updates between a client (website) and the server. However, we faced a lot of 'finicky' debugging situations where our server was not correctly identifying who was trying to connect to it. It turned out to be the result of a 5-second default timeout value in Socket.IO combined with the process-intensive task of pose detection. Recognizing this bug was one of the big 'euphoria' moments of the project. ## Accomplishments that we're proud of We're super proud of our Similarity Score algorithm that we made from scratch. We were a little nervous when putting all of our pieces together, as things always work differently than they theoretically do, but we were pleasantly surprised when we first began testing to see that it worked at a high level of accuracy even as a baseline, before fine-tuning it. It is always great when things you conceptualize in your head come to fruition, and this project is a prime example of that. Going in, we knew this idea was going to be tough to implement, but after two all-nighters and a lot of confidence in each other, we were able to transform our imaginations into reality. ## What we learned We learned how to use websockets in conjunction with React to provide real-time updates to all connected users in a given socket room. Additionally, we learned the greater lesson of the power of planning ahead. Our development process would have been a lot smoother if we had planned out, using diagrams, how the parts would mesh together. ## What's next for Dance Party * Add YouTube/external dance routine functionality + The core functionality exists; it would be well within the realm of reason to add a YouTube stream and use our detection and similarity algorithms on it. * Improve performance * Host publicly
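To make the limb-angle comparison behind the Similarity Score more concrete, here is a minimal Python sketch of that kind of scoring. The real algorithm also normalizes limb lengths, handles camera mirroring, and caches the last 25 frames; the limb list and keypoint format below are assumptions based on PoseNet's 17 keypoints.

```python
# Compare two poses by the orientation of a few limb segments and map the
# average angular difference to a 0..1 similarity score.
import math

LIMBS = [("leftShoulder", "leftElbow"), ("leftElbow", "leftWrist"),
         ("rightShoulder", "rightElbow"), ("rightElbow", "rightWrist"),
         ("leftHip", "leftKnee"), ("rightHip", "rightKnee")]

def limb_angle(p1, p2):
    """Orientation of the segment p1 -> p2 in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def similarity(host_pose, client_pose):
    """host_pose/client_pose: dicts of keypoint name -> (x, y); returns 0..1."""
    diffs = []
    for a, b in LIMBS:
        d = abs(limb_angle(host_pose[a], host_pose[b]) - limb_angle(client_pose[a], client_pose[b]))
        diffs.append(min(d, 360 - d))           # wrap-around angular difference
    return 1 - (sum(diffs) / len(diffs)) / 180  # 180 degrees apart = score of 0
```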
## Inspiration The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to; commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet as not to become a burden on those around them. Over time, this takes a toll on one’s well being, so we decided to tackle this issue in a creative yet simple way. ## What it does VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match 2 users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others. ## How we built it We began with building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that could detect common words from the user inputs, and finally pair up two users in the queue to start messaging. Then we integrated, tested, and refined how the app worked. ## Challenges we ran into One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work we finally got the API working with Axios. ## Accomplishments that we're proud of In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of minds, it's a good idea to have some resources available to them. ## What we learned Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are connotated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result! ## What's next for VenTalk There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even a webapp version. We also want to add more personal features, such as a personal locker of stuff that makes you happy (such as a playlist, a subreddit or a netflix series).
## Inspiration We spend roughly [1.5-2 hours](http://www.huffingtonpost.com/entry/smartphone-usage-estimates_us_5637687de4b063179912dc96) staring at our **phone screens** every day. It's not uncommon to hear someone complain about neck pain or to notice the progressive slouching and hunchback developing among adults and teenagers today. The common way of holding our phones leads to an unattractive "cellphone slump", and it's no wonder, seeing as there can be as much as 60 lbs of pressure on a person's neck when they're staring down at their phone. As a big believer in correct anatomical positioning and posture, and the effects they can have on one's confidence and personality, I wanted to develop an app to help me remember to keep my phone at eye level and steer clear of the cellphone slump! ![Pressure on Neck](https://cdn2.omidoo.com/sites/default/files/imagecache/full_width/images/bydate/201508/headtiltcopy2.png) ## What it does The Posture app is pretty simple. Just like the blue light filters such as f.lux and Twilight that we have on our phones, Posture runs in the background and gives you a small indicator if you've been holding your phone in a risky position for too long. ## How I built it With love, Android and caffeine! ## Challenges I ran into Understanding the Sensor API was definitely a challenge, but my biggest difficulty was learning how to run Services, background threads and processes without multiple Activities to rely on. ## Accomplishments that I'm proud of It's a fairly simple app, but I'm hopeful (and proud?) that it will help people who want to correct or improve their posture! ## What I learned I still have a lot to learn :) ## What's next for Posture Soon to be released in beta on the Play Store :)
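The actual app is written for Android, but the core idea (estimate the phone's tilt from accelerometer readings and warn after a sustained "risky" head-down position) can be sketched in a few lines of Python. The 45-degree threshold, the 60-second window, and the axis convention (z axis pointing out of the screen, reading about +9.81 m/s² when the phone lies flat) are illustrative assumptions.

```python
# Flag a warning when the phone has been held roughly face-up / looked down at
# for longer than MAX_BAD_SECONDS.
import math, time

RISKY_BELOW_DEG = 45     # screen close to horizontal -> user is looking down
MAX_BAD_SECONDS = 60
bad_since = None

def on_accelerometer(ax, ay, az):
    """Call with each sample (m/s^2); returns True when a posture warning is due."""
    global bad_since
    # Angle between the screen normal and vertical: ~0 deg when lying flat,
    # ~90 deg when the phone is held upright at eye level.
    elevation = math.degrees(math.acos(max(-1.0, min(1.0, az / 9.81))))
    if elevation < RISKY_BELOW_DEG:
        bad_since = bad_since or time.time()
        return time.time() - bad_since > MAX_BAD_SECONDS
    bad_since = None
    return False
```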
winning
## Inspiration <http://www.firmoo.com/answer/tag_img/blurry-eyes.jpg> You know that someone is clearly in front of you. Who is it? You know that both Emily and Jane have shortish hair and a roundish face, but the world is much too blurry for you to correctly discern what is happening. You wish there were a way for you to know instantly who was in front of you without always having to ask. But instead, you're left in constant darkness about what is going on around you. Because of that need, our team developed a cool and interesting way to connect the visually impaired with the world around them using Android phone technology and Facebook. ## What it does Spot-A-Friend allows visually impaired members of the community to better see the world around them. The process is simple. You open up the Spot-a-Friend app and place the phone into the VR headset. From there, the view the user sees is that of the Android camera. The person can then look around and, when they feel that there are people in the area whom they can't identify, they can press a button on the headset and potentially utilize a voice command: "Spot a friend". From there, the friend is identified along with their general location. ## How we built it We built it by interfacing the Android app with a computer back-end system. The Android app takes care of capturing the image via a button command. The entire system is attached to a VR headset and mounted in the eye region. It interfaces with a Python Flask back end that utilizes socket technology. It re-purposes Facebook's tagging algorithm to be used for facial recognition. Then, the back end returns information on what the algorithm was able to identify. ## Challenges we ran into Not enough food and not enough sleep! Also, trying to do facial detection while streaming the camera was difficult. Syncing up speech recognition and text-to-speech functions also posed a great challenge! ## Accomplishments that we're proud of Communication between the back end and the Android app was difficult, so keeping both systems in sync and having the back end bypass CSRF on Facebook's domain and internal endpoints was a huge success! ## What we learned Teamwork and how to balance work and play :) ! ## What's next for Spot-a-Friend Use Microsoft's Azure services to incorporate sentiment analysis so that the user can understand the context and emotion of the people around them. Additionally, we would like to iron out the kinks with our speech recognition feature and allow more commands from it. Perhaps we can even reduce the form factor of the entire device so that people can wear it around with ease.
## Inspiration Inspired by expensive and bulky machinery, we set out to find a way for individuals suffering from Parkinson's disease and other hand tremor conditions to have an affordable and easy-to-use therapy to manage their conditions. We built a game that gives users the task of drawing a straight line on their computer or television. Their hand movements are tracked using a Leap Motion and a Pebble smartwatch. While playing the game, we track statistics and display easy-to-read graphs to track progress over time. The user can then see how they improve over time, providing both encouragement and motivation to continue their treatment. Built using a wide variety of technologies, we believe that we have created a fun, easy-access method of treatment for those who need it.
## Vision vs Reality We originally had a much more robust idea for this hackathon: a computer vision doorbell to figure out who is at the door, without needing to go to the door. The plan was to use an Amazon Echo Dot to connect our vision solution, a Logitech C270 HD webcam, with our storage, a Firebase database. This final integration step between the Echo Dot and the OpenCV services ended up being our downfall, as a never-ending wave of vague errors kept getting thrown and we failed to learn how to swim. Instead of focusing on our downfalls, we want to show the progress that was made in the past 36 hours, which we believe shows the potential behind what our idea sought to accomplish. Using OpenCV 3 and Python 3, we created multiple vision solutions such as motion detection and image detection. Ultimately, we decided that a facial recognition program would be ideal for our design. Our facial recognition program includes a vision model trained on both Jet's and my faces, as well as an unknown catch-all class that aims to cover any unknown or masked faces. While not the most technically impressive, this does show the solid base work and the right steps that we took toward our initial idea. ## The Development Process These past 36 hours presented us with a lot of trials and tribulations, and it would be a shame if we did not mention them considering the result. In the beginning, we considered using the RaisingAI platform for our vision rather than OpenCV. However, when we attended their workshop, we saw that it relied on a Raspberry Pi, which we originally wanted to avoid using due to our lack of server experience. Also, the performance seemed to vary and it did not seem aimed at facial recognition. We planned and were excited to use an NVIDIA Jetson due to how great the performance is, and we saw that the NVIDIA booth was using a Jetson to run a resource-intensive vision program smoothly. Unfortunately, we could not get the Jetson set up due to a lack of a monitor. After not being able to successfully run the Jetson, we reluctantly switched to a Raspberry Pi, but we were pleasantly surprised at how well it performed and how easy it was to set up without a monitor. This stage is also when we started learning how to develop for the Amazon Echo Dot. Since this was our first time ever using an Alexa-based device, it took a while to develop even a simple Hello, World! application. However, we definitely learned a lot about how smart devices work and got to work with many AWS utilities as a part of this development process. As a team, we knew that integrating the vision and Alexa components would not be an easy task, even at the start of the hackathon. Neither of us predicted just how difficult it would actually be. As a result, this vision-Alexa integration took up a majority of our overall development time. We also took on the task of integrating Firebase for storage at this step, but since this is the one technology in this project that we had past experience with, we thought it would be no problem. ## What We Built At the end of the day (...more like morning), we were able to create a simple Python program and dataset that allows us to show off our base vision module. It comprises 3 different programs: facial detection from a custom dataset of images, an ML model to associate facial features with a specific person, and the application of that model to a live webcam feed. Additionally, we were also able to create our own Alexa skill that allowed us to dictate how we interact with the Echo.
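A minimal sketch of that three-part pipeline (detect faces in a custom image set, train a recognizer, apply it to a live webcam feed) using OpenCV's LBPH recognizer from opencv-contrib-python is shown below. The dataset layout (`faces/<name>/*.jpg`) and the confidence threshold are assumptions for illustration, not the team's actual code.

```python
import glob, os
import cv2
import numpy as np

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

# 1) Build training data from a folder of labeled face images.
names, samples, labels = [], [], []
for label, person_dir in enumerate(sorted(glob.glob("faces/*"))):
    names.append(os.path.basename(person_dir))
    for path in glob.glob(os.path.join(person_dir, "*.jpg")):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
            samples.append(cv2.resize(gray[y:y + h, x:x + w], (200, 200)))
            labels.append(label)

# 2) Train the recognizer on the labeled face crops.
recognizer.train(samples, np.array(labels))

# 3) Run it against the live webcam feed.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(cv2.resize(gray[y:y + h, x:x + w], (200, 200)))
        who = names[label] if confidence < 80 else "unknown"   # threshold is a guess
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, who, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("doorbell", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```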
## Accomplishments that I'm proud of * Learning how to use/create Amazon Skills * Getting our feet wet with an introduction to Raspberry Pi * Creating our own ML model * Utilizing OpenCV & Python to create a custom vision program ## Future Goals * Figure out how to integrate Alexa and Python programs * Seek mentor help in a more relaxed environment * Use a NVIDIA Jetson * Create a 3D printed housing for our product / overall final product refinement
partial
## Inspiration The inspiration for our project mainly stems from past as well as recent events involving sexual harassment, robbery, violence and several other forms of misconduct. The rate at which these misconducts happen is enormously high all over the world, and as students we receive 2-3 emails per day on average alerting us about robberies. This got us thinking about a plausible solution for detecting such crimes in real time, helping the victims before damage is done while alerting others in the nearby area. ## What it does The objective of our project is to stream the videos recorded by surveillance cameras in real time as a dataset, feed it into our system, apply a deep learning model to detect suspicious activity or possible misconduct, and send alerts to nearby safety departments. ## How we built it 1. The deep learning model we thought best suited to detecting suspicious behavior was based on Convolutional Neural Networks, since a CNN makes the explicit assumption that its input is an image, and video can be broken down into image frames. 2. The first neural network we used was convolutional, to extract high-level features and reduce input complexity. For this, we used a pretrained model from Google called Inception. Since Inception is trained on ImageNet, which categorizes images into basic classes, we further used this model to apply the technique of transfer learning. This technique classifies the videos into one of three categories, namely: criminal activity, potentially suspicious, or safe. ## Challenges we ran into The biggest hurdle we faced was using Inception to implement our first network. We figured out two approaches: one was to construct the model using TensorFlow, but we noticed the documentation was lacking; the second was to use an existing API, but it didn't serve our purpose of getting features from the input. ## Accomplishments that we're proud of This was our first time dabbling with deep learning models, and we were able to get acquainted with which models are best suited to solving our problem, as there are plenty of algorithms for video/image classification but choosing the best-suited model for our specific problem is a crucial step. ## What we learned We learned tons about deep learning models, such as CNNs, RNNs (LSTM) and transfer learning, as well as Python libraries like TensorFlow. ## What's next for Eagle-eye This was an ambitious project to implement in a 36-hour hackathon. We were able to work out the methodology to solve the problem, but were short on time to provide a fully functional solution. The next step for Eagle-eye is to implement a system that can detect safety threats in videos in real time and achieve our objective.
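For readers unfamiliar with the transfer-learning setup described above, here is a hedged Keras sketch: reuse a pretrained Inception backbone and attach a small classification head for the three categories. The input size, optimizer, and layer sizes are illustrative choices, not the team's exact configuration.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Pretrained ImageNet features, with the classification top removed.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False  # keep the ImageNet features frozen for transfer learning

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # criminal / potentially suspicious / safe
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Each video is split into frames; frame-level predictions are then aggregated
# into a per-video label.
```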
## Inspiration When police are looking for a criminal through monitors and camera recordings, the information is scattered across separate sources. A criminal could appear on different cameras throughout a building or city. This inspired us to create an efficient application to trace and predict the future direction of a criminal. ## What it does The project allows the user to upload video files and choose the number of cameras and the precision level. It then analyzes whether the elements the user is looking for appear in the video file. A default map of the building has been constructed using React. The user can design a customized two-dimensional map for every place where a camera is located. Our app traces the route of the element targeted by the user, and the times at which each camera captures the element are presented in a table for further investigation. ## How I built it The app is based on React JavaScript, using the Clarifai API to recognize and trace a target in real time. The tracing result is shown on a descriptive map: a line indicates the route of the target along with the times it is observed by each camera. ## Challenges I ran into We encountered a lot of challenges. The most difficult one was detecting the required element in video. We started by detecting the required elements in a picture, then applied the same methodology to video by dividing the video into frames. In the end, we were able to detect whether the person appeared in the video. ## Accomplishments that I'm proud of The accomplishment I'm proud of is our implementation of the website using React JavaScript, which combined our efforts in applying the Clarifai API, uploading video, deciding whether there is a person in the video, and tracing that person's path. ## What I learned We have learned so much from this experience. All of us were able to learn React JavaScript from scratch and develop a project with it. We enjoyed working in a team environment where everyone participated in the project responsibly. All the suggestions we received were exceptionally valuable. ## What's next for Path\_Tracing There are a few things we can improve in the future: 1. Single target: we can add another mode to detect two or more types of objects, as long as the API supports it. 2. Specified target: in the future, if we have enough resources, we can apply customized training to the user's own model to find a specific target. This can be achieved by training the API's custom module. For example, we could use this to find a lost person or child by accessing every camera in the city and using the camera records to trace the route of the person. This relies on a large number of pictures of the targeted object. 3. Smart distinguishing: we will combine the results of different modules to distinguish the target's details. This can greatly improve the actual precision. If the user inputs a complicated item which does not exist in the API's database, our app will use the item's description, such as color, shape and pattern, to search for close matches. 4. We want our app not only to indicate the route of the targeted object under the cameras, but also to give a possible/shortest route for the target when it is moving in an unmonitored area. We can do this by embedding Google Maps or a detailed building blueprint in our application.
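A minimal Python sketch of the frame-by-frame approach described above follows. The `detect_target()` helper is hypothetical; it stands in for the Clarifai prediction call and should return True when the searched-for person or object is recognized in a frame.

```python
import cv2

def detect_target(frame) -> bool:
    """Hypothetical stand-in for the Clarifai API prediction call."""
    raise NotImplementedError

def sighting_times(video_path, camera_id, sample_every=15):
    """Return (camera_id, timestamp_seconds) for sampled frames where the target appears."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    sightings, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0 and detect_target(frame):
            sightings.append((camera_id, index / fps))
        index += 1
    cap.release()
    return sightings
```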
## The Gist We combine state-of-the-art LLM/GPT detection methods with image diffusion models to accurately detect AI-generated video with 92% accuracy. ## Inspiration As image and video generation models become more powerful, they pose a strong threat to traditional media norms of trust and truth. OpenAI's Sora model, released in the last week, produces extremely realistic video to fit any prompt, and opens up pathways for malicious actors to spread unprecedented misinformation regarding elections, war, etc. ## What it does BinoSoRAs is a novel system designed to authenticate the origin of videos through advanced frame interpolation and deep learning techniques. This methodology is an extension of the state-of-the-art Binoculars framework by Hans et al. (January 2024), which employs dual LLMs to differentiate human-generated text from machine-generated counterparts based on the concept of textual "surprise". BinoSoRAs extends this idea to the video domain by utilizing **Fréchet Inception Distance (FID)** to compare the original input video against a model-generated video. FID is a common metric which measures the quality and diversity of images using an Inception v3 convolutional neural network. We create the model-generated video by feeding the suspect input video into a **Fast Frame Interpolation (FLAVR)** model, which interpolates every 8 frames given start and end reference frames. We show that this interpolated video is more similar (i.e. "less surprising") to authentic video than to artificial content when compared using FID. The resulting FID + FLAVR two-model combination is an effective framework for detecting generated video such as that from OpenAI's Sora. This innovative application enables a root-level analysis of video content, offering a robust mechanism for distinguishing between human-generated and machine-generated videos. Specifically, by using the Inception v3 and FLAVR models, we are able to look deeper into the shared training-data commonalities present in generated video. ## How we built it Rather than simply analyzing the outputs of generative models, a common approach for detecting AI content, our methodology leverages patterns and weaknesses that are inherent to the common training data necessary to make these models in the first place. Our approach builds on the **Binoculars** framework developed by Hans et al. (Jan 2024), which is a highly accurate method of detecting LLM-generated tokens. Their state-of-the-art LLM text detector makes use of two assumptions: simply "looking" at text of unknown origin is not enough to classify it as human- or machine-generated, because a generator aims to make differences undetectable. Additionally, *models are more similar to each other than they are to any human*, in part because they are trained on extremely similar massive datasets. The natural conclusion is that an observer model will find human text to be very *perplexing* and surprising, while an observer model will find generated text to be exactly what it expects. We used the Fréchet Inception Distance between the unknown video and the interpolated generated video as a metric to determine if video is generated or real. FID uses the Inception score, which calculates how well the top-performing classifier Inception v3 classifies an image as one of 1,000 objects.
After calculating the Inception score for every frame in the unknown video and the interpolated video, FID calculates the Fréchet distance between these Gaussian distributions, which is a high-dimensional measure of similarity between two curves. FID has previously been shown to correlate extremely well with human recognition of images, as well as to increase as expected with visual degradation of images. We also used the open-source model **FLAVR** (Flow-Agnostic Video Representations for Fast Frame Interpolation), which is capable of single-shot multi-frame prediction and reasoning about non-linear motion trajectories. With fine-tuning, this effectively served as our generator model, which created the comparison video necessary for the final FID metric. With an FID threshold distance of 52.87, the true negative rate (real videos correctly identified as real) was found to be 78.5%, and the false positive rate (real videos incorrectly identified as fake) was found to be 21.4%. This computes to an accuracy of 91.67%. ## Challenges we ran into One significant challenge was developing a framework for translating the Binoculars metric (Hans et al.), designed for detecting tokens generated by large-language models, into a practical score for judging AI-generated video content. Ultimately, we settled on our current framework of utilizing an observer and a generator model to get an FID-based score; this method allows us to effectively determine the quality of movement between consecutive video frames by leveraging the distance between image feature vectors to classify suspect images. ## Accomplishments that we're proud of We're extremely proud of our final product: BinoSoRAs is a framework that is not only effective, but also highly adaptive to the difficult challenge of detecting AI-generated videos. This type of content will only continue to proliferate across the internet as text-to-video models such as OpenAI's Sora get released to the public: in a time when anyone can fake videos effectively with minimal effort, these kinds of detection solutions and tools are more important than ever, *especially in an election year*. BinoSoRAs represents a significant advancement in video authenticity analysis, combining the strengths of FLAVR's flow-free frame interpolation with the analytical precision of FID. By adapting the Binoculars framework's methodology to the visual domain, it sets a new standard for detecting machine-generated content, offering valuable insights for content verification and digital forensics. The system's efficiency, scalability, and effectiveness underscore its potential to address the evolving challenges of digital content authentication in an increasingly automated world. ## What we learned This was the first-ever hackathon for all of us, and we all learned many valuable lessons about generative AI models and detection metrics such as Binoculars and Fréchet Inception Distance. Some team members also got new exposure to data mining and analysis (through data-handling libraries like NumPy, PyTorch, and TensorFlow), in addition to general knowledge about processing video data via OpenCV. Arguably more importantly, we got to experience what it's like working in a team and iterating quickly on new research ideas. The process of vectoring and understanding how to de-risk our most uncertain research questions was invaluable, and we are proud of our teamwork and determination that ultimately culminated in a successful project.
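For reference, a sketch of the Fréchet distance computation between two sets of Inception feature vectors (one per frame of the suspect video and of the FLAVR-interpolated video) is given below. Feature extraction itself is omitted; the decision direction follows the description above (authentic video tends to sit closer to its own interpolation), and the 52.87 threshold is the value reported in this writeup.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """feats_* are (num_frames, feature_dim) arrays of Inception activations."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real          # drop tiny imaginary parts from sqrtm
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

def looks_generated(suspect_feats, interpolated_feats, threshold=52.87):
    """Larger distance from its own interpolation suggests the video is generated."""
    return frechet_distance(suspect_feats, interpolated_feats) > threshold
```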
## What's next for BinoSoRAs BinoSoRAs is an exciting framework that has obvious and immediate real-world applications, in addition to more potential research avenues to explore. The aim is to create a highly accurate model that can eventually be integrated into web applications and news articles to give immediate and accurate warnings and feedback about AI-generated content. This can mitigate the risk of misinformation in a time when anyone with basic computer skills can spread malicious content, and our hope is that we can build on this idea to prove our belief that, despite its misuse, AI is a fundamental force for good.
losing
## Inspiration Education is extremely important, especially for kids who are still developing their basic skills. When it comes to school, many kids dislike the idea of doing homework and solving math problems, but kids love to play games. Introducing Excel Beyond, a game where you solve math questions to earn points towards the growth of your tree and your education. ## What it does Excel Beyond is an Android app that produces short math problems for children at different grade levels from K-6. The math problems that kids have to solve can be simply numbers to be added, subtracted, multiplied, or divided. As the difficulty increases, problems grow to include larger numbers, pictures, and word problems. With each question that the child finishes, they accumulate points which contribute to the overall growth of a plant that they can view in the app. ## How I built it I built Excel Beyond using Android Studio. I wrote the backend using Java. The stages of growth of the plant were obtained from sprite images found online. ## Accomplishments that I'm proud of I am proud that I could make a functioning game that a kid would be able to play on their Android device. ## What's next for Excel Beyond The next steps for Excel Beyond are to create a login system with usernames and passwords and a scoreboard for the kids to compete with one another.
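The real app is written in Java for Android, but as a tiny illustrative sketch of how problem difficulty might scale with grade level (bigger numbers and more operations as the grade increases), consider the following. The per-grade ranges are made-up example values, not the app's actual rules.

```python
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def make_problem(grade: int):
    """Return (question_text, answer) scaled roughly to a K-6 grade level."""
    high = 10 * max(1, grade)            # larger numbers for higher grades
    ops = "+-" if grade < 3 else "+-*"   # multiplication only from grade 3 up
    op = random.choice(ops)
    a, b = random.randint(0, high), random.randint(0, high)
    if op == "-":
        a, b = max(a, b), min(a, b)      # keep answers non-negative for young kids
    return f"{a} {op} {b} = ?", OPS[op](a, b)
```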
## Inspiration I was inspired by the website that we all know and love, CoolMathGames.com. While it seems like there are multiple apps designed to elevate our brain and cognitive skills, I felt like going back to the roots, the origin, of it all. ## What it does Through engaging mini-games, the player is able to accumulate points and master mathematical subjects with these math-centered challenges. ## How we built it I built a design of what the app would look like in Figma. ## Challenges we ran into It was difficult to work by myself with so much to do. ## Accomplishments that we're proud of I am proud of building the cute otter and of learning how to use Figma, because I typically use Canva. ## What we learned I learned that I should find a reliable team beforehand. ## What's next for Math Madness Once this app goes live, I would hope to increase its range by adding more levels, adding more academic subjects so that it is not only math, and including an explanation tab.
## Inspiration We wanted to make an app that helped people to be more environmentally conscious. After we thought about it, we realised that most people are not environmentally conscious because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect to make a complete budgeting app. ## What it does Our app allows users to log in, and it then retrieves user data to visually represent the most interesting data from that user's financial history, as well as their utilities spending. ## How we built it We used HTML, CSS, and JavaScript as our front end, and then used an Arduino to get light sensor data and Nessie to retrieve user financial data. ## Challenges we ran into Seamlessly integrating our multiple technologies, and formatting our graphs in a way that is both informational and visually attractive. ## Accomplishments that we're proud of We are proud that we have a finished product that does exactly what we wanted it to do and are proud to demo. ## What we learned We learned about making graphs using JavaScript, as well as using Bootstrap in websites to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app. ## What's next for Budge We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to make our app give personal recommendations catered to an individual's personal spending.
losing
## Inspiration We have to wait 4-5 years to elect an official or person in power who may potentially not achieve anything meaningful during their time. This simple addition to Envel puts the power and ownership of the community back into their hands. A single dollar may not achieve much, but a million single dollars will. ## What it does Provides an option for Envel users to donate residual cash from their daily budget to a local charity. ## How we built it Our concept design was built with Photoshop and Premiere Pro. Photoshop allowed us to easily manipulate and experiment with potential UI elements. Then we brought our static ideas to life by animating them in Premiere Pro. This allowed us to preview the intended user experience. ## Challenges we ran into Designing an attractive and alluring interface was our primary challenge. Incentivizing users to participate in activities charitable in nature is inherently difficult. As a result, we ran through many designs until we felt the entire process was as frictionless and as inviting as possible. ## Accomplishments that we're proud of Our team was able to conceptualize an application feature and effectively materialize the concept within our very strict time constraints. ## What we learned What businesses do is not single-faceted. In today's ever-evolving industry, businesses must adapt and grasp these facets in order to survive. ## What's next for The power of the dollar Currently, our design is still only a concept. Our next step would be to work with backend developers to actually program our design into the application and tackle any potential problems that we may encounter.
## Inspiration This project was inspired by the Professional Engineering course taken by all first-year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice given by St. Peter's Residence at Chedoke, a long-term care residence located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a fall alarm to notify the nurses in the event that one of the residents has fallen. ## What it does It notifies nurses if a resident falls or stumbles via a push notification sent directly to the nurses' phones, or ideally to a nurses' station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has suffered a fall. ## How we built it We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to collect information about the movement of a resident's foot and determine if the movement mimics the pattern of a typical fall. Once a typical fall has been read by the accelerometer, we use Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident. ## Challenges we ran into While developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs position the feet as if the resident is lying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax. To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in foot movement that are consistent with a fall. The two main patterns we focused on were: 1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground (critical alert sent to the emergency contact). 2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground; warning alert sent to the emergency contact). ## Accomplishments that we're proud of We are proud of developing an algorithm that consistently communicates with an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole. ## What we learned We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process. ## What's next for VATS Next steps would be to optimize our circuits by using equivalent components in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas of a shoe where we could store our system (such as the tongue).
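The firmware itself runs on the Particle Photon, but the two jerk-based patterns described above can be sketched as a small Python function. This is a simplified illustration under stated assumptions: the jerk thresholds are placeholders, "vertical" means the shoe has rotated roughly perpendicular to the ground, and the impact and tip-over are treated as one combined check rather than two sequential events.

```python
def detect_fall(samples, dt=0.02, impact_jerk=300.0, struggle_jerk=80.0):
    """samples: list of (accel_magnitude_m_s2, shoe_is_vertical) readings.

    Returns "critical" (sudden impact + tipped shoe), "warning" (slow fall
    followed by >= 3 s of sharp foot movement), or None.
    """
    prev_a = samples[0][0]
    struggle_time = 0.0
    for a, vertical in samples[1:]:
        jerk = abs(a - prev_a) / dt          # rate of change of acceleration
        prev_a = a
        if vertical and jerk > impact_jerk:
            return "critical"
        if vertical and jerk > struggle_jerk:
            struggle_time += dt
            if struggle_time >= 3.0:
                return "warning"
        elif not vertical:
            struggle_time = 0.0
    return None
```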
## Inspiration Our inspiration: we all have very busy and fast-paced lives, and sometimes it can be hard to make time for the things that we care about, like giving back to the community and donating to charity. We wanted to make donating to a charity of your choice simple and painless. ## What it does Our app takes the transaction history of a user's debit or credit card purchases, rounds up each purchase, charges them the difference between the rounded-up amount and the price of the goods, and places that money into a pot to be donated to charity. ## How I built it We built the app using React Native for the front end and Node.js/Express.js for the back end, connecting it to the Plaid API to get the user's transaction history from their bank accounts. We then take the transaction history and work out how much the user should be charged by rounding each purchase up and taking the difference. The user interacts with our front end, which lets them see the amount of money they have donated per day, week, month, and year. ## Challenges I ran into Challenges we ran into while working on this app include git, where we had issues with version control, as well as Stripe, due to ACH payment integrations. We also had issues with React Native navigation, trying to fit all the moving pieces together. ## Accomplishments that I'm proud of We're proud of getting everything we had planned for the front end, and most of the back end, up and running! ## What I learned As a team we all learned different skills: collectively, we learned to work as a team; individually, we learned skills outside of our comfort zones, such as the React Native ecosystem, bringing designs to life, and connecting them to multiple third-party APIs. ## What's next for Give For Give, we plan to add more features to the app that we think will be useful to users, as well as publish the app so that people can actually start using it!
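The back end here is Node.js pulling transactions from Plaid, but the round-up calculation itself is simple enough to show as a short Python sketch: round each purchase up to the next dollar and pool the spare change for donation.

```python
import math

def spare_change(transactions):
    """transactions: purchase amounts in dollars, e.g. [4.35, 12.80, 7.00]."""
    return round(sum(math.ceil(amount) - amount for amount in transactions), 2)

print(spare_change([4.35, 12.80, 7.00]))  # -> 0.85 (0.65 + 0.20 + 0.00)
```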
winning
## About the Project NazAR is an educational tool that automatically creates interactive visualizations of math word problems in AR, requiring nothing more than an iPhone. ## Behind the Name *Nazar* means "vision" in Arabic, which symbolizes the driving goal behind our app – not only do we visualize math problems for students, but we also strive to represent a vision for a more inclusive, accessible and tech-friendly future for education. And, it ends with AR, hence *NazAR* :) ## Inspiration The inspiration for this project came from each of our own unique experiences with interactive learning. As an example, we want to showcase two of the team members' experiences, Mohamed's and Rayan's. Mohamed Musa moved to the US when he was 12, coming from a village in Sudan where he grew up and received his primary education. He did not speak English and struggled until an experience with a teacher transformed his entire learning experience through experiential and interactive learning. From then on, applying those principles, Mohamed was able to pick up English fluently within a few months and reached the top of his class in both science and mathematics. Rayan Ansari had worked with many Syrian refugee students on a catch-up curriculum. One of his students, a 15-year-old named Jamal, had not received schooling since kindergarten and did not understand arithmetic or the abstractions used to represent it. Intuitively, the only means by which Rayan felt he could effectively teach Jamal and bridge the connection would be through physical examples that Jamal could envision or interact with. From the diverse experiences of the team members, it was glaringly clear that creating accessible and flexible interactive learning software would be invaluable in bringing this sort of transformative experience to any student's work. We were determined to develop a platform that could achieve this goal without having its questions pre-curated or requiring the aid of a teacher, tutor, or parent to provide this sort of time-intensive education experience. ## What it does Upon opening the app, the student is presented with a camera view, and can press the snapshot button on the screen to scan a homework problem. Our computer vision model then uses neural network-based text detection to process the scanned question, and passes the extracted text to our NLP model. Our NLP text-processing model runs fully integrated into Swift as a Python script, and extracts from the question a set of characters to create in AR, along with objects and their quantities, that represent the initial problem setup. For example, for the question "Sally has twelve apples and John has three. If Sally gives five of her apples to John, how many apples does John have now?", our model identifies that two characters should be drawn: Sally and John, and the setup should show them with twelve and three apples, respectively. The app then draws this setup using the Apple RealityKit development space, with the characters and objects described in the problem overlaid. The setup is interactive, and the user is able to move the objects around the screen, reassigning them between characters. When the position of the environment reflects the correct answer, the app verifies it, congratulates the student, and moves on to the next question. Additionally, the characters are dynamic and expressive, displaying idle movement and reactions rather than appearing frozen in the AR environment.
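A much-simplified spaCy sketch of the kind of extraction the NLP model performs is shown below. The actual model, described under "How we built it," also handles implicit objects, verb-based inclusion/exclusion, and characters that start with zero objects; the heuristics and output format here are illustrative assumptions and require the `en_core_web_sm` model to be installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def initial_setup(question: str):
    """Return {character: (quantity, object)} for the problem's starting state."""
    doc = nlp(question)
    setup = {}
    for token in doc:
        if token.like_num:
            # If the number modifies a noun ("twelve apples"), that noun is the object.
            obj = token.head.lemma_ if token.head.pos_ == "NOUN" else None
            # The nearest preceding proper noun is taken as the owner of the objects.
            owners = [t.text for t in doc[:token.i] if t.pos_ == "PROPN"]
            if owners:
                setup.setdefault(owners[-1], (token.text, obj))
    return setup

print(initial_setup("Sally has twelve apples and John has three."))
# -> roughly {'Sally': ('twelve', 'apple'), 'John': ('three', None)}
```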
## How we built it Our app relies on three main components, each of which we built from the ground up to best tackle the task at hand: a computer vision (CV) component that processes the camera feed into text; an NLP model that extracts and organizes information about the initial problem setup; and an augmented-reality (AR) component that creates an interactive, immersive environment for the student to solve the problem. We implemented the computer vision component to perform image-to-text conversion using Apple's Vision framework model, trained on a convolutional neural network with hundreds of thousands of data points. We customize the user experience with a snapshot button that allows the student to position their phone in front of a question and press it to capture an image, which is then converted to a string and passed off to the NLP model. Our NLP model, which we developed completely from scratch for this app, runs as a Python script, and is integrated into Swift using a version of PythonKit we custom-modified to configure for iOS. It works by first tokenizing and lemmatizing the text using spaCy, and then using numeric terms as pivot points for a prioritized search relying on English grammatical rules to match each numeric term to a character, an object and a verb (action). The model is able to successfully match objects to characters even when they aren't explicitly specified (e.g. for Sally in "Ralph has four melons, and Sally has six") and, by using the proximate preceding verb of each numeric term as the basis for an inclusion-exclusion criterion, is also able to successfully account for extraneous information, such as statements about characters receiving or giving objects, which shouldn't be included in the initial setup. Our model also accounts for characters that do not possess any objects to begin with, but who should be drawn in the display environment as they may receive objects as part of the solution to the question. It directly returns the filenames of the assets that should be loaded by the AR code. Our AR model functions from the moment a homework problem is read. Using Apple's RealityKit environment, the software determines the plane of the paper on which we will anchor our interactive learning space. The NLP model passes objects of interest, which correspond to particular USDZ assets in our library, as well as a vibrant background terrain. In our testing, we used multiple models for hand tracking and gesture classification, including a CoreML model, a custom SDK for gesture classification, a TensorFlow model, and our own gesture-processing class paired with Apple's hand pose detection library. For the purposes of TreeHacks, we figured it would be most reasonable to stick with touchscreen manipulation, especially for our demo, which uses the iPhone itself without a separate wearable accessory. We found this to also provide better ease of use when interacting with the environment and to be most accessible given hardware constraints (we did not have a HoloKit Apple accessory nor the upcoming Apple AR glasses). ## Challenges we ran into We ran into several challenges while implementing our project, which was somewhat expected given the considerable number of components we had, as well as the novelty of our implementation. One of the first challenges we had was a lack of access to wearable hardware, such as HoloKits or HoloLenses.
Based on this, as well as a desire to make our app as accessible and scalable as possible without requiring the purchase of expensive equipment by the user, we decided to rely on the iPhone alone, so we can reach as many people who need the app as possible. Another issue we ran into was with hand gesture classification. Very little work has been done on this in Swift environments, and there was little to no documentation on hand tracking available to us. As a result, we wrote and experimented with several different models, including training our own deep learning model that can identify gestures, but it took a toll on our laptop's resources. In the end we got it working, but we are not using it for our demo as it currently experiences some lag. In the future, we aim to run our own gesture tracking model in the cloud, which we will train on over 24,000 images, in order to provide lag-free hand tracking. The final major issue we encountered was the lack of interoperability between Apple's iOS development environment and other systems, for example when running our NLP code, which requires input from the computer vision model and has to pass the extracted data on to the AR algorithm. We have been continually working to overcome this challenge, including by modifying the PythonKit package to bundle a Python interpreter alongside the other application assets, so that Python scripts can be successfully run on the end machine. We also used input and output to text files to allow our Python NLP script to more easily interact with the Swift code. ## Accomplishments we're proud of We built our computer vision and NLP models completely from the ground up during the hackathon, and also developed multiple hand-tracking models on our own, overcoming the lack of documentation for hand detection in Swift. Additionally, we're proud of the novelty of our design. Existing models that provide interactive problem visualization all rely on custom QR codes embedded with the questions that load pre-written environments, or rely on a set of pre-curated models; and Photomath, the only major app that takes a real-time image-to-text approach, lacks support for word problems. In contrast, our app integrates directly with existing math problems, and doesn't require any additional work on the part of students, teachers or textbook writers in order to function. Additionally, by relying only on an iPhone and an optional HoloKit accessory for hand tracking, which is not vital to the application (and which, at a retail price of $129, is far more scalable than VR sets that typically cost thousands of dollars), we maximize accessibility to our platform not only in the US, but around the world, where it has the potential to complement instructional efforts in developing countries whose educational systems lack sufficient resources to provide enough one-on-one support to students. We're eager to have NazAR make a global impact on improving students' comfort and experience with math in the coming years. ## What we learned * We learned a lot from building the tracking models, which haven't really been done for iOS and for which there's practically no Swift documentation available. * We are truly operating on a new frontier, as there is little to no work done in the field we are looking at. * We will have to manually build a lot of different architectures, as a lot of the technologies related to our project are not open source yet. We've already been making progress on this front, and plan to do far more in the coming weeks as we work towards a stable release of our app.
## What's next for NazAR * Having the app animate the correct answer (e.g. Bob handing apples one at a time to Sally) * Animating algorithmic approaches and code solutions for data structures and algorithms classes * Being able to automatically produce additional practice problems similar to those provided by the user * Using cosine similarity to automatically make terrains mirror the problem description (e.g. show an orchard if the question is about apple picking, or a savannah if giraffes are involved) * And more!
## Inspiration Cliff is dyslexic, so reading is difficult and slow for him, which makes school really difficult. But he loves books and listens to 100+ audiobooks/yr. However, most books don't have an audiobook, especially not textbooks for schools and articles that are passed out in class. This is an issue not only for the 160M people in the developed world with dyslexia but also for the 250M people with low vision acuity. After moving to the U.S. at age 13, Cliff also needed something to help him translate assignments he didn't understand in school. Most people become farsighted as they get older, but often don't have their glasses with them. This makes it hard to read forms when needed. Being able to listen instead of reading is a really effective solution here. ## What it does Audiobook Maker allows a user to scan a physical book with their phone to produce a digital copy that can be played as an audiobook instantaneously in whatever language they choose. It also lets you read the book with text at whatever size you like, to help people who have low vision acuity or are missing their glasses. ## How we built it In Swift and iOS, using Google ML and a few clever algorithms we developed to produce high-quality scanning and high-quality reading with low processing time. ## Challenges we ran into We had to redesign a lot of the features to make the app's user experience flow well and to allow the processing to happen fast enough. ## Accomplishments that we're proud of We reduced the time it took to scan a book by 15X after one design iteration and reduced the processing time it took to OCR (Optical Character Recognition) the book from over an hour to nearly instantaneous using an algorithm we built. We allow the user to have audiobooks on their phone, in multiple languages, that take up virtually no space on the phone. ## What we learned How to work with Google ML, how to work around OCR processing time, how to suffer through git Xcode Storyboard merge conflicts, and how to use Amazon's AWS/Alexa machine learning platform. ## What's next for Audiobook Maker Deployment and use across the world by people who have dyslexia or low vision acuity, who are learning a new language, or who just don't have their reading glasses but still want to function. We envision our app being used primarily for education in schools - specifically schools with low-income populations who can't afford to buy multiple copies of books or audiobooks in multiple languages and formats. ## TreeHacks themes TreeHacks education vertical > personalization > learning styles (build a learning platform tailored to the learning styles of auditory learners) - I'm an auditory learner; I've dreamed of a tool like this since I was 8 years old and struggling to learn to read. I'm so excited that now it exists and every student with dyslexia or a learning difference will have access to it. TreeHacks education vertical > personalization > multilingual education (English-as-a-second-language students often get overlooked. Are there ways to leverage technology to create more open, multilingual classrooms?) Our software allows any book to become multilingual. TreeHacks education vertical > accessibility > refugee education (What are ways technology can be used to bring content and make education accessible to refugees? How can we make the transition to education in a new country smoother?) - Make it so they can listen to material in their mother tongue if needed, or have a voice read along with them in English.
Make it so that they can carry their books wherever they go by scanning a book once and then having it for life. treehacks education Verticle >language & literacy > mobile apps for English literacy (How can you build mobile apps to increase English fluency and literacy amongst students and adults?) -One of the best ways to learn how to read is to listen to someone else doing it and to follow yourself. Audiobook maker lets you do that. From a practical perspective - learning how to read is hard and it is difficult for an adult learning a new language to achieve proficiency and a high reading speed. To bridge that gap Audiobook Maker makes sure that every person can and understand and learn from any text they encounter. treehacks education Verticle >language & literacy > in-person learning (many people want to learn second languages) - Audiobook maker allows users to live in a foreign countrys and understand more of what is going on. It allows users to challenge themselves to read or listen to more of their daily work in the language they are trying to learn, and it can help users understand while they studying a foreign language in the case that the meaning of text in a book or elsewhere is not clear. We worked a lot with Google ML and Amazon AWS.
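The app itself is built in Swift with Google ML, but the core scan, OCR, read-aloud idea can be sketched in a few lines of Python using stand-in libraries (pytesseract for OCR, gTTS for speech). This is an illustrative analogue under those assumptions, not the app's actual pipeline, and the file names are placeholders.

```python
# Illustrative Python analogue of the scan -> OCR -> audiobook flow.
# The real app runs Swift + Google ML on-device; pytesseract and gTTS are
# stand-ins here, and the file names are placeholders.
from PIL import Image
import pytesseract
from gtts import gTTS

def page_to_audio(image_path: str, out_path: str, speech_lang: str = "en") -> str:
    """OCR one scanned page and synthesize the recognized text as speech."""
    text = pytesseract.image_to_string(Image.open(image_path))
    text = " ".join(text.split())  # collapse OCR line breaks into flowing prose
    gTTS(text=text, lang=speech_lang).save(out_path)
    return text

if __name__ == "__main__":
    recognized = page_to_audio("page_001.png", "page_001.mp3")
    print(recognized[:200])
```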
## Inspiration We all love learning languages, but one of the most frustrating things is seeing an object that you don't know the word for and then trying to figure out how to describe it in your target language. Being students of Japanese, it is especially frustrating to find the exact characters to describe an object that you see. With this app, we want to change all that. Huge advances have been made in computer vision in recent years that have allowed us to accurately detect all kinds of different image matter. Combined with advanced translation software, we found the perfect recipe to make an app that could capitalize upon these technologies and help foreign language students all around the world. ## What it does The app allows you to either take a picture of an object or scene with your iPhone camera or upload an image from your photo library. You then select a language that you would like to translate words into. The app then remotely contacts Microsoft Azure Cognitive Services using an HTTP request from within the app to create tags from the image you uploaded. These tags are then uploaded to the Google Cloud Platform services to translate those tags into your target language. After doing this, a list of English-foreign language word pairs is displayed, relating to the image tags. ## How we built it The app was built using Xcode and was coded in Swift. We split up to work on different parts of the project. Kent worked on interfacing with Microsoft's computer vision AI and created the basic app structure. Isaiah worked on setting up Google Cloud Platform translation and contributed to adding functionality for multiple languages. Ivan worked on designing the logo for the app and most of the visuals. ## Challenges we ran into A lot of time was spent figuring out how to deal with HTTP requests and JSON, two things none of us have much experience with, and then using them in Swift to contact remote services from our app. After this major hurdle was overcome, there was a concurrency issue: both the vision AI and translation requests were designed to run in parallel to the main thread of the app's execution, which created some problems for updating the app's UI. We ended up fixing all the issues though! ## Accomplishments that we're proud of We are very proud that we managed to utilize some really awesome cloud services like Microsoft Azure's Cognitive Services and Google Cloud Platform, and are happy that we managed to create an app that worked at the end of the day! ## What we learned This was a great learning experience for all of us, both in terms of the technical skills we acquired in connecting to cloud services and in terms of the teamwork skills we acquired. ## What's next for Literal Firstly, we would add more languages to translate and make a much cleaner UI. Then we would enable it to run on cloud services indefinitely instead of just on a temporary treehacks-based license. After that, there are many more cool ideas that we could implement into the app!
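To make the tag-then-translate flow concrete, here is a hedged Python sketch of the two HTTP calls described above (the app itself makes them from Swift). The endpoint versions, query parameters, and keys are assumptions drawn from the public Azure Computer Vision and Google Translate v2 REST docs, not values taken from the project.

```python
# Hedged sketch of the tag -> translate flow using plain HTTP requests.
# Endpoints, API versions, and keys below are assumptions/placeholders.
import requests

AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
AZURE_KEY = "<azure-key>"        # placeholder
GOOGLE_KEY = "<google-api-key>"  # placeholder

def image_tags(image_bytes: bytes) -> list:
    resp = requests.post(
        f"{AZURE_ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Tags"},
        headers={"Ocp-Apim-Subscription-Key": AZURE_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    resp.raise_for_status()
    return [t["name"] for t in resp.json().get("tags", [])]

def translate(words: list, target: str = "ja") -> list:
    resp = requests.post(
        "https://translation.googleapis.com/language/translate/v2",
        params={"key": GOOGLE_KEY},
        json={"q": words, "target": target},
    )
    resp.raise_for_status()
    return [t["translatedText"] for t in resp.json()["data"]["translations"]]

tags = image_tags(open("photo.jpg", "rb").read())
print(list(zip(tags, translate(tags))))
```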
winning
# So Many Languages Web application to help convert one programming language's code to another within seconds while also enabling the user to generate code using just logic. ## Inspiration Our team consists of 3 developers, and all of us realised that we face the same problem: it's very hard to memorise every syntax since each language has its own. This not only causes confusion but also takes up a lot of our time. ## What it does So Many Languages has various features to motivate students to learn competitive coding while also making the process easier. SML helps you: 1) Save time 2) Convert between languages instantly 3) Enjoy one-of-its-kind language freedom 4) Template code from voice 5) Code accurately 6) Write programs by just knowing the logic (no need to remember syntax) 7) Take tests and practice while also earning rewards ## How to run ``` 1) git clone https://github.com/akshatvg/So-Many-Languages 2) pip install -r requirements.txt 3) python3 run.py ``` ## How to use 1) Run the software as mentioned above. 2) Use the default page to upload code in a programming language to be converted into any of the other listed languages in the dropdown menu. 3) Use the Voice to Code Templating page to give out intents to be converted into code, e.g. "Open C++", "Show me how to print a statement", etc. 4) Use the Compete and Practice page to try out language-specific programs to test how much you learnt, compete against your peers, and earn points. 5) Use the Rewards page to redeem the earned points. ## Its advantages 1) Run the code from the compiler to get the desired result in the same place. 2) Easy to use and fast processing. 3) Save time spent scrolling through Google searching for different answers and syntaxes by having everything come up on its own in one single page. 4) Learn and earn at the same time through the Compete and Rewards pages. ## Target audience Students (learning has no age limit) and developers, who need to keep learning to stay updated with trends. ## Business model We intend to provide free code templating and conversion for common languages like C++, Python, Java, etc., and have paid packs for exclusive languages like Swift, PHP, JavaScript, etc. ## Marketing strategy 1) For every referral, points will be earned which help purchase premium and exclusive language packs once enough points are saved. These points can also be used to purchase swag. 2) Swag and discount benefits for Campus Ambassadors at different universities and colleges. ## How we built it We built the assistive educative technology using: 1) HTML/CSS/JavaScript/Bootstrap (frontend web development), 2) Flask (backend web development), 3) IBM Watson (to gather the user's intent via NLU), 4) PHP, C++, Python (test programming languages). ## Challenges we ran into Other than the jet lag we still have from travelling all the way from India, and hence the lack of sleep, we came across a few technical challenges too. Creating algorithms to convert PHP code wasn't very easy at first, but we managed to pull it off in the end. ## Accomplishments that we're proud of Creating a one-of-its-kind product. 1) We are the first-ever educative technological assistant to help users learn and migrate between programming languages while also giving them a platform to practice and test how much they have learnt using language-specific problems. 2) We also help users completely convert one language's code to another language's code accurately within seconds. ## What we learned This was our team's first international hackathon.
We met hundreds of inspiring coders and developers who tried and tested our product and gave their views and suggestions which we then tried implementing. We saw how other teams functioned and what we may have been doing wrong before. We also each learnt a technical skill for the project (Akshat learnt Animations and basics of Flask, Anand learnt using IBM Watson to its greatest extent and Sandeep learnt PHP just to implement it into this project). ## What's next for So Many Languages We intend to add support for more programming languages as soon as possible while also making sure that any upcoming bugs are rectified.
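As a toy illustration of the kind of template/rule-based syntax mapping a converter like this could start from (this is not the team's actual conversion algorithm), the sketch below renders the same abstract statement in each supported language's print syntax.

```python
# Toy rule-based syntax mapping: the same abstract "print" statement rendered
# in several target languages. Purely illustrative, not SML's real converter.
PRINT_TEMPLATES = {
    "python": "print({expr})",
    "c++": "std::cout << {expr} << std::endl;",
    "php": "echo {expr};",
    "java": "System.out.println({expr});",
}

def render_print(expr: str, target: str) -> str:
    """Render a print statement for the chosen target language."""
    return PRINT_TEMPLATES[target.lower()].format(expr=expr)

if __name__ == "__main__":
    sample = '"Hello, world"'
    for lang in PRINT_TEMPLATES:
        print(f"{lang:>6}: {render_print(sample, lang)}")
```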
## Inspiration With a variety of language learning resources out there, we set out to create a tool that can help us practice a part of language that keeps things flowing -- conversation! LanguaLine aims to empower users to speak their foreign language by helping them develop their conversational skills. ## What it does We wanted to create an interface that can help users practice speaking a foreign language. Through LanguaLine, users can: * Select a language they wish to practice speaking in * Select how "motivating" they want their Mentor to be (Basic is normal, Extra Motivation is a tough love approach). * Enter their phone number and receive a **phone call** from our AI Mentor * Answer questions posed by the Mentor * Receive **real-time feedback** about their performance * View a transcript and summary of the call after the conversation is completed * View a **generalized report** on user's language strengths and weaknesses across all conversations ## How we built it We used `React.js` for our frontend, and `Firebase` for our database. To style our components, we utilized `TailwindCSS` and `React MaterialUI`. Our backend system is comprised of `Node.js` and `Express.js`, which we use to make calls to Google's **Gemini** model. To create, tune, and prompt engineer our AI Mentor, we used the **VAPI.ai** API. Our **transcriber model** is Deepgram's **nova-2 multi** and our **model** is **gpt-4o-mini** provided by **Open.AI**. We are also using **Gemini** to implement Retrieval-Augmented Generation (RAG). We use past call transcripts and summaries to train and optimize our model. This training data is maintained in our `Firebase` database. In addition, this feature analyzes users' call transcripts and generates a report identifying strengths and weaknesses in their speaking skills. ## Challenges we ran into Our biggest challenge was understanding the VAPI documentation, as it was our first time working with a voice AI API. We had to make a few changes to our project stack to accommodate for VAPI, as we could only make client-side API calls. Since the majority of our team has limited experience working with LLMs and voice AI Agents, we faced some difficulties prompt engineering our Mentor, requiring us to tweak various model parameters and experiment through VAPI's dashboard. ## Accomplishments that we're proud of The turning point in our development process was when we were able to start conversing with our Mentor. After this was solidified, our project trajectory only went upwards. We're proud of the fact we were able to turn this idea into an operational and functional application. ## What we learned The team behind LanguaLine had a variety of skill levels; for some, this was their first project using this tech stack, while for others, this was familiar. Some of us mastered the ability to send API calls and parse JSON data. Some of us also learned how to prompt engineer for a particular language choice. There were lessons being learned all throughout the 36 hours of development, which helped us feel connected to the project and motivated to keep creating. ## What's next for LanguaLine Our biggest goal is to **deploy and market** this project. Being language learners ourselves, having a service like LanguaLine is invaluable to making progress toward achieving fluency. In addition, this increases **accessibility** for language learners by encompassing a wide range of supported languages and providing customizable support. Our project aims to support all languages. 
Due to our lack of control regarding the accuracy across various languages within LLMs, this feature needs more testing and tuning to be perfected. We plan to offer **more variation** in the Mentors we offer. Right now, we only offer Mentors based off of a language choice and motivation level. In the future, we plan to include language difficulties, personalities, a wider variety of supported languages, custom prompts, and scheduled calling. We also plan to offer improvement plans for grammar, pronunciation, and vocabulary, as well as a scoring system for users' performances during Mentor sessions.
## Inspiration Our inspiration for developing the Hospital Inventory Manager stemmed from the pressing need for efficient healthcare resource management and improved patient outcomes, particularly in the context of organ donation and allocation. ## What it does The Hospital Inventory Manager is a comprehensive platform designed to streamline various aspects of hospital resource management, including inventory tracking, scheduling, patient management, and organ matching. The system incorporates QR code functionality for quick access to vital information. ## How we built it * **PostgreSQL**: Utilized as the database to store all our data. * **Pandas**: Employed for data manipulation and analysis. * **QR Code Generation Library**: Used for creating QR codes for inventory items and patient identification. * **Git**: Implemented for version control and collaborative development. ## Challenges we ran into We faced several challenges, including: * Integrating multiple complex systems (inventory, scheduling, patient management) into a cohesive platform. * Ensuring data privacy and security, particularly regarding sensitive patient information. * Implementing an efficient organ matching algorithm. * Creating a user-friendly interface capable of handling complex operations. * Managing real-time updates across different sections of the application. ## Accomplishments that we're proud of We take pride in several achievements: * Developing a fully functional prototype that addresses various aspects of hospital resource management. * Implementing QR code functionality for quick access to information. * Creating a system that can potentially save lives by enhancing organ donation and allocation efficiency. * Building a scalable solution adaptable for various healthcare settings. ## What we learned Through this project, we gained valuable insights: * The complexities of healthcare logistics and the critical need for efficient resource management in hospitals. * The integration of multiple Python libraries and technologies to create a comprehensive web application. * The significance of user experience design in healthcare applications. * Techniques for securely handling sensitive data. * The importance of iterative development and continuous testing in building robust software. ## What's next for CareCost Moving forward, we plan to: * Implement advanced predictive analytics for inventory management. * Enhance the organ matching algorithm using machine learning techniques. * Develop a mobile application for on-the-go access. * Integrate with existing hospital information systems for seamless data exchange. * Conduct pilot tests in real hospital environments to gather feedback and refine the system. * Explore the use of blockchain for secure and transparent organ donation tracking. * Expand the platform to include features for blood bank management and vaccine distribution. We believe that with further development and refinement, the Hospital Inventory Manager has the potential to significantly enhance healthcare resource management and improve patient outcomes.
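As a small illustration of the QR-code piece mentioned above, the sketch below encodes an inventory item's ID into a printable code with the Python `qrcode` library; the lookup URL and item ID are placeholders rather than the project's real schema.

```python
# Minimal sketch of generating a QR code for an inventory item.
# The lookup URL and item ID are placeholders, not the project's real schema.
import qrcode

def make_item_qr(item_id: str, out_path: str) -> None:
    payload = f"https://example-hospital.org/inventory/{item_id}"  # placeholder URL
    qrcode.make(payload).save(out_path)

if __name__ == "__main__":
    make_item_qr("ITEM-00042", "item_00042.png")
```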
partial
## Inspiration Struggling with a hackathon concept made our team realize the importance of gathering inspiration and connecting ideas. This led us to further research products in the market that are aimed towards this space, and we found many pain points that made the process less efficient, including an influx of onboarding processes, new tools, and distracting interfaces. In many cases, people use multiple platforms to gather various kinds of inspiration, ranging from Pinterest, Notes, etc. This inspired our team to create a product that addresses these pain points and makes for a more user-friendly experience for collecting ideas on one platform. ## What it does Myos is a very efficient moodboarding app which lets users attach various files, from images and audio to drawings, through flexible methods such as drag-and-drop, copy+paste, and uploading local files. Users can organize the files into groups to create a “gallery” of inspiration for their various projects or moodboarding-related needs. Myos adopts a DO NOW, WORRY LATER framework. This means no UI distractions, no templates, and no onboarding pop-ups in your face every few seconds. Myos emphasizes focus, structure, and reduced clutter through a clean and intuitive UI, letting users get to their work right away. ## How we built it We designed our product on Figma and prototyped its key interactions. A ReactJS frontend was made to implement a file upload function and a display gallery. ## Challenges we ran into It was hard for our team to think of an idea for the hackathon, and due to time constraints, the end product wasn’t the plan we went in with. We weren’t ready for the lack of energy that comes with in-person hackathons! That being said, we really tried to hone in on pain points that we personally deal with to understand how we can make them more efficient. ## Accomplishments that we're proud of Despite our challenges, our team pulled through and managed to submit a project. For many of us, this was also our first in-person hackathon! Numerous skills were gained and put to use, with everyone learning a lot and getting the most out of HTN. ## What we learned Thinking of an idea can be very challenging, and it is best to think of one beforehand. For some of us, this was the first in-person or the first-ever hackathon we attended, and it was a really eye-opening and rewarding experience. ## What's next for Myos Next, we’d finish the functionality for creating new folders/groups, sorting the content by tags, and saving everything with a database or using localStorage.
## Inspiration We remembered Google's offline dinosaur game and were able to re-discover Google Photos from our childhood in this hackathon. Since the overall theme this year was nostalgia, we thought: why not make a fun memory gallery in this fashion? ## What it does Users can upload photos to the Rexflection software, which utilizes a retro "you are offline" dinosaur character to take users on a journey through their photos. The dinosaur walks through and provides descriptions of each photo in the art gallery. ## How we built it Our stack consisted of React Frontend and Flask Backend. We utilized Google Photos API to obtain the photos necessary. We also tried a YOLOv8 object detection model trained on general everyday object dataset COCO128 to generate object data on each image. Then, we fed the resulting information into an engineered Cohere prompt to generate the resulting art gallery story caption. Our UI was created with just CSS. ## Challenges we ran into A teammate got ill by the end of the first day, so they were unable to attend the rest of the Hackathon. Thus, we had to change our initial idea to accommodate for our capabilities. The majority of us were new to the tech that we worked on, so it took a bit of a learning curve at the start. Figuring out which tools would lead to the best implementation also took a while. We also ran into challenges properly implementing APIs and integrating the frontend and backend due to OS and CORS issues. At the end of the day, we overcame all of these challenges and are proud of our product. ## Accomplishments that we're proud of All the 36 hours put into the completion of our project :D It was tiring, but fun, we liked the process of working on building our idea into reality. It felt extremely rewarding to use new technologies and look back on what we learned throughout. It also looks really cute! ## What we learned We came with varying levels of experience, yet all of us were able to learn something new. Angela delved into the usage of Google Photos and OAuth, as well as the capabilities of pure CSS in UI design. Allison learned about APIs and how to test them with Postman, along with prompt engineering with Cohere. Charlie was introduced to using and training ML models, along with discovering the various resources and capabilities of ML models. All of our learnings were with the great assistance of the many mentors at UofTHacks, as well as the wonderful sponsors we had the chance to talk to. ## What's next for Rexflection The next steps for Rexflection include deployment, adding more features, enhancing descriptions and statistics, and improving various elements of the project. It could also move towards the game aspect, where it traverses through all your photos in the classic style of the game.
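A hedged sketch of the detection-to-caption step described above: a COCO-pretrained YOLOv8 model lists the objects in a photo, and those tags are folded into a prompt that would then be sent to Cohere. The prompt wording and image path are invented for illustration.

```python
# Hedged sketch: YOLOv8 object tags folded into a gallery-caption prompt.
# The prompt text and image path are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small COCO-pretrained checkpoint

def gallery_prompt(image_path: str) -> str:
    result = model(image_path)[0]
    labels = sorted({result.names[int(c)] for c in result.boxes.cls})
    return (
        "You are a retro pixel dinosaur guiding visitors through a memory gallery. "
        f"This photo contains: {', '.join(labels) or 'no detected objects'}. "
        "Write a short, nostalgic caption for it."
    )

if __name__ == "__main__":
    print(gallery_prompt("photo.jpg"))  # this string would be sent to Cohere
```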
## Inspiration We wanted to watch videos together as a social activity during the pandemic but were unable as many platforms were hard to use or unstable. Therefore our team decided to create a platform that met our needs in terms of usability and functionality. ## What it does Bubbles allow people to create viewing rooms and invite their rooms to watch synchronized videos. Viewers no longer have to conduct countdowns on their play buttons or have a separate chat while watching! ## How I built it Our web app uses React for the frontend and interfaces with our Node.js REST API in the backend. The core of our app uses the Solace PubSub+ and the MQTT protocol to coordinate events such as video play/pause, a new member joining your bubble, and instant text messaging. Users broadcast events with the topic being their bubble room code, so only other people in the same room receive these events. ## Challenges I ran into * Collaborating in a remote environment: we were in a discord call for the whole weekend, using Miro for whiteboarding, etc. * Our teammates came from various technical backgrounds, and we all had a chance to learn a lot about the technologies we used and how to build a web app * The project was so much fun we forgot to sleep and hacking was more difficult the next day ## Accomplishments that I'm proud of The majority of our team was inexperienced in the technologies used. Our team is very proud that our product was challenging and was completed by the end of the hackathon. ## What I learned We learned how to build a web app ground up, beginning with the design in Figma to the final deployment to GitHub and Heroku. We learned about React component lifecycle, asynchronous operation, React routers with a NodeJS backend, and how to interface all of this with a Solace PubSub+ Event Broker. But also, we learned how to collaborate and be productive together while still having a blast ## What's next for Bubbles We will develop additional features regarding Bubbles. This includes seek syncing, user join messages and custom privilege for individuals.
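Because Solace PubSub+ speaks MQTT, the room-scoped event flow described above can be sketched with a plain MQTT client: every client publishes UI events to a topic equal to its bubble's room code, so only members of that room receive them. The broker address, room code, and payload fields below are placeholders (paho-mqtt 1.x-style API shown), and this is an illustration rather than the project's exact client code.

```python
# Sketch of room-scoped sync events over MQTT (paho-mqtt 1.x style API).
# Broker address, room code, and payload fields are placeholders.
import json
import paho.mqtt.client as mqtt

ROOM_CODE = "bubble-4F2K"  # placeholder room code used as the topic

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    print(f"[{msg.topic}] {event['type']} at {event.get('position')}s")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.net", 1883)  # placeholder broker
client.subscribe(ROOM_CODE)
client.loop_start()

# Broadcast a synchronized "play" event to everyone in the same bubble.
client.publish(ROOM_CODE, json.dumps({"type": "play", "position": 42.5}))
```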
losing
# Inspiration When we ride a cycle we steer it towards the direction we fall, and that's how we balance. But the amazing part is that we (or our brains) compute a complex set of calculations to balance ourselves, and we do it naturally. With some practice it gets better. The calculations going on in our minds highly inspired us. At first we thought of creating the cycle, but later, after watching "Handle" from Boston Dynamics, we thought of creating the self-balancing robot, "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system. # What it does Istable is a self-balancing robot running on two wheels; it balances itself and tries to stay vertical all the time. Istable can move in all directions, rotate, and be a good companion for you. It gives you a really good idea of robot dynamics and kinematics and of how to mathematically model a system. It can also be used to teach college students the tuning of the PID of a closed-loop system; PID is a widely used and very robust control algorithm. Along with that, methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented to teach how to obtain the coefficients in real time, going from theoretical modelling to real-time implementation. It can also be used in hospitals, where medicines are kept at very low temperatures and rooms need absolute sterilization to enter; we can use these robots there to carry out those operations. In the coming days we may see our future hospitals in a new form. # How we built it The mechanical body was built using scrap wood lying around. We first made a plan for how the body would look, and we gave it the shape of an "I" - that's where it gets its name. We made the frame, placed the batteries (2x 3.7V Li-ion), the heaviest component, attached two micro metal motors of 600 RPM at 12V, cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary), used a boost converter to stabilize and step up the voltage from 7.4V to 12V, a motor driver to drive the motors, an MPU6050 to measure the inclination angle, and a Bluetooth module to read the PID parameters from a mobile app to fine-tune the PID loop. And the brain: a microcontroller (LGT8F328P). Next we made a free-body diagram, located the center of gravity (it needs to sit well above the wheel axis), and adjusted the weight distribution accordingly. Next we made a simple mathematical model to represent the robot; it is used to find the transfer function and represents the system. Later we used that to calculate the impulse and step responses of the robot, which are crucial for tuning the PID parameters if you are taking a mathematical approach, and we did that here: no hit-and-trial, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable. # Challenges we ran into We had trouble balancing Istable at first (which is obvious ;) ), and we realized that was due to the placement of the gyro: we had placed it at the top at first, and we corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle, thus improving stabilization greatly. Next was fine-tuning the PID loop: we did get the initial values of the PID coefficients mathematically, but the fine tuning took a whole lot of effort, and that was really challenging.
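The discrete (z-domain) PID loop mentioned above can be sketched as follows; this is a minimal Python illustration of the control law (the real controller runs in C on the LGT8F328P), and the gains, sample time, and output limit are placeholders rather than the tuned values.

```python
# Minimal sketch of a discrete PID loop for the tilt angle.
# Gains, sample time, and output limit are placeholders, not the tuned values.
class DiscretePID:
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))  # clamp to motor range

# e.g. called every 10 ms with the tilt angle read from the MPU6050:
pid = DiscretePID(kp=25.0, ki=1.5, kd=0.8, dt=0.01, out_limit=255)
motor_cmd = pid.update(setpoint=0.0, measurement=2.3)  # try to hold 0 degrees tilt
```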
# Accomplishments we are proud of Firstly, just a simple change in the position of the gyro improved the stabilization, and that gave us high hopes; we were losing confidence before. The mathematical model, i.e. the transfer function we found, agreed with the real-time outputs. We were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds. # What we learned We learned a lot, really a lot: from kinematics, mathematical modelling, and control algorithms apart from PID (we learned about adaptive control) to numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error method. We learned about digital filtering to filter out noisy data, and about complementary filters to fuse the data from the accelerometer and gyroscope to find accurate angles. # What's next for Istable We will try to add an onboard display to change the coefficients directly on the robot, upgrade the algorithm so that it can auto-tune the coefficients itself, and add odometry for localized navigation. We also want to use it as an IoT device, serving real-time operation with real-time updates.
## Inspiration More than **2 million** people in the United States are affected by diseases such as ALS, brain or spinal cord injuries, cerebral palsy, muscular dystrophy, multiple sclerosis, and numerous other diseases that impair muscle control. Many of these people are confined to their wheelchairs; some may be lucky enough to be able to control their movement using a joystick. However, there are still many who cannot use a joystick, eye tracking systems, or head movement-based systems. Therefore, a brain-controlled wheelchair can solve this issue and provide freedom of movement for individuals with physical disabilities. ## What it does BrainChair is a neurally controlled headpiece that can control the movement of a motorized wheelchair. There is no need to use the attached joystick: simply think of the wheelchair movement and the wheelchair does the rest! ## How we built it The brain-controlled wheelchair allows the user to control a wheelchair solely using an OpenBCI headset. The headset is an electroencephalography (EEG) device that allows us to read brain signal data that comes from neurons firing in our brain. When we think of specific movements we would like to make, those specific neurons in our brain fire. We collect this EEG data through the Brainflow API in Python, which easily allows us to stream, filter, and preprocess the data, and then finally pass it into a classifier. The control signal from the classifier is sent through WiFi to a Raspberry Pi which controls the movement of the wheelchair. In our case, since we didn’t have a motorized wheelchair on hand, we used an RC car as a replacement. We simply hacked some transistors onto the remote, which connects to the Raspberry Pi. ## Challenges we ran into * Obtaining clean data for training the neural net took some time. We needed to apply signal processing methods to obtain the data. * Finding the RC car was difficult since most stores didn’t have one or were closed. Since the RC car was cheap, its components had to be adapted in order to place hardware pieces. * Working remotely made designing and working together challenging. Each group member worked on independent sections. ## Accomplishments that we're proud of The most rewarding aspect of the software is that all the components, from the OpenBCI headset to the Raspberry Pi, were effectively communicating with each other. ## What we learned One of the most important lessons we learned is how to effectively communicate technical information to each other regarding our respective disciplines (computer science, mechatronics engineering, mechanical engineering, and electrical engineering). ## What's next for BrainChair To improve BrainChair in future iterations we would like to: Optimize the circuitry to use low power so that the battery lasts months instead of hours. We also aim to make the OpenBCI headset not visible by camouflaging it under hair or clothing.
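A hedged sketch of the acquisition side with the BrainFlow API is below: it streams a short window of EEG samples and hands it to a placeholder classifier whose label would become the drive command sent to the Raspberry Pi. The synthetic board ID is used so the snippet runs without a headset attached; the classifier is a stand-in, not the team's trained model.

```python
# Hedged sketch of EEG acquisition with BrainFlow (synthetic board shown so it
# runs without hardware). The classify() function is a placeholder.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

board_id = BoardIds.SYNTHETIC_BOARD.value  # swap for the real OpenBCI board id
board = BoardShim(board_id, BrainFlowInputParams())
board.prepare_session()
board.start_stream()
time.sleep(2)                              # collect ~2 s of samples
data = board.get_board_data()              # 2-D array: rows = channels
board.stop_stream()
board.release_session()

eeg_channels = BoardShim.get_eeg_channels(board_id)
window = data[eeg_channels, :]             # keep only the EEG rows

def classify(eeg_window):                  # placeholder for the trained model
    return "forward"

print(classify(window))                    # this label would be sent over WiFi to the Pi
```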
## Inspiration We were inspired to learn Arduino and get familiar with microcontrollers and electronics. Some members have a goal of doing embedded programming, and we agreed this would be a great start before moving on to more complex architectures like STM32. At first we wanted to build a quadcopter, but we decided to scale it down a bit to a helicopter to get used to flight controls first. ## What it does The machine is able to communicate, via the Bluetooth module, with any suitable Bluetooth device connected to the ArduDroid app - an app that allows an easy interface between Arduino Bluetooth devices and the Arduino. Through the ArduDroid app, the user is able to power on the machine and have it go vertically up or down with the app’s controls. The user is also able to power on the tail motor to rotate the device in mid-air. The machine also has an accelerometer sensor which allows the user to get real-time data about the acceleration that the machine experiences. ## How we built it Before any work was done on the machine itself, our group first met up and brainstormed the functionality that we wanted to achieve for this model. We then created a BOM of the various sensors and other equipment that we would need for the project. After completing the BOM, we designed a frame that could accommodate the various pieces of hardware that we had chosen to use. Due to our intentional limitation of making our frame out of common materials, the frame design incorporated simple shapes that can be easily manufactured. We then moved on to writing the control software for the machine. By applying our previous knowledge of Arduino programming, and researching the AVR architecture, we were able to create the software needed to interface with and effectively control our machine. (More details about our software can also be found in the “What we learned” section). Finally, with all preliminary designs and software finished, we were able to physically construct our frame, attach our hardware to the frame, and electrically connect all essential components. ## Challenges we ran into A few challenges that we ran into while completing the project include the following: On the software side, although our group had some experience working with the Arduino IDE, we did not, however, have any experience working with specific sensors, such as the Bluetooth module or the accelerometer. Hardware was also a small issue, as we had to order various parts that needed to arrive within the window of this event. This slightly reduced the hardware that was available for us to use. Furthermore, due to external circumstances, we were unable to use the Arduino UNO R3 which we originally intended to use as the controller board for our machine. ## Accomplishments that we're proud of Some accomplishments we are proud of include establishing communication between the laptop and the Bluetooth module (an early test before moving to the ArduDroid app, which we did not get fully working). By treating the Bluetooth module like another COM port, we were able to send keystroke data to the Arduino Uno to increase and decrease motor speed via the ESCs. In the same vein, we were able to successfully establish communication with the IMU after several failed attempts, and via formulae were able to process accelerometer and gyroscope data to find roll, pitch, and yaw data.
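The formulae referred to above are, in essence, the standard accelerometer tilt equations combined with gyro integration; a small Python sketch of that fusion is below. The 0.98/0.02 complementary-filter blend and the sample IMU values are typical placeholders, not numbers taken from this project.

```python
# Sketch of the tilt formulae: roll/pitch from the accelerometer, fused with the
# gyro rate in a complementary filter. alpha = 0.98 is a typical placeholder.
import math

def accel_angles(ax, ay, az):
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    return roll, pitch

def complementary(prev_angle, gyro_rate_dps, accel_angle, dt, alpha=0.98):
    # trust the integrated gyro in the short term, the accelerometer in the long term
    return alpha * (prev_angle + gyro_rate_dps * dt) + (1 - alpha) * accel_angle

roll_acc, _ = accel_angles(ax=0.02, ay=0.10, az=0.99)   # sample IMU reading
roll = complementary(prev_angle=0.0, gyro_rate_dps=1.5,
                     accel_angle=roll_acc, dt=0.01)
print(round(roll, 2))
```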
## What we learned We learned a lot about the AVR architecture and how to make use of an Arduino Uno’s pin configuration to meet certain design needs. For example, we needed to reserve certain analog pins for the data and clock lines of the inertial measurement unit (IMU) we used in our project. We also learned more about electronic speed controllers (ESCs) and how to control them via pulse width modulation (PWM). In general, we gained insight into incoming and outgoing board communication with components like Bluetooth modules, ESCs, and IMUs, as well as methods of data transfer like serial communication and the I2C protocol. More importantly, we learned how to better design under material constraints, given that we wanted to honour our initial budget. ## What's next for Flying Garbage Machine * Upgrading the chassis. Currently the chassis is not 3D-printed, and the CAD-to-assembly transition did not fully keep the integrity of the original design. * Finding better propellers. For now, we had to chip the tail propeller to make it smaller than the top propeller. * Seating the top propeller on a tiltable sheet. This way, the helicopter can actually move forward, backward, and lean in the direction of intended movement due to an imbalance in lift. * Designing a custom control app, and maybe swapping the Bluetooth module for a more powerful and faster means of communication.
winning
# 4Course: Revolutionizing Collaborative Learning 🏆 #### Inspiration Empowering students to excel together and innovate in the realm of education. #### What it does 🚀 4Course facilitates seamless collaboration among students, allowing them to share Google Docs links for class materials. Additionally, it offers a unique feature where students can collaborate on homework assignments for free. Using cutting-edge technology like Flow, ownership of projects can be transferred among classmates securely. Blockchain technology ensures the integrity of shared homework, preventing plagiarism by securing ownership. #### How we built it 💻 Utilizing Next.js for the frontend, Firebase for messaging, Cadence for smart contracts, and JavaScript for functionality. We leveraged AWS EC2 for initial messaging but transitioned to Firebase for scalability and efficiency. #### Challenges we ran into 🛠️ Integrating various technologies seamlessly, ensuring robust security measures, and optimizing performance were key challenges we encountered. #### Accomplishments that we're proud of Creating a platform that fosters collaboration, innovation, and academic integrity. Implementing cutting-edge technologies to deliver a user-friendly and secure experience. #### What we learned We gained valuable insights into integrating blockchain for security, optimizing frontend performance with Next.js, and effectively managing collaborative projects in a digital environment. #### What's next for 4Course 📈 Continued enhancements to user experience, integration of additional collaboration tools, expansion of platform features, and partnerships with educational institutions to promote collaborative learning on a larger scale.
## Inspiration Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades. ## What it does Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own. An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling. Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate. ## How we built it * **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details. * **Frontend:** We used React to create the application and Socket.IO to connect it to the backend. * **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com. ## Challenges we ran into Understanding and planning an architecture for the application. We went back and forth about if we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the functionality was also an issue we faced. ## Accomplishments that we're proud of We successfully were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO and were successfully able to use it in our project. ## What we learned This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React. ## What's next for Lecturely This product can be useful even past digital schooling as it can save schools money as they would not have to purchase supplies. Thus it could benefit from building out more features. Currently, Lecturely doesn’t support audio but it would be on our roadmap. Thus, classes would still need to have another software also running to handle the audio communication.
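Lecturely's real-time layer is Node.js + Socket.IO, but the core room-scoped broadcast can be illustrated with the Python `python-socketio` server as an analogue: strokes go out to everyone in the class's room except the sender. Event names and payload fields below are placeholders, not the project's actual protocol.

```python
# Python analogue of the Socket.IO whiteboard flow (the real backend is Node.js).
# Event names and payload fields are placeholders.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server (eventlet, gunicorn, ...)

@sio.event
def join_class(sid, data):
    sio.enter_room(sid, data["class_id"])   # one room per class session

@sio.event
def stroke(sid, data):
    # data: {"class_id": ..., "canvas_owner": ..., "points": [...], "color": ...}
    sio.emit("stroke", data, room=data["class_id"], skip_sid=sid)
```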
# Course Connection ## Inspiration College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However the vibrant campus life has recently become endangered as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life. ## What it does Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students. From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data. ## High-Level Tech Stack Our project is built on top of a couple key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing). ## How we built it ### Initial Setup Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Utilizing python we parse a transcript, sending the data to a node.js server which serves as a REST api point for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning different category classes for a probabilistic percentage of classes. Using this python library, we are able to generate robust and dense networks to test our graph connection score and visualization. ### Backend Infrastructure We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represented a students course identity. We wanted to take advantage of the web3 storage as this would allow students to permanently store their course identity to be easily accessed. We also made use of Firebase to store the dynamic nodes and connections between courses and classes. We distributed our workload across several servers. We utilized Nginx to deploy a production level python server that would perform the graph operations described below and a development level python server. 
We also had a Node.js server acting as a proxy and serving as a REST API endpoint, and Vercel hosted our front end. ### Graph Construction Treating the Firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we constructed a graph in Python using NetworkX, in which each person and course is a node with a type label “user” or “course” respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it. Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1. We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken. With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, whom they likely have a lot in common with and whom they may want to meet! ## Challenges we ran into We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly during development, as we had to manage all the necessary servers. In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graphs. ## Accomplishments that we're proud of We’re very proud of the graph component, both in its data structure and in its visual representation. ## What we learned It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js before. We were able to quickly ramp up to using it as we had React experience and were very happy with how easily it integrated with Vercel. ## What's next for Course Connections There are several different storyboards we would be interested in implementing for Course Connections. One would be course recommendation. We discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
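To make the heuristic half of that similarity score concrete, here is a compact NetworkX sketch of the bipartite user/course graph with recency weights and the recency-weighted overlap described above; the usernames, courses, and weights are made up, and the GAT embedding term is omitted.

```python
# Sketch of the recency-weighted overlap heuristic on a user/course graph.
# Users, courses, and recency weights are made up; the GAT term is omitted.
import networkx as nx

G = nx.Graph()
for user, course, recency in [("alice", "CS106B", 1.0), ("alice", "CS107", 0.8),
                              ("bob", "CS106B", 0.6), ("bob", "MATH51", 1.0)]:
    G.add_node(user, type="user")
    G.add_node(course, type="course")
    G.add_edge(user, course, weight=recency)

def heuristic_similarity(G, u, v):
    cu = {c: G[u][c]["weight"] for c in G.neighbors(u)}
    cv = {c: G[v][c]["weight"] for c in G.neighbors(v)}
    shared = sum(min(cu[c], cv[c]) for c in cu.keys() & cv.keys())
    union = sum(max(cu.get(c, 0.0), cv.get(c, 0.0)) for c in cu.keys() | cv.keys())
    return shared / union if union else 0.0

print(heuristic_similarity(G, "alice", "bob"))
```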
partial
# Flare ## A mobile app that helps students stay safe by mapping out crimes near campuses. ### Inspiration As a team of USC students, we were shocked when we arrived at TreeHacks to see hundreds of hackers’ laptops and backpacks just lying around Huang unattended with no security guards checking the doors. Thankfully, the surrounding Stanford community does not tend to cause any issues, but things may have played out differently in another city. University Park, the home of USC, has one of the highest property crime rates in all of Los Angeles, ranking 8th out of 209 neighborhoods. It is nearly impossible to walk more than a block around campus without spotting the lonely remnants of a stolen bike. This pattern repeats not only around USC, but at the home schools of countless other hackers. New college students, as well as other groups unfamiliar with the area (e.g. tourists, visitors, expats), represent some of the most vulnerable groups to theft and robberies. Becoming a victim does not stop at losing your belongings: sexual assault is notoriously known to be a serious problem on college campuses. Our hack aims to help people in these situations, as well as provide valuable information to share with those who serve to protect us. ### What Flare does Flare is a mobile app that helps students stay safe within their communities by mapping out crimes around their college campus. Crime pockets can arise within specific areas due to geographical factors (such as location, lighting, police patrol schedules), and Flare helps students identify these areas. Flare shows students that sometimes it’s not worth it to take a shortcut through that alleyway, or to leave your car a block away for free parking. After all, it only takes one bad incident to change the course of your entire life. Currently, Flare provides crime mapping services to four college campuses in the Los Angeles area: USC, UCLA, CSUN, and CSULA. Although other crime mapping solutions exist, they fall into one of two categories: **1)** [report-by-report crime maps](https://www.crimereports.com/) with informative, yet extremely cluttered, interfaces, or **2)** neighborhood-generalizing maps used to [inform real estate investors](https://www.trulia.com/real_estate/Menlo_Park-California/crime/) or [encourage sales of home security systems](https://www.adt.com/crime). Flare combines the best of both worlds, using intuitive design cues to present data meaningfully without sacrificing transparency. ### How we built it Flare was built as an iOS app using Xcode and Swift. We used [Esri’s ArcGIS Runtime SDK for iOS](https://developers.arcgis.com/ios/latest/) to display the maps of college campus neighborhoods. In order to get our crime statistics (time, location, description, etc.) we used the [LA Times’ crime maps](http://maps.latimes.com/crime/). We had to do some hacking and front-end processing magic in order to use their data, because not only is their API not public, but their response isn’t plain JSON: it’s a JS object literal encoded as a string. This meant that no JSON parser could parse their payload, because it wasn’t actually a legitimate JSON object. To get around that we did some magic with Swift’s JSContext class by hard-coding a JS function, calling it using Swift’s JS engine, and then getting the object returned as an array of Any objects. We then had to map this to a JSON array and from there decode it into our Crime models.
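The same workaround can be illustrated in Python: a JS object literal (unquoted keys, single quotes, trailing commas) fails strict JSON parsing but is accepted by a JSON5 parser, which plays the role the hard-coded JS function played in Swift. The sample payload below is made up, not the LA Times' actual response format.

```python
# Python illustration of parsing a JS-style object literal that strict JSON rejects.
# The sample payload is made up; json5 stands in for the Swift/JSContext trick.
import json5

raw = "{crimes: [{type: 'THEFT', lat: 34.0224, lon: -118.2851, date: '2018-02-10'},]}"
data = json5.loads(raw)          # tolerant parse of the JS-style literal
for crime in data["crimes"]:
    print(crime["type"], crime["lat"], crime["lon"])
```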
After we were able to get the data from the LA Times, the rest was fairly straightforward. We found the neighbourhoods containing and surrounding the schools we chose for this demo -- USC, UCLA, CSULA, CSUN -- and then loaded those crime statistics on launch. We also used Apple’s location services to detect the real-time threat level based on the user's location, using crime statistics from that month, in order to convey to the user what type of danger they are in or whether they should be worried at all. ### Challenges we ran into Due to all of the different policing agencies at play and the lack of publicly available datasets, we initially had trouble finding data to work with. Our first idea was to use incident reports sent via email from campus police to students; however, the low volume of data points and inconsistencies within the data made them a poor choice. Fortunately, we were able to stumble upon the Los Angeles Times’ *[Mapping L.A. API](http://maps.latimes.com/about)*, which sources directly from the [LAPD](http://www.lapdonline.org/) and the [LA County Sheriff’s Department](http://www.lasd.org/). Although this limited the scope of our project to just Los Angeles County, the high quality of the data and ease of use (after jumping over some hurdles) made it a no-brainer to use in our hack. We also ran into minor technical challenges, such as having to use a JS-specific method in Swift. However, thanks to the great online community at Stack Overflow we were able to find solutions pretty quickly. ### Accomplishments that we’re proud of We’re very proud of the quality of our app given how quickly we had to build it out. We were able to build an almost full-featured app with a fleshed-out design that not only has a great UI/UX but also is very useful for students across the US, all in less than a day. ### What’s next for Flare * Collaboration with campus security agencies * Expansion into non-college markets, other cities in California, and beyond * Normalize data from all different databases, so we can offer this product/service to every campus ### Built with Swift, ArcGIS iOS SDK, Reactive Cocoa, Alamofire, JavaScript, MapKit, CocoaPods, Carthage, Xcode
## 📞 The problem at hand. How might we leverage an alert agent to ensure **24/7 student safety** from school shootings? ## 🗝️ Where did this stem from? **School shootings. There are 2 per week**…and this number is NOT decreasing. Students are now worried about their safety but have no way to indicate potential danger. Just two days ago, a suspected shootout at UPenn left witnesses terrified, as suspects made sounds resembling gunshots while exiting. This ongoing sense of fear and uncertainty is unacceptable, especially when lives are at risk. ## 💻 What does Watchful.AI do? 1. **Instant alert** of weapons and suspicious activity, with the versatility and intelligence of GPT-4o 2. Search for and track suspicious activity through video semantic search 3. Monitor multiple video feeds and visualize the position of threats in real time 4. **Alert local security** to resolve suspicious activity ## How does it work? GPT-4o is used in harmony with its distant cousin, CLIP, to progressively distill gigabytes of simultaneous video footage into key anomalies and threat events, which can be efficiently actioned on by a human in the loop. Utilizing CLIP also has the added benefit of enabling semantic search over CCTV footage, so officers can spend less time scrolling through reels of video footage. ## 🛠️ How we built it **UI/UX Design** We used Figma and FigJam to lay out the user flow, using research from notable news outlets like The Daily Pennsylvanian and CNN to support our visual designs and design system with data. **Frontend** Next.js, ShadCN, Tailwind CSS, MappedIn **Backend** CLIP (PyTorch + Mac M2), GPT-4o, ChromaDB, FastAPI, OpenCV ## 🕵🏼 Challenges we ran into A challenge we ran into was collecting data and mapping out the Penn Engineering Building manually. Many tools we wanted to use were unavailable, but we ended up contacting Mappedin and learning their software to create this crucial component of our product. Another significant challenge was utilizing multithreading to process multiple video streams at a time on a single device. The backend has to ingest, embed, analyze, and serve huge amounts of data. ## ✔️ Accomplishments that we're proud of We are proud to have built an impactful and well fleshed-out product in 36 hours! We were able to combine the knowledge of both our frontend team and backend team to work to our strengths and produce a fully functional tool that can be used to prevent an urgent and widespread issue – school shootings. We were also proud to implement a new technique for cheap but accurate inference on large amounts of video data. Obviously calling GPT-4 for every frame of multiple video streams would be highly cumbersome, but by prescreening with embeddings (which run for free on local GPUs), then checking with GPT-4o, we get the accessibility of on-device AI with the power of huge hosted models. ## ☕️ What we learned We learned that many students are anxious about school shootings in the US, considering there’s an average of 2 shootings per week. Surprisingly, there aren’t any publicly available tools to alert citizens about potential shootings accurately, which is quite concerning. We also learned that we are able to create the tool we wished existed to ensure our safety. Learning how to use Mappedin was something new we picked up along the way. ## ⏭ What's next for Watchful.AI We strive to implement Watchful.AI in every security system in the US to watch for potential shootouts and suspicious activity.
We want to further develop the accuracy of our cameras and description search to ensure safety of not just students, but all citizens in the US.
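A hedged sketch of the prescreening idea described above: CLIP scores each frame locally against a couple of threat/benign text prompts, and only frames whose threat score clears a threshold would be escalated to GPT-4o and a human reviewer. The model checkpoint, prompts, threshold, and frame path are illustrative assumptions, not the production values.

```python
# Hedged sketch of CLIP-based prescreening before any GPT-4o call.
# Checkpoint, prompts, threshold, and frame path are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
PROMPTS = ["a person holding a gun", "people walking normally in a hallway"]

def threat_score(frame: Image.Image) -> float:
    inputs = processor(text=PROMPTS, images=frame, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return probs[0, 0].item()   # probability mass on the threat prompt

frame = Image.open("frame_000123.jpg")      # placeholder CCTV frame
if threat_score(frame) > 0.6:               # only then spend a GPT-4o call
    print("escalate frame to GPT-4o + human review")
```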
## Inspiration The beauty of a hackathon revolves around creating something uniquely powerful and innovative. But a whole segment of the population disproportionately misses out on this experience. This isn't because they don't want to, or because they are uninterested, but because they can't or, even worse, we don't allow them to. I am referring to the elderly who, despite the continuous cycle of invention we live in today, find themselves getting increasingly left behind. The older you get, the harder it is to find new employment if you lose your job, because of the steep technological learning curve. As we get older, our vision worsens, our motor function becomes limited, and our memory starts to fragment. Technology can solve these problems, and many amazing developers and teams try, but sometimes there isn't enough attention dedicated to this issue. This is why we wanted to focus on the elderly, and this is why we decided to make our project about their healthcare. ## What it does Our app, **PictoPill**, identifies prescription drugs through the barcode on the bottle/box and, with the information that comes with it, becomes a tool to aid older people with their medication. Once a label is scanned, an automatic dosage schedule is created with notifications to remind the user when to take their daily prescription. A simple UI easily lets the user know what medications they are on and the schedules for them. ## How we built it **PictoPill** is an Android app built with Firebase and ARCore. In order to do so, we researched and implemented solutions in multiple different APIs, including: DailyMed (drug code lookup), GoodRx, the Room persistence library (local data storage), Firebase ML Kit (image processing), Camera2 (better runtime and memory usage), and Google ARCore (AR). ## Challenges we ran into Our workflow and version control were the greatest challenges that we faced during the development of this hack. Since one of our team members had never used git before, we helped him set it up, but not before he began working in a project directory different from that of our repository. Since he was also the one with the most Android experience out of all of us, this meant that we ended up with two halves of an app that needed to be adapted to work with each other. This added substantial time and difficulty to our project which could have been avoided. Working with AR and databases also proved to be a lot more challenging than expected. We know this is a lot of reading, but some good news next: ## Accomplishments that we're proud of Victories are personal. So, in order of first names, here is our team to explain what we're individually proud of accomplishing. *Cesar*: This is my first hackathon. I came in with no knowledge of coding, without any friends, without a team, and without an idea. After a little more than a day, I have all four, including a rudimentary knowledge of .xml and Android app development, and a personal mission. *Karan*: Being able to develop a full backend and a working persistent database system from one very undocumented set of information. Downloading and processing the DailyMed data was a time-consuming process, and learning how to build a Room database backend made the results much cleaner. *Nitin*: Managed to understand and apply complex APIs to create a powerful project. Responsible for setting up the ARCore, Camera, and Firebase Vision APIs. Learned how to enable these APIs to interact with each other smoothly, even though their interfaces were not originally configured to work together. Other back-end work included setting up methods to pass image sensor data straight to Firebase so that it could be parsed and then projected using ARCore. *Robert*: Laying out the foundation for the database and trying to help make the two parts of our project interface in a meaningful way. ## What we learned We learned how to work with Android Studio in app development and we gained practical experience with databases and datasets. We also gained practical knowledge of integrating APIs and learned of the complexities that come with working with augmented reality. On a more technical note, we experimented a lot with software and languages that were new to some of us, like Firebase and Google ARCore. ## What's next for PictoPill Aside from fully developing the augmented reality and scheduling capabilities, an important part of this app is accessibility, so we chose a color scheme that was AAA and AA compliant. However, we believe we can do better, so through user testing, we hope to create a set of themes where the user can select their preferred colors, all of which have a good contrast ratio to make using the app easier for the visually impaired. Along those lines, we want to have options for people with different forms of color blindness, like protanopia, deuteranopia, and more. Finally, we want to fully leverage the set of features that are already available for app developers, like options to adjust font sizes and spoken narration, in a way that still makes the app easy to use. We have some other ideas that go beyond the main scope of the app, but these ideas completely depend on the feedback from the community we are trying to help. If we have the potential to truly help those with accessibility needs, then that is enough of a reason to keep moving forward and thinking bigger. We may also change our name :)
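For illustration, the scheduling step described under "What it does" could look roughly like the sketch below. This is not PictoPill's actual Android code; the `Prescription` fields, the waking-hours window, and the example drug are assumptions made purely to show the idea.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Prescription:
    # Fields assumed for illustration; real label data would come from the
    # DailyMed lookup that follows the barcode scan.
    name: str
    doses_per_day: int

def dosage_schedule(rx: Prescription, wake=8, sleep=22):
    """Spread the daily doses evenly across waking hours (08:00-22:00 assumed)."""
    window = sleep - wake
    step = window / max(rx.doses_per_day, 1)
    today = datetime.now().replace(minute=0, second=0, microsecond=0)
    return [today.replace(hour=wake) + timedelta(hours=round(i * step))
            for i in range(rx.doses_per_day)]

for t in dosage_schedule(Prescription("Metformin 500 mg", 3)):
    print(t.strftime("%H:%M"))  # e.g. 08:00, 13:00, 17:00, then fed to notifications
```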
partial
## What it does The app loads virtual pellets and ghosts in a small area around the Kaiser building, which a player can collect by running around with their phone. ## How we built it We used the Google Maps API for Android inside an Android application, using the GPS sensors for the player's location data. ## Accomplishments that we're proud of Since we can't directly pull the walkway paths from Google Maps API, we traced the paths with Google My Maps then exported them. We wrote a small application to format the exported data as Java code to copy into the program. ## What we learned We were all inexperienced with Android development and learned as we went to put together the app.
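The small formatting application mentioned under "Accomplishments" can be sketched roughly as below: read the coordinates out of the KML exported from Google My Maps and emit a Java array literal to paste into the Android project. The file name and the Java variable name are made up for illustration.

```python
import xml.etree.ElementTree as ET

def kml_to_java(path: str, var_name: str = "WALKWAY_PATH") -> str:
    """Read every <coordinates> block in a KML file and emit a Java double[][] literal."""
    root = ET.parse(path).getroot()
    points = []
    for node in root.iter("{http://www.opengis.net/kml/2.2}coordinates"):
        # KML stores "lon,lat[,alt]" tuples separated by whitespace.
        for triple in node.text.split():
            lon, lat, *_ = triple.split(",")
            points.append(f"{{{lat}, {lon}}}")
    body = ",\n    ".join(points)
    return f"double[][] {var_name} = {{\n    {body}\n}};"

# print(kml_to_java("kaiser_paths.kml"))  # paste the output into the Android project
```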
## Inspiration As students, we know all too well the struggle of having to study for our classes by tackling a seemingly never-ending mountain of textbooks and other course notes. This studying method frequently leads to information going in one ear and out the other, leaving unfortunate students at a disadvantage. Textbooks and course notes are also unfriendly to those who are visually impaired, putting them at a strong disadvantage in their education. At Hack the 6ix, our team recognized the need for a tool to help make knowledge more accessible and engaging, so we developed **QuizCaster**. ## What it Does **QuizCaster** is an innovative and intelligent personal quiz assistant that transforms static sources like PDFs, links, and YouTube videos into dynamic learning experiences. Our platform utilizes the cutting-edge capabilities of GPT-4 to generate concise summaries of the provided content, condensing the essential information into a format that is easy to comprehend. Additionally, the powerful GPT-4 engine is used to craft relevant and thought-provoking quiz questions based on the content provided. This allows the user to not only review the source contents but truly learn them through the interactive quiz. Each source also has its own task ID, allowing users to re-review information they have entered before, or share it with their friends. However, what sets **QuizCaster** apart is its commitment to accessibility. We have incorporated voice commands and other audio features to cater to visually impaired users, enabling them to interact with the platform effortlessly. This integration ensures that learning is not limited by traditional barriers, making knowledge accessible to a wider audience. ## How we built it We employed GPT-4's advanced natural language processing capabilities to generate both summaries and quiz questions. The integration of voice commands and audio input was achieved using Microsoft Azure's speech synthesis service and OpenAI's Whisper model. The user interface was created using React and was designed to be intuitive and user-friendly, ensuring a seamless experience for all users. For the technical implementation, we used a mix of programming languages, including Python for the backend and JavaScript for the front end. We used web scraping techniques to extract content from URLs, and PDF parsing libraries to extract data from PDFs. ## Challenges we ran into One challenge we encountered while developing **QuizCaster** was during the extraction and processing of information: it was difficult to prepare the raw text so that it read well without making the user wait a long while. To fix this issue, we developed our own **Naïve Bayes classifier** model that scans raw text and inserts punctuation to clean it up. The model was trained on over 7.8 million sentences from Wikipedia, which allowed us to make it very accurate. ## Accomplishments that we're proud of As a team, we worked together and stayed focused to complete the development of **QuizCaster**. By dividing the project among us, we completed our goals efficiently and on time. We are immensely proud of what we've achieved with **QuizCaster** within the timeframe of Hack the 6ix. ## What we learned We gained valuable insights into speech and audio synthesis, machine learning models, accessibility considerations, and the intricacies of training AI models like GPT-4. ## What's next for QuizCaster * Support for more information sources (photos, ebooks, etc.)
* More accessibility features * Other options for the quiz, such as flashcards
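To make the punctuation-restoration idea from the Challenges section concrete, here is a toy sketch of a Naïve Bayes punctuation classifier: it predicts, for each word, whether a period should follow it, using neighbouring words as features. It uses scikit-learn and a three-sentence corpus purely for illustration; it is not the actual model trained on 7.8 million Wikipedia sentences.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def features(words, i):
    # Context window around the current word; <s>/</s> pad the edges.
    return {"cur": words[i].lower(),
            "prev": words[i - 1].lower() if i > 0 else "<s>",
            "next": words[i + 1].lower() if i + 1 < len(words) else "</s>"}

def training_pairs(sentences):
    words = [w for s in sentences for w in s.split()]
    ends, pos = set(), -1
    for s in sentences:           # mark the index of each sentence-final word
        pos += len(s.split())
        ends.add(pos)
    X = [features(words, i) for i in range(len(words))]
    y = [1 if i in ends else 0 for i in range(len(words))]  # 1 = period follows
    return X, y

# Toy corpus; the real model was trained on millions of Wikipedia sentences.
corpus = ["the model reads raw text", "it inserts missing punctuation",
          "then the cleaned text is summarised"]
clf = make_pipeline(DictVectorizer(), MultinomialNB()).fit(*training_pairs(corpus))

raw = "the model reads raw text it inserts missing punctuation".split()
preds = clf.predict([features(raw, i) for i in range(len(raw))])
print(" ".join(w + ("." if p else "") for w, p in zip(raw, preds)))
```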
## Inspiration The vicarious experiences of friends, and some of our own, immediately made clear the potential benefit to public safety that the City of London's dataset provides. We felt inspired to use our skills to make this data more accessible and to improve confidence for those travelling alone at night. ## What it does By factoring in the locations of street lights, and the greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. Guiding people along routes where they will avoid unlit areas, and are likely to walk beside other well-meaning citizens, the application can instill confidence in travellers and positively impact public safety. ## How we built it There were three main tasks in our build. 1) Frontend: Chosen for its flexibility and API availability, ReactJS was used to create a mobile-to-desktop scaling UI. Making heavy use of the customization and data presentation available in the Google Maps API, we were able to achieve a cohesive colour theme and clearly present ideal routes and streetlight density. 2) Backend: We used Flask with Python to create a backend that serves as a proxy for connecting to the Google Maps Directions API and ranking the safety of each route. This was done because we had more experience as a team with Python, and we believed the data processing would be easier in Python. 3) Data Processing: After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the "safest" route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing each light in the geofence. Then, we determine the total number of lights per km to calculate an approximate safety rating. ## Challenges we ran into: 1) Frontend/Backend Connection: Connecting the frontend and backend of our project together via a RESTful API was a challenge. It took some time because we had no experience using CORS with a Flask API. 2) React Framework: None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework and developed the tools to tackle front-end development. Once concepts were learned, however, it was very simple to refine them. 3) Data Processing Algorithms: It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could develop a graph with weighted edges to determine the safest path. Edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach. ## Accomplishments that we are proud of Throughout our experience at Hack Western, although we encountered challenges, through dedication and perseverance we made multiple accomplishments. As a whole, the team was proud of the technical skills developed while learning to deal with the React framework, data analysis, and web development. In addition, the levels of teamwork, organization, and enjoyment/team spirit reached in order to complete the project in a timely manner were great achievements. From the perspective of the hack itself, and given our limited knowledge of the React framework, we were proud of the sleek UI design that we created. In addition, the overall system design lent itself well towards algorithm protection and process off-loading by utilizing separate back-end and front-end components. Overall, although a challenging experience, the hackathon allowed the team to reach new heights. ## What we learned For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web-based design skills by building both a frontend and a backend while also using external APIs. ## What's next for safewalk.io In the future, we would like to add more safety factors to safewalk.io, such as crime rate, pedestrian accident rate, traffic density, and road type.
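A rough sketch of the lights-per-km scoring described in the Data Processing step: split the route into segments, count the street lights that fall inside a small corridor (the geofence) around each segment, and divide by the route length. The corridor width and the coordinates below are placeholders, not the values used in safeWalk.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def lights_per_km(route, lights, corridor_km=0.03):
    """Approximate safety rating: street lights per km along the route."""
    total_km = sum(haversine_km(route[i], route[i + 1]) for i in range(len(route) - 1))
    near = set()
    for i in range(len(route) - 1):
        # Circular geofence around each segment's midpoint, padded by the corridor.
        mid = ((route[i][0] + route[i + 1][0]) / 2, (route[i][1] + route[i + 1][1]) / 2)
        reach = haversine_km(route[i], route[i + 1]) / 2 + corridor_km
        near |= {j for j, light in enumerate(lights) if haversine_km(mid, light) <= reach}
    return len(near) / total_km if total_km else 0.0

# Placeholder coordinates roughly in London, Ontario.
route = [(42.984, -81.247), (42.986, -81.245), (42.988, -81.243)]
lights = [(42.985, -81.246), (42.987, -81.244), (42.990, -81.240)]
print(f"{lights_per_km(route, lights):.1f} lights/km")
```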
losing
## Inspiration Customer reward loyalty programs are becoming more and more centralized, with reduced use of paper coupons that could be traded among people and the creation of more app-based claiming platforms (think Starbucks, Chipotle, Snackpass). In addition, with the pandemic, customers and the businesses they visit are losing out on experiences and sales, respectively. Bazaar aims to bridge this gap by empowering customers and businesses. ## What it does * Basics The platform itself is hosted on the Terra blockchain, using reward "tokens" (Bazaars, BAZ) that are minted at 1% of any user purchase from a local business made with their Terra wallet. At certain timeframes or purchase milestones at a business, users may also receive a redeemable coupon for that specific business. Users are able to redeem Bazaar tokens for coupons as well. Using GPS + a shared QR code, the more people purchasing from a company at the same time in a group, the more BAZ will be minted and distributed among the group. * Walkthrough of Minting: * User links wallet in app * User buys item from store with UST * UST sent through our contract * Contract mints BAZ to User's wallet based on how much they spent * Contract passes UST to restaurant's wallet * User-Powered Marketplace Users are able to sell coupons for Bazaar tokens to other users through the Marketplace. This allows coupons to still be redeemed at businesses, bringing traffic and preventing loss of sales. ## How we built it Our UI is built with Android Studio + Java. The tokens are minted and executed through a smart contract on the Terra blockchain using their Python SDK. ## Challenges we ran into * Figuring out how to use Terra, which SDK to use (Python versus JavaScript), and learning how to write and interpret smart contracts/tokens in Rust. * Combining the Python backend with the Android app (by creating a Flask server and using Volley/Retrofit to pass HTTP requests between the two). ## Accomplishments that we're proud of We are proud of learning how to develop Android apps without having any prior experience. We are also extremely proud of minting our own token and creating a smart contract on the Terra blockchain, which is a very new platform with limited documentation. ## What we learned * Learning a new SDK within a couple of hours is pretty difficult. * Proper documentation is essential when using a new framework. If the documentation does not detail every step needed, it is very hard to reproduce. ## What's next for Bazaar
## Inspiration When the word “police” is typed into the Google search bar, the third suggestion to come up is police brutality. While tragic, this is unsurprising to most of us. After seeing a few hashtags on Twitter, we became numb to the pain caused by the disconnect between civilians and the police. It’s worth noting that police brutality is a phrase burned into our brains, but driver uncooperativeness during traffic stops is not. The risks associated with traffic stops affect both law enforcement officers and drivers, and misunderstandings have the ability to result in irreversible consequences amidst the tension. As a law enforcement officer, the risk of walking up to a foreign car on the side of a busy road is gargantuan. As a driver, being pulled over is nerve-wracking. It’s foreign and uncomfortable – especially if a driver is unprepared. However, it is also the duty of every driver to be prepared; to be aware of the rules, to be aware of their rights, and to be aware of how to appropriately handle a traffic stop. Enter GoCalmly. We created an application that would encourage drivers to meet law enforcement officers in the middle – to support more fair and respectful exchanges during traffic stops. After some research, we identified that the two user groups most impacted by this app would be teen drivers and young male African American drivers. The statistics on teenage accidents are astonishing, and it is no surprise that newer drivers are less likely to be familiar with road rules and have less knowledge on how to interact with law enforcement in a productive way. The correlation between African American drivers being stopped and searched is exponentially higher than that of any other race in America, and we knew that our app had a massive potential to alleviate tension within America’s black community. ## What it does GoCalmly is a hands-free tool that educates the driver on how to de-escalate the situation in the event of a traffic stop. The program runs through a short dialogue with the user when a traffic stop environment is detected. This event is recognized using the IBM Watson API to establish the presence of police vehicles using computer vision. Further triggers of the traffic stop environment are the distinct frequency of police officer sirens and accelerometer measurements that detect the car slowing down. For the use case of a teen driver, we thought of our own personal experiences as teenage drivers and remembered how our driving emotion was fear in a traffic stop situation. We decided to combat fear with empathy and love. Our application features the ability to pre-record a message from a loved one, such as a parent, to ease the driver through the process of safely executing a traffic stop. If the user chooses not to pre-record a message, then the Watson API uses text-to-speech recognition to give the driver tips encouraging cooperation, deep breaths, and some basic driver’s rights for the law enforcement encounter. Once the vehicle is stopped, the app has stored copies of user documents such as their driver’s license and registration in the event that they are unable to find those identification documents during the traffic stop. While this probably won’t help the driver avoid a citation, it helps the encounter. The application begins recording audio that can be reviewed following the traffic stop as evidence for unlawful treatment or simply to remember the citations issued during the stop. 
Furthermore, in the case of the driver becoming unable to contact a loved one, the app also features an automated text creator to the predetermined emergency contact so that others are informed of the driver's location and situation. Following the traffic stop, the application uses speech-recognition to pick out keywords from the conversation that point the user to next steps such as paying a citation online, going to traffic school, or filing further legal complaints. ## Challenges we ran into We had no knowledge of the African American driver persona and with the time constraint, had little to no way of attempting to do nearly enough human-to-human research to fully empathize with the community and the main pain points. Since so much of the tension and trauma within this specific use case is deeply emotionally driven- we knew actually talking to people was the only way to determine every single pain point and that articles and statistics could only get us so far. ## Accomplishments that we're proud of We’re very happy that we’re bringing awareness to police brutality in a less biased light. The simple fix of drivers being informed and cooperating would make that many traffic stops smoother and less scary and the days of that many law enforcement officers a little safer. We’re proud of the integration of many different features and APIs, and we are excited by the brand we were able to create through the UI/UX. ## What we learned Ways to improve our own safety in the event of traffic stops. By researching the traffic stop from the point-of-view of police officers as well as new drivers, we could better understand the needs of our user group. Through our research, we also unearthed the most comprehensive list of reminders and rights for drivers to know when in a traffic stop and determined five different “next steps” paths for a teen driver after they have been through a traffic stop. This was so enlightening for our team because we learned several driver’s rights we were previously unaware of and helpful behaviors that we will now aim to implement if in this position in the future. We hope our application can have the same eye-opening effect on others. ## What's next for GoCalmly How might we create an insightful user persona for young African American males? How might we create an inexpensive and user-friendly hands-free device that could be integrated into the car’s dashboard?
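A hedged sketch of how the three trigger cues described under "What it does" (police-vehicle detection, siren frequency, deceleration) could be fused before the app starts its dialogue. The thresholds and the two-of-three rule are illustrative assumptions, not GoCalmly's tuned logic.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    police_vehicle_confidence: float  # from the image classifier, 0..1
    siren_detected: bool              # from the audio frequency check
    decel_ms2: float                  # longitudinal deceleration, m/s^2

def traffic_stop_detected(s: Signals) -> bool:
    """Require at least two of the three cues to agree before triggering."""
    cues = [
        s.police_vehicle_confidence >= 0.7,   # assumed confidence threshold
        s.siren_detected,
        s.decel_ms2 >= 2.0,                   # assumed "pulling over" deceleration
    ]
    return sum(cues) >= 2

if traffic_stop_detected(Signals(0.85, True, 1.2)):
    print("Start recording and play the pre-recorded calming message")
```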
## Inspiration Small businesses have suffered throughout the COVID-19 pandemic. In order to help them get back on track once life returns to normal, this app can attract new and loyal customers alike to their restaurants. ## What it does Businesses can sign up and host their restaurant online, where users can search them up, follow them, and scroll around to see their items. Owners can also offer virtual coupons to attract more customers, or users can buy each other food vouchers that can be redeemed the next time they visit the store. ## How we built it The webapp was built using Flask and Google's Firebase for the backend. Multiple Flask extensions and Python modules were used, such as flask_login, flask_bcrypt, pyrebase, and more. HTML/CSS with Jinja2 and Bootstrap were used for the View (the structure of the code followed an MVC model). ## Challenges we ran into -Restructuring the project: sometime during Saturday, we had to restructure the whole project because we ran into a circular dependency, so the whole structure of the code changed, forcing us to learn a new way of deploying it. -Many 'NoneType object is not subscriptable' and attribute errors: getting data from our Firebase realtime database proved to be quite difficult at times, because there were many branches, and each time we tried to retrieve values we ran the risk of getting this error. Depending on the type of user, the structure of the database changes, but the users are similarly related (Business inherits from Users), so sometimes during login/registration the user type wouldn't be known properly, leading to NoneType object errors. -Having different pages for each type of user: this was not as much of a challenge as the other two, thanks to the help of Jinja2. However, due to the different pages for different users, sometimes the errors would return (like names returning as None, because the user types would be different). ## Accomplishments that we're proud of -Having a functional search and login/registration system -Implementing encryption for user passwords -Implementing dynamic URLs that show different pages based on Business/User type -Allowing businesses to add items to their menu, and uploading them to Firebase -Fully incorporating our data and object structures in Firebase ## What we learned Every accomplishment is something we have learned. These are things we hadn't implemented before in our projects. We learned how to use Firebase with Python, and how to use Flask with all of its other mini modules. ## What's next for foodtalk Due to time constraints, we still have to implement businesses being able to create their own posts. The coupon voucher and gift receipt systems have yet to be implemented, and there could be more customization for users and businesses to put on their profiles, like profile pictures and biographies.
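The 'NoneType object is not subscriptable' errors described above usually come from indexing straight into a Firebase read that returned nothing. A minimal sketch of the defensive pattern with Pyrebase is below; the database branches, config values, and field names are placeholders rather than foodtalk's real schema.

```python
import pyrebase

config = {  # placeholder project credentials
    "apiKey": "...", "authDomain": "...", "databaseURL": "...", "storageBucket": "...",
}
db = pyrebase.initialize_app(config).database()

def get_user_profile(uid: str, user_type: str) -> dict:
    """Return the profile dict, or {} if the branch doesn't exist yet."""
    # Business and User records live under different branches, so guard every step.
    branch = "businesses" if user_type == "business" else "users"
    snapshot = db.child(branch).child(uid).get()
    data = snapshot.val()          # None if the node is missing
    return data if isinstance(data, dict) else {}

profile = get_user_profile("abc123", "business")
print(profile.get("name", "unknown"))  # .get() instead of [] also avoids KeyErrors
```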
partial
## Inspiration There are two fundamental issues we want to address: a) Research is not generally accessible to the public. Most people don't want to spend hours trying to decipher jargon and figure out why a paper is important. b) Presenting research and making it accessible is a very tedious task. Authors have to compile their research into a presentation, figure out what key points to highlight, and what to present to make it understandable to those without industry knowledge. Plus – no one wants an ugly presentation, so significant time is spent on design too. Put together, this means that researchers spend a lot of time building presentations (our empirical survey found researchers spent ~5.8 hours on average building presentations) if they want to be able to present them to the general public. Yet, research is incredibly valuable in driving forward innovation, ensuring people understand what is happening in society, and helping inspire and educate students currently in school who will become future scientists and leaders. It's not enough for research to simply live in the bubble of academia -- the wider public (who are all impacted by research findings) need to be engaged, and there needs to be an easier way of doing that. ## What it does The current ways that people create slides: 1. Slidesgo / SlidesCarnival: Tools like Slidesgo and SlidesCarnival only provide templates, rather than content creation. It is very time consuming to add content and design a presentation. 2. Tome.ai / Gamma: These existing AI-powered slide generation tools do not have dynamic content creation and placement. They produce slides that look fragmented and do not flow as a full presentation. And neither is suitable for research presentations. Our system takes a URL to the PDF of a research paper, retrieves the paper, and summarizes it to compile a structured presentation of the key background, contributions, and results of the study. From there, we organize the data and assign it into logically-ordered slides broken down by purpose, and dynamically render visual elements and text onto a Canva presentation. All of this is completed in under 60 seconds, whereas most people don't even have a title slide created in that time! ## How we built it We built this application on top of the Canva SDK in TypeScript, using Bun as our runtime to speed up the backend. To process the PDF, we built an API endpoint using Together AI's hosted Mixtral 8x7B model and Python to provide cleaned text. From there, we stitched together multiple layers of LLM agents in order to analyze the research paper, extract the relevant information, convert it into slides, and organize the slides and content into Canva objects that are properly spaced and positioned. We also utilized Canva's image library to add and implement dynamic styling of elements based on a user-selected theme. ## Challenges we ran into Learning to use the Canva SDK was quite challenging, especially since it was recently released, so the documentation describing its functionality and nuances was not as extensive. It was also very challenging and time consuming to find the most optimal LLM pipeline setup that ensured that any content we had was: * Relevant and concise * Accurate, with minimized hallucinations. Finally, figuring out how to best position the content on a slide in a way that can be generalized to all research papers was a major challenge.
We attempted to build out a system to generate more advanced / complex elements and layouts on the fly using LLMs, but were not able to complete this in time. ## Accomplishments that we're proud of We're very proud of the pipeline we built — it's quite robust, and we're able to parse essentially any research paper from any journal across any domain. This is no small feat, considering that there are libraries built specifically to parse certain journal articles from specific publications. We're also quite proud of how coherent the content is – it's output in a very understandable manner that is simple and accurate. Finally, we're very proud of the interface we built. A big part of our focus was on simplicity – unlike traditional design tools like Figma or Adobe XD, we aren't catering our tool towards designers, but rather towards researchers, who frequently have minimal if any design experience. Therefore, we focused on minimizing the complexity and failure cases that researchers would potentially face in the process, to make our tool as easy to use as possible. We're excited to be working on a big problem that has the potential to change the way research is being communicated if our solution works -- the "no-code" design market is ~60B USD (Canva's TAM is ~40B). Universities alone spend ~20B USD on software tools. We estimate that the demand for good design tools at research institutions is at least ~3B USD -- given that there's no other tool on the market doing this, we'd be positioned for significant market capture, and that's before we expand beyond just research. ## What we learned We learned a lot about working with Canva's SDK, which we found quite impressive. We believe there's a lot of potential for AI to be integrated more tightly into the Canva ecosystem through its SDK to supercharge how people design presentations and event posters. Furthermore, given that accuracy is very important in research contexts, we learned about techniques to minimize LLM hallucination — reducing the temperature parameter of models, and breaking tasks into smaller subtasks that can be processed iteratively. Finally, through the process, we learned how important it was to effectively communicate and delegate tasks – at first, we had multiple team members working on similar sub-projects because, while we were all clear on the components we needed to build to make this project work, there was a bit of miscommunication as to the best way to divide up the necessary components to develop, given that a lot of components have significant interdependency. ## What's next for Kanga We plan on applying for the Canva Innovation Fund to further develop this project. We'd also like to refine the various stages of this process. While in 36 hours we managed to build out an arguably-impressive prototype, there's a lot of room for improvement, especially in terms of designing more presentation-ready papers.
Some features we'd like to implement include: * Extracting relevant diagrams and images from papers and positioning them within slides * Generating icons and diagrams that present various contributions and concepts discussed within the paper using DALL-E / SDXL * Building more variety in the positioning of elements and layouts within a slide show * Providing fine-grained control over the final result of the presentation * Expanding beyond just presentations into posters and other graphics Given ample time, we'd also like to fine-tune our own LLM models catered towards distilling research information, to ensure that we are capturing the most optimal contributions and key points with maximum accuracy. We want to become the default app for Canva users and become part of their daily workflow, and ideally be integrated fully into Canva as a "design-assist" agent and get acquired by Canva in 18 months. We'd use a variety of per-user pricing plans to generate revenue, primarily catering towards researchers and educational institutions, where we'd be able to close high-value contracts and push our product to a large number of relevant users (students, professors, researchers) very quickly.
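A minimal sketch of the layered-agents idea: each stage takes the previous stage's output and a narrow prompt, and temperature is kept low to limit hallucination. The `call_llm` stub stands in for whichever hosted model is used (Mixtral 8x7B via Together AI in our case); the prompts, slide format, and positions here are assumptions for illustration, not our production pipeline.

```python
from typing import Callable

def make_pipeline(call_llm: Callable[[str, float], str]):
    """Each stage narrows the task; low temperature keeps output factual."""
    def extract(paper_text: str) -> str:
        return call_llm("List the background, contributions, and key results:\n"
                        + paper_text, 0.2)

    def to_slides(findings: str) -> str:
        return call_llm("Split these findings into 6-8 slides, one purpose each, "
                        "as 'TITLE: bullet; bullet' lines:\n" + findings, 0.3)

    def to_layout(slides: str) -> list[dict]:
        # Final stage maps each slide line onto positioned elements for the
        # Canva app to render (positions here are simple placeholders).
        layout = []
        for i, line in enumerate(s for s in slides.splitlines() if ":" in s):
            title, body = line.split(":", 1)
            layout.append({"page": i, "title": title.strip(),
                           "bullets": [b.strip() for b in body.split(";")],
                           "title_pos": (80, 60), "body_pos": (80, 160)})
        return layout

    return lambda paper: to_layout(to_slides(extract(paper)))

# Example with a fake model so the sketch runs without an API key.
fake_llm = lambda prompt, temp: "Background: problem; evidence\nResults: finding; metric"
pipeline = make_pipeline(fake_llm)
print(pipeline("...full paper text..."))
```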
## Inspiration Fortnite. While talking with the Radish team, they mentioned wanting to appeal to more of a Gen Z audience, so we began a process of intensive market research whereby we realized that gamification with a familiar, Fortnite-inspired twist is a great strategy for driving youth engagement. ## What it does Buy on Radish and complete weekly Radish Restaurant Challenges to earn XP and LEVEL UP with dozens of UNIQUE REWARDS! ## How we built it NestJS ( :( ) and React. We used TypeORM and Postgres with Docker for the database portion of the backend. ## Challenges we ran into Using NestJS. ## Accomplishments that we're proud of We braved the storm of NestJS. Please don't make the same mistake we made. ## What we learned Not to use NestJS in the future. ## What's next for Radish Battle Pass Radish will be hiring us shortly to implement this feature. We're sure of it!
## Inspiration Our team members had just barely scratched the surface of the research world and we already felt overwhelmed. Reading research is a beast within the beast of research itself. What if, like a new-age Reader's Digest, we could make reading research both easy for leisure and scalable for research student use? Think of music and the process of learning it. Sure, you start out with a complicated score. But break it into digestible pieces--chords you already know. And soon you're playing like a whiz. Abstract simplifies academic papers by page and allows the user to adjust the simplicity level lower and lower until they get to the original text. Along the way they read and learn, and soon the most jargon-filled papers are 100% comprehensible for any reader. ## What it does Abstract uses the You.com API to create customizable AI summaries, page by page, for uploaded or in-app research articles. With a sleek student-focused design, reading research is no longer daunting. Researchers can slide the simplicity slider all the way to "simple", digest, move the slider a tad, and digest the slightly more advanced content. They can repeat this process all the way to the other end of the slider, where the reader now displays the original content of that particular page of research. ## How we built it We used the You.com API and Python to create our AI summarization tool, with a single API endpoint for summarization. We prototyped the site in Figma and converted it into a frontend built with React, TypeScript, and Tailwind CSS. ## Challenges we ran into We knew we wanted to make research easier, but we weren't sure if summarizing was enough. It took us some time to put together the idea of simple->original sliding for better comprehension without information/context loss. We also struggled a bit with collaborating on code, because we initially tried to use Repl.it in order to code together but soon realized we wouldn't be able to use the tools and frameworks we had wanted to. We had to collaborate in person by looking over each other's screens instead, but it worked out in the end! ## Accomplishments that we're proud of We all learned a lot! We already complemented one another's skills well, but we each dove into other parts of the stack that we weren't already familiar with as well. We were able to work on lots of parts of the project together and we didn't necessarily have to "divide and conquer" all the time. We collaborated well, and are proud of our idea because honestly it helps us with our own learning/research interests too :). ## What we learned * How to design our product first as a sketch mock-up, then in Figma, and finally as a frontend in TypeScript, React, and Tailwind CSS * How a front-end framework like React communicates with a back-end server * How APIs like You.com's can be utilized alongside Python * Ideation & collaboration * GitHub! ## What's next for Abstract Abstract's research paper learning strategy is a new one, something never before used. Before, there was annotation, re-reading, more annotation, and more frustration with complicated academic jargon. AI is what makes this new strategy possible. The process of starting simple and slowly building up to the actual text eliminates the possibility of missing information that ChatGPT summaries can cause for students. It helps future researchers build connections between jargon and its meaning, creating a learning loop for how to read academic papers better. This is the learning process of the future.
We believe that our research-based bite-size learning process is going to be used in schools everywhere. We aim to collaborate with K-12 schools and universities in order to both implement the databases they have access to, and provide our learning platform Abstract so that students can read and digest those papers for both school research and personal learning. AI has never been used this way before for learning, and we hope to eliminate the current pitfalls of students using ChatGPT to learn and serve as a liaison between AI and students, letting them harness the benefits that AI brings without the harm. Abstract is the future of AI in education.
partial
## Inspiration Cards Against Humanity is pretty cool, but what if it had gifs? We wanted to create this new take on the classic card game, and thought we could make things interesting by randomizing reaction gifs to create a "hand" that will be used to answer a prompt! ## What it does Players join the room and are dealt a hand of gifs. A player clicks "Start judging" to be the judge for the round. Our central coordinator sends out relevant notifications to signify that the round has started. A prompt then appears on the screen and players choose which gif they will submit. The judge picks his/her favorite submission out of the gifs and the winner is announced! The winner is the next judge and hands are reset :) We added a real-time chat window so you can talk with your friends and receive messages from the server. It also supports HTML tags so you can embed images and stuff. Players can also hover over individual gifs to see interesting descriptions! The Microsoft computer vision API recognizes and labels each gif in your hand, which is usually funny/accurate. ## How we built it Our back end uses NodeJS and Socket.io and our front end is built using Bootstrap. We knew the theme of the hackathon was to do stuff we've never done before, so this is our first exploration into JavaScript and NodeJS. Clients pull gifs from the Giphy API, pass them to the Microsoft computer vision API to label each one, and then display them to the user. All coordination is done through multicast messaging using sockets between clients and the server. Our server is almost completely stateless, holding only the current list of group members, which is passed to new clients to update the group view. ## Challenges we ran into We knew the theme of the hackathon was to work with new technologies, so we decided to learn JavaScript/NodeJS/Meteor. It was a lot harder to work with than we anticipated, especially because we're more familiar with sequential code execution rather than complex, asynchronous socket systems. Also, HTML/CSS are super wonky. We couldn't find a way to differentiate the user avatars in the game, so we were looking for ways to make our users look unique. ## Accomplishments that we're proud of We're new to all of this and it actually works! All the coordination between the clients and server was done through multicast messaging, which we've only just been exposed to through classes. It was super cool to see real-time chat, sophisticated computer vision and our own version of multicast group management come together to make our idea a reality. We decided to display each user with a unique cat photo that you can enjoy while playing the game :) ## What we learned For JavaScript, it's probably better to use a framework such as Angular as opposed to plain coding in JS/HTML/CSS. Also, Meteor is the Rails of JS. Also, async is really weird/unintuitive. ## What's next for GAH We need to make it prettier and refine the rules and mechanics. First, we're going to try to add animated dragging (like Hearthstone), we're going to add timers for the round, we're going to try to add session saving for groups and we're going to try to diversify our gif pool over time. We hope you all get to play a full version of Gifs Against Humanity one day!
## Inspiration We were extremely inspired by all the computer vision projects circulating around, and decided to use computer vision to make a very fun duelling 1v1 game. ## What it does You can 1v1 against another player! You can move your arm around to dodge incoming bullets and shoot your own bullets at the opponent with different hand movements! ## How we built it We used MediaPipe to recognise the different hand positions, which the sprite follows. Depending on the hand gesture recognised by the computer vision model, the sprite can also shoot projectiles from its current position. This information is sent over WebSocket to a server, which relays it to the other player. ## Challenges we ran into We ran into some problems when communicating between the frontend and backend with the Python implementation of MediaPipe, as the connection was too slow. To fix this issue, we reimplemented all of the computer vision with MediaPipe back in JavaScript. ## Accomplishments that we're proud of Using computer vision through MediaPipe to track hand movements and taking these coordinates to make the player move. ## What we learned We learned how to implement MediaPipe in both Python and JavaScript. We also learned how to use WebSocket to send information about the bullets to a server, and then on to the other player.
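For reference, the original Python attempt looked roughly like the sketch below: MediaPipe tracks one hand, the wrist landmark drives the sprite, and a crude open-palm versus fist check decides when to shoot. The gesture heuristic is an assumption for illustration; the shipped version was rewritten in JavaScript.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

def is_fist(lm) -> bool:
    # Rough heuristic: fingertips below their middle joints (image y grows downward).
    tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
    return all(lm[t].y > lm[p].y for t, p in zip(tips, pips))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        wrist = (lm[0].x, lm[0].y)        # normalised coords drive the sprite position
        action = "shoot" if is_fist(lm) else "move"
        print(action, wrist)              # in the game this is sent over WebSocket
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) == 27:              # Esc to quit
        break
cap.release()
```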
Bet for Bit is a premium, high-security Bitcoin betting service for dedicated sports fans. It is built with Python, Django, and the Coinbase API. Our platform scrapes live sports stats and allows users to place Bitcoin bets on their sports teams.
losing
**We are bringing people together over their favorite meal!** After working in restaurants for years, we saw that restaurants' biggest problem is filling empty seats during off-peak hours. Happy Hour is an app that lets customers book a meal during an off-peak hour for a discount on their total bill. It makes the meal cheaper for the consumer, and brings customers and revenue to the restaurant. We will expand this beyond restaurants to all industries that have varying levels of demand: workout classes, museums, nail studios, anywhere you are paying rent, staff wages, and other fixed costs either way and want to increase demand, even if for a discount. We will make eating out cheaper for our customers, and we will bring revenue to our restaurants. Additionally, we will make managing peak times easier for restaurant staff by redistributing this demand wave with our discount incentive structure. We all got together and started building. None of us had ever coded in React Native before, and we just put our heads down to build. We built out the whole frontend, focusing on an intuitive and seamless user experience. We have so many features on our to-do list moving forward, including special offers on dishes whose produce will soon expire (pastries at a brunch spot, fresh vegetables at a farm-to-table restaurant). Excited to continue building!
## Inspiration In response to the recent sexual assault cases on campus, we decided that there was a pressing need to create an app that would be a means for people to seek help from those around them, mitigating the bystander effect at the same time. ## What it does Our cross-platform app allows users to send out a distress signal to others within close proximity (up to a five mile radius), and conversely, allows individuals to respond to such SOS calls. Users can include a brief description of their distress signal call, as well as an "Intensity Rating" to describe the enormity of their current situation. ## How we built it We used Django as a server-side framework and hosted it using Heroku. React Native was chosen as the user interface platform due to its cross-platform abilities. We all shared the load of front end and back end development, along with feature spec writing and UX design. ## Challenges we ran into Some of us had no experience working with React Native/Expo, so we ran into quite a few challenges with getting acclimated to the programming language. Additionally, deploying the server-side code onto an actual server, as well as deploying the application bundles as standalone apps on iOS and Android, caused us to spend significant amounts of time to figure out how to deploy everything properly. ## Accomplishments that we're proud of This was the very first hackathon for the two of us (but surely, won’t be the last!). And, as a team, we built a full cross-platform MVP from the ground up in under 36 hours while learning the technologies used to create it. ## What we learned We learned technical skills (React Native), as well as more soft skills (working as a team, coordinating tasks among members, incorporating all of our ideas/brainstorming, etc.). ## What's next for SOSed Up Adding functionality to always send alerts to specific individuals (e.g. family, close friends) is high on the list of immediate things to add.
## Inspiration Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that's exactly what we made! ## What it does You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions for what you can get at that price at different restaurants by providing all the menu items with price and calculated tax and tips! We keep the user history (the food items they chose), and by doing so we open the door to crowdsourcing massive amounts of user data as well as the opportunity for machine learning, so that we can give better suggestions for the foods that the user likes the most! But we are not gonna stop here! Our goal is to implement the following in the future for this app: * We can connect the app to delivery systems to get the food for you! * Inform you about the food deals, coupons, and discounts near you ## How we built it ### Back-end We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine them for front-end use. ### iOS Authentication using Facebook's OAuth with Firebase. Create UI using native iOS UI elements. Send API calls to Soheil's backend server using JSON via HTTP. Using the Google Maps SDK to display geolocation information. Using Firebase to store user data in the cloud, with the capability of updating multiple devices in real time. ### Android The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to compile a system that would incentivize food places to produce the highest "food per dollar" rating possible. ## Challenges we ran into ### Back-end * Finding APIs to get menu items is really hard, at least for Canada. * An unknown API kept continuously pinging our server and used up a lot of our bandwidth ### iOS * First time using OAuth and Firebase * Creating the tutorial page ### Android * Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge * Designing the Firebase schema and generating the structure for our API calls was very important ## Accomplishments that we're proud of **A solid app for both Android and iOS that WORKS!** ### Back-end * Dedicated server (VPS) on DigitalOcean! ### iOS * Cool looking iOS animations and real-time data updates * Nicely working location features * Getting the latest data from the server ## What we learned ### Back-end * How to use Docker * How to set up a VPS * How to use nginx ### iOS * How to use Firebase * How OAuth works ### Android * How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar Layout * Learned how to optimize applications when communicating with several different servers at once ## What's next for How Much * If we get a chance, we would all like to keep working on it and hopefully publish the app. * We are thinking of making it open source so everyone can contribute to the app.
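The core budget filter can be sketched as below: given a budget, keep only the menu items whose all-in price (with tax and tip) fits, cheapest first. The sample menu, the 13% tax rate, and the 15% tip are assumptions for illustration, not values pulled from our APIs.

```python
def affordable_items(menu, budget, tax_rate=0.13, tip_rate=0.15):
    """Return (restaurant, item, all-in price) tuples that fit the budget, cheapest first."""
    results = []
    for restaurant, items in menu.items():
        for name, price in items:
            total = round(price * (1 + tax_rate) * (1 + tip_rate), 2)
            if total <= budget:
                results.append((restaurant, name, total))
    return sorted(results, key=lambda r: r[2])

menu = {  # would come from the REST server's amalgamated API data
    "Campus Pizza": [("Slice", 4.50), ("Large pizza", 18.00)],
    "Pho Place": [("Small pho", 9.75), ("Spring rolls", 5.25)],
}
for restaurant, item, total in affordable_items(menu, budget=12.00):
    print(f"{restaurant}: {item} (${total:.2f} all-in)")
```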
losing
## Inspiration This came from struggling to learn HTML and basic web development. The tools provided by browsers like Google Chrome were hidden away, making it hard to even learn of their existence. As avid gamers, we thought it would be a great idea to create a game involving the inspect element tool provided by browsers, so that more people could learn of this nifty feature and start their own hacks. ## What it does The project is a series of small puzzle games that rely on the user modifying the webpage DOM in order to complete them. When the user reaches the objective, they are automatically redirected to the next puzzle to solve. ## How we built it We used a game engine called craftyjs to run the game as DOM elements. These elements could be deleted, and an event would be triggered so that we could handle any DOM changes. ## Challenges we ran into Catching DOM changes from inspect element is incredibly difficult. craftyjs is at version 0.7.1 and not yet released, so some built-ins, e.g. collision detection, are not fully supported. Handling various events such as adding and deleting elements, instead of recursively creating a ton of things. ## Accomplishments that we're proud of EVERYTHING ## What we learned Javascript was not designed to run as a game engine with DOM elements, and modifying anything has been a struggle. We learned that canvases are black boxes and are impossible to interact with through DOM manipulation. ## What's next for We haven't thought that far yet You give us too much credit. But we have thought that far. We would love to do more with the inspect element tool, and in the future, if we could get support from one of the major browsers, we would love to add more puzzles based on tools provided by the inspect element option.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration We began this project with the Datto challenge in mind: what's the coolest thing we can do in under 4kb? It had to be something self contained, efficient, and optimized. Graphics came to mind at first, specifically for a game of some sort, but doing that requires some pretty bulky libraries being tossed and `#include`d around. But there was one place where we got that all for free - and that's the browser! ## What it does LinkBreaker looks through the DOM of the current page you're on and picks up the first 50 `<a>` tags it sees. It then uses CSS animations to morph these into randomly coloured bricks at the top of the page. Paddle and ball are spawned right after that, and then a good old fashioned game of Brick Breaker begins! Don't worry, those anchor tags aren't actually gone. Simply hit `esc` and your page is back to normal. ## How I built it When building a project with space in mind, you really have to make sure that everything is as self contained as possible. We started out by building a script to do all the animations and the game logic separately, and slowly merged them together into the chrome extension architecture. Once we verified that everything was there and working, the uncompressed source was ~10kb...well over our limit. We optimized where possible, and minified every single file of all whitespace and extraneous characters, using [UglifyJS](https://github.com/mishoo/UglifyJS). ## Challenges I ran into The main crux of our project was that, since it was manipulating the DOM, it used jQuery heavily. However, in order to include jQuery in a chrome extension, its source needs to be added as a user script. This means that I'd have to lug around a 400kb+ file wherever I went. And of course that wouldn't do. But since chrome extensions are prevented by the CSP from injecting scripts (and other tags), I had to get a bit creative. What I ended up doing was making an ajax GET call to the CDN hosting the minified jQuery code. The response was simply a single string, and that string was all of the jQuery code, wrapped up in a closure. So I did what any sane person would do... and ran `eval()` on that string. *It worked.* ## Accomplishments that I'm proud of Once I finally minified all the source and uploaded the extension for testing, it was time to pack it up. Chrome created the `.crx` executable, and I nervously navigated to my project directory to check its file size. *3,665 bytes* Now that's a pretty close call. And the fact that I got so close really made me happy. It made me happy because not only was I within the bounds of the challenge, but I successfully picked something complex enough to approach the upper bound without hitting it. I thought that was really cool, and it's probably what I'm proudest of overall. ## What I learned Javascript is crazy, yo. But besides that, I think being conscious of size really had a lasting impression on me. Looking back at past projects, I've always just installed and included things wherever I wanted to - not even taking into account how much space I was using. And although the scale of my other projects compared to this one isn't that large, the concept of keeping file size down is valid everywhere. ## What's next for LinkBreaker A little bit of cleaning up, and onto the Chrome Web Store she goes!
winning
## Inspiration At companies that want to introduce automation into their pipeline, finding the right robot, the cost of a specialized robotics system, and the time it takes to program a specialized robot are all very expensive. We looked for solutions in general purpose robotics and imagined how these types of systems could be "trained" for certain tasks and "learn" to become a specialized robot. ## What it does The Simon System consists of Simon, our robot that learns to perform the human's input actions. There are two "play" fields, one for the human to perform actions and the other for Simon to reproduce actions. Everything starts with a human action. The Simon System detects human motion and records what happens. Then those actions are interpreted into actions that Simon can take. Then Simon performs those actions in the second play field, making sure to plan efficient paths while taking into consideration that it is a robot in the field. ## How we built it ### Hardware The hardware was really built from the ground up. We CADded the entire model of the two play fields as well as the arches that hold the smartphone cameras here at PennApps. The assembly of the two play fields consists of 100 individual CAD models and took over three hours to fully assemble, making full use of lap joints and mechanical advantage to create a structurally sound system. The LEDs in the enclosure communicate with the offboard field controllers using Unix domain sockets that simulate a serial port, allowing color changes that give the user info on the state of the fields. Simon, the robot, was also constructed completely from scratch. At its core, Simon is an Arduino Nano. It utilizes a dual H-bridge motor driver for controlling its two powered wheels and an IMU for its feedback control system. It uses a MOSFET for controlling the onboard electromagnet for "grabbing" and "releasing" the cubes that it manipulates. With all of that, the entire motion planning library for Simon was written entirely from scratch. Simon uses a bluetooth module for communicating offboard with the path planning server. ### Software There are four major software systems in this project. The path planning system uses a modified BFS algorithm that takes path smoothing into account, with realtime updates from the low-level controls to calibrate the path plan throughout execution. The computer vision systems intelligently detect when updates are made to the human control field and acquire a normalized grid size of the play field using QR boundaries to create a virtual enclosure. The CV system also determines the orientation of Simon on the field as it travels around. Servers and clients are also instantiated on every part of the stack for communicating with low latency. ## Challenges we ran into We lacked the acrylic needed to complete the system, so we had to refactor a lot of our hardware designs to accommodate. Robot rotation calibration and path planning were tricky due to very small inconsistencies in the low-level controllers. We built many things from scratch without using public libraries because they aren't specialized enough. We also dealt with smartphone cameras for CV and had to figure out how to coordinate across phones with similar aspect ratios but dissimilar resolutions. The programs we used, such as Unix domain sockets, don't run on Windows, so we had to switch to using a Mac as our main system. ## Accomplishments that we're proud of This thing works, somehow. We wrote modular code this hackathon and kept a solid, working GitHub repo that we actually utilized.
## What we learned We got better at CV. First real CV hackathon. ## What's next for The Simon System More robustness.
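The team's actual planner isn't shown in the write-up, but a minimal sketch of the grid-based BFS idea described in the software section might look like the following. The grid size, obstacle encoding, and 4-connected movement are illustrative assumptions; the real system additionally did path smoothing and live re-planning from the low-level controls.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search over a 2D grid.

    grid: list of lists, 0 = free cell, 1 = blocked cell (e.g. a cube or wall).
    start, goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}

    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through predecessors to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected moves
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None

# Example: a 4x4 field with a short wall of cubes in the middle.
field = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0]]
print(bfs_path(field, (0, 0), (3, 3)))
```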
## Inspiration We realized how visually impaired people find it difficult to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make work and public places completely accessible! ## What it does This is an IoT device which is designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant provides voice directions (which can easily be routed to Bluetooth devices) and the sensors help in avoiding obstacles, which increases the user's awareness of their surroundings. Another beta feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g. barking sounds for a dog). ## How we built it It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone. ## Challenges we ran into It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two of our members are high school students. Also, multi-threading was a challenge for us in the embedded architecture. ## Accomplishments that we're proud of After hours of grinding, we were able to get the Raspberry Pi working, as well as implement depth perception and location tracking using Google Assistant, along with object recognition. ## What we learned Working with hardware is tough: even though you can see what is happening, it is hard to interface software and hardware. ## What's next for i4Noi We want to explore more ways i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate another feature where we play sounds for living obstacles so the user can take special care; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference to the lives of the people.
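The exact sensors and wiring i4Noi used aren't spelled out above, so the following is only a hedged sketch of the obstacle-alert loop: an HC-SR04-style ultrasonic sensor is assumed for depth, the BCM pin numbers and 50 cm threshold are made up, and the Google Assistant side is omitted.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18  # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm():
    # Send a 10 microsecond trigger pulse.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    # Time the echo pulse; sound travels ~34300 cm/s, halved for the round trip.
    start = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    end = start
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2

try:
    while True:
        # Buzz whenever an obstacle is closer than half a metre (assumed threshold).
        GPIO.output(BUZZER, distance_cm() < 50)
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```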
## Inspiration Assistive Tech was our assigned track; we had done it before and knew we could innovate with cool ideas. ## What it does It adds a camera and sensors which instruct a pair of motors that will lightly pull the user in a direction to avoid a collision with an obstacle. ## How we built it We used a camera pod for the stick, on which we mounted the camera and sensor. At the end of the cane we joined a chassis with the motors and controller. ## Challenges we ran into We had never used a voice command system paired with a Raspberry Pi and an Arduino, so combining all of that was a real challenge for us. ## Accomplishments that we're proud of Physically completing the cane and also making it look pretty; many of our past projects have wires everywhere and some stuff isn't properly mounted. ## What we learned We learned to use Dialogflow and how to prototype in a foreign country where we didn't know where to buy stuff lol. ## What's next for CaneAssist As usual, all our projects will most likely be fully completed at a later date. And hopefully get to be a real product that can help people out.
winning
## Inspiration Last week, one of our team members was admitted to the hospital with brain trauma. Doctors hesitated to treat them because of their lack of insight into the patient’s medical history. This prompted us to store EVERYONE’s health records on a single, decentralized chain. The catch? The process is end-to-end encrypted, ensuring that only you and your designated providers can access your data. ## How we built it Zorg was built across 3 verticals: a frontend client, a backend server, and the chain. The frontend client… We poured our hearts and ingenuity into crafting a seamless user interface, a harmonious blend of aesthetics and functionality designed to resonate across the spectrum of users. Our aim? To simplify the process for patients to effortlessly navigate and control their medical records while enabling doctors and healthcare providers to seamlessly access and request patient information. Leveraging the synergy of Bun, React, Next, and Shadcn, we crafted a performant portal. To safeguard privacy, we fortified client-side interactions with encryption, ensuring sensitive data remains inaccessible to central servers. This fusion of technology and design principles heralds a new era of secure, user-centric digital healthcare record keeping. The backend server… The backend server of Zorg is at the core of our mission to revolutionize healthcare records management, ensuring secure, fast, and reliable access to encrypted patient data. Utilizing Zig for its performance and security advantages, our backend encrypts health records using a doctor's public key and stores them on IPFS for decentralized access. These records are then indexed on the blockchain via unique identifiers (CIDs), ensuring both privacy and immutability. Upon request, the system retrieves and decrypts the data for authorized users, transforming it into a vectorized format suitable for semantic search. This process not only safeguards patient information but also enables healthcare providers to efficiently parse through detailed medical histories. Our use of Zig ensures that these operations are executed swiftly, maintaining our commitment to providing immediate access to critical medical information while prioritizing patient privacy and data integrity. The chain… The chain stores the encrypted key and the CID, allowing seamless access to a patient’s file stored on decentralized storage (IPFS). The cool part? The complex protocols and keys governing this system are completely abstracted away and wrapped up in modern UI/UX, giving easy access to senior citizens and care providers. ## Challenges we ran into Our biggest challenges were during integration, near the end of the hackathon. We had divided the project, with each person focusing on a different area: machine learning and queries, blockchain and key sharing, encryption and IPFS, and the frontend design. However, when we began to put things together, we quickly realized that we had failed to communicate with each other the specific details of how each of our systems worked. As a result, we had to spend a few hours just tweaking each of our systems so that they could work with each other. Another smaller (but enjoyable!) challenge we faced was learning to use a new language (Zig!). 
We ended up building our entire encryption and decryption system in Zig (as it needed to be incredibly fast due to the potentially vast amounts of data it would be processing) and had to piece together both how to build these systems in Zig, and how to integrate the resulting Zig binaries into the rest of our project. ## What's next for Zorg In the future, we hope to devise a cryptographically sound way to revoke access to records after they have been granted. Additionally, our system would best benefit the population if we were able to partner with the government to include patient private keys in something everyone carries with them like their phone or ID so that in an emergency situation, first responders can access the patient data and identify things like allergies to medications.
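Zorg's production encryption lives in Zig, but the envelope-encryption idea described above can be sketched in Python with the `cryptography` package: encrypt the record with a fresh symmetric key, wrap that key with the doctor's RSA public key, and keep only the wrapped key plus the IPFS CID on-chain. The IPFS pinning step is only referenced in a comment here, and the key sizes and record format are illustrative assumptions.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Doctor's keypair (in the real system the public key would be looked up on-chain).
doctor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
doctor_public = doctor_private.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_record(record: bytes, recipient_public):
    """Encrypt a health record with a one-off symmetric key, then wrap that key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(record)
    wrapped_key = recipient_public.encrypt(data_key, OAEP)
    return ciphertext, wrapped_key

def decrypt_record(ciphertext: bytes, wrapped_key: bytes, recipient_private) -> bytes:
    data_key = recipient_private.decrypt(wrapped_key, OAEP)
    return Fernet(data_key).decrypt(ciphertext)

record = b'{"patient": "demo", "allergies": ["penicillin"]}'
blob, wrapped = encrypt_record(record, doctor_public)
# `blob` would be pinned to IPFS; the resulting CID and `wrapped` go on the chain.
assert decrypt_record(blob, wrapped, doctor_private) == record
```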
## Inspiration Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med! ## What it does This web app changes the logistics involved when visiting people in the hospital. Some of our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to your loved one and provide resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit, since we know that these small details can be frustrating during a hospital visit. Lastly, we have a patient update screen and a login screen; both run on a database we set up, which populates the information on the patient updates screen and validates the login data. ## How we built it We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project. ## Challenges we ran into We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion around redirect issues, we had to completely change course around halfway in :( In the end, we learned a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left. ## Accomplishments that we're proud of We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting. ## What we learned We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha! ## What's next for EZ-Med The next step for EZ-Med is to iron out all the bugs and have it fully functioning.
# flora Flora is a community-based website dedicated to breaking the taboo around womxn’s sexual & reproductive health. We believe that womxn should be able to get the care they need without fear of judgment from others. We know how embarrassing and difficult it can be to talk about these topics - trust us, we’ve been there. The Flora team consists of four driven and passionate women - Anika, Doris, Deborah and Isabel. This project was inspired by two larger, current problems - womxn's inability to have regular checkups with gynecologists due to the COVID-19 pandemic; and womxn's health being overlooked in school education and deemed "inappropriate" to talk about. Being womxn of color, we experience additional obstacles related to our sexual & reproductive health. From stigmas within POC families and communities, to the implicit biases against POC womxn perpetuated by health professionals, we are extremely aware that these problems still exist in even the most developed societies. We created Flora to address these issues and create a safe space for all womxn to talk about problems they have related to their sexual and reproductive health. One of the biggest challenges we faced as a team was narrowing down our idea so that it is both unique and effective. The broader topic we started with was womxn’s health, and it was hard to narrow it down because there are just so many ways we can approach that. We originally wanted the website to be more educational, but we didn’t know how to deliver information without just regurgitating words from Google searches. Ultimately, we decided to create a community-based platform so that people & doctors can directly interact with others. Our second biggest challenge was learning how to use new technology - while all four of us have had coding experience, we decided to branch out beyond our comfort zones and use new tools such as Dialogflow. While we struggled at times, we were able to create a functional chatbot and website. Overall, this experience was amazing for the four of us, and we'll always remember it.
winning
## What it does Remix allows users to select ingredients they have on hand and presents them with corresponding smoothie recipe options. The app boasts a crisp, elegant UI and is designed to be both fun and accessible for users of all skill levels. Users can also browse recipes by category and save their favorite recipes for easy access later. The app also includes a shopping list feature, allowing users to easily add ingredients they need to purchase to make a recipe. ## How we built it The app was built using React Native, a framework for building mobile apps using JavaScript and React. This allowed for a smooth, native-like experience for users across both iOS and Android platforms. The back-end was built using Python with the Flask framework, and MongoDB, a non-relational database. This combination allowed for a fast and efficient way to store and retrieve user data and recipe information. ## What's next for Remix There are several ways we could potentially extend the app in the future. One possibility is to add a community feature where users can share and rate recipes, allowing for a more collaborative and social experience. Another potential addition could be a meal planning feature, where users can plan out their smoothie recipes for the week and easily add ingredients to their shopping list. Additionally, we could also add a feature to allow users to search for recipes by dietary restriction, such as vegan or gluten-free.
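The actual Remix backend isn't shown here, but the core "what can I make with these ingredients" lookup on the Flask + MongoDB side could be as small as the sketch below; the collection name, document shape, and route are assumptions for illustration.

```python
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
# Hypothetical database/collection names.
recipes = MongoClient("mongodb://localhost:27017")["remix"]["recipes"]

@app.route("/api/recipes/match", methods=["POST"])
def match_recipes():
    # Expected body: {"ingredients": ["banana", "milk", "honey"]}
    on_hand = {i.lower() for i in request.json.get("ingredients", [])}
    results = []
    for recipe in recipes.find({}, {"_id": 0}):
        needed = {i.lower() for i in recipe["ingredients"]}
        if needed <= on_hand:  # every required ingredient is available
            results.append(recipe)
    return jsonify(results)

if __name__ == "__main__":
    app.run(debug=True)
```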
## Inspiration We wanted to tackle the challenge of leftover ingredients. ## What it does We want people to have access to a wide selection of recipes when they are not sure what to do with their leftovers. ## How we built it We used HTML, CSS and JS to create the front-end. Our database uses a local JSON file. ## Challenges we ran into ## Accomplishments that we're proud of Being able to parse a JSON file and create a function that goes through each ingredient available per recipe. ## What we learned We learned that scope is important and that the most important thing to finish is having a minimum viable product that can showcase the main feature of the app. ## What's next for Sustainable Recipes (Suscipe) Looking for an API that gives us detailed instructions so that we can guide the user on how to make the recipe step-by-step. Improving our search algorithm to allow users to put in dietary restrictions.
## Inspiration We live in increasingly polarizing times. A poll published by CNN in 2017 showed that around two thirds of Democrats and one half of Republicans have few or no friends in the opposite party. Doozy seeks to improve political discourse by allowing for conversation surrounding differences while showcasing what makes us similar. ## What it does Doozy is a social networking platform that considers someone's personal interests, as well as a few viewpoints on current issues in today's political sphere. Doozy then matches individuals on the site who have shared interests but some kind of disagreement in political viewpoint, and pairs them up to chat. This way, Doozy users are united by their similarities, while still leaving room for healthy and productive disagreement. ## How we built it The front end of Doozy was developed using the Angular web framework. Database management was done in Python and MySQL. The matching algorithm and data analytics were done in Python and Standard Library (Google Sheets API communications). ## Challenges we ran into Combining all the different frameworks/languages/software that we used. ## Accomplishments that we're proud of Unifying three very different pieces of software. ## What we learned How to create a full web app, including front end, back end, and data analysis. ## What's next for Doozy Clean up.
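A toy version of the pairing rule described above (shared interests plus at least one political disagreement) could look like this; the data shape and scoring are illustrative assumptions rather than Doozy's actual algorithm.

```python
from itertools import combinations

# Hypothetical user profiles: interests plus stances on a few issues.
users = {
    "alice": {"interests": {"hiking", "jazz", "chess"}, "views": {"tax": "for", "guns": "against"}},
    "bob":   {"interests": {"jazz", "chess"},           "views": {"tax": "against", "guns": "against"}},
    "carol": {"interests": {"soccer"},                  "views": {"tax": "for", "guns": "for"}},
}

def pair_score(a, b):
    shared = len(users[a]["interests"] & users[b]["interests"])
    disagreements = sum(
        users[a]["views"][topic] != users[b]["views"][topic]
        for topic in users[a]["views"].keys() & users[b]["views"].keys()
    )
    # Require common ground AND at least one disagreement worth discussing.
    return shared if shared > 0 and disagreements > 0 else 0

best = max(combinations(users, 2), key=lambda pair: pair_score(*pair))
print(best, pair_score(*best))  # ('alice', 'bob') share two interests and disagree on tax
```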
losing
## Inspiration Only a small percentage of Americans use ASL as their main form of daily communication. Hence, no one notices when ASL-first speakers are left out of using FaceTime, Zoom, or even iMessage voice memos. This is a terrible inconvenience for ASL-first speakers attempting to communicate with their loved ones, colleagues, and friends. There is a clear barrier to communication between those who are deaf or hard of hearing and those who are fully-abled. We created Hello as a solution to this problem for those experiencing similar situations and to lay the ground work for future seamless communication. On a personal level, Brandon's grandma is hard of hearing, which makes it very difficult to communicate. In the future this tool may be their only chance at clear communication. ## What it does Expectedly, there are two sides to the video call: a fully-abled person and a deaf or hard of hearing person. For the fully-abled person: * Their speech gets automatically transcribed in real-time and displayed to the end user * Their facial expressions and speech get analyzed for sentiment detection For the deaf/hard of hearing person: * Their hand signs are detected and translated into English in real-time * The translations are then cleaned up by an LLM and displayed to the end user in text and audio * Their facial expressions are analyzed for emotion detection ## How we built it Our frontend is a simple React and Vite project. On the backend, websockets are used for real-time inferencing. For the fully-abled person, their speech is first transcribed via Deepgram, then their emotion is detected using HumeAI. For the deaf/hard of hearing person, their hand signs are first translated using a custom ML model powered via Hyperbolic, then these translations are cleaned using both Google Gemini and Hyperbolic. Hume AI is used similarly on this end as well. Additionally, the translations are communicated back via text-to-speech using Cartesia/Deepgram. ## Challenges we ran into * Custom ML models are very hard to deploy (Credits to <https://github.com/hoyso48/Google---American-Sign-Language-Fingerspelling-Recognition-2nd-place-solution>) * Websockets are easier said than done * Spotty wifi ## Accomplishments that we're proud of * Learned websockets from scratch * Implemented custom ML model inferencing and workflows * More experience in systems design ## What's next for Hello Faster, more accurate ASL model. More scalability and maintainability for the codebase.
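The real-time plumbing described above (websockets carrying transcripts and translated signs between the two sides of the call) can be sketched with Python's `websockets` library. The message schema, port, and relay-to-all-peers behaviour are assumptions, and the actual transcription and sign-translation model calls are stubbed out in comments.

```python
import asyncio
import json
import websockets

connected = set()

async def handler(ws, path=None):
    """Relay each client's messages to every other participant in the call."""
    connected.add(ws)
    try:
        async for raw in ws:
            msg = json.loads(raw)
            # msg might be {"type": "speech", "text": "..."} from the hearing side,
            # or {"type": "sign", "letters": "..."} from the signing side; in the
            # real app this is where transcription / translation results would land.
            for peer in connected:
                if peer is not ws:
                    await peer.send(json.dumps(msg))
    finally:
        connected.discard(ws)

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```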
## Inspiration Over **15% of American adults**, more than **37 million** people, are either **deaf** or have trouble hearing, according to the National Institutes of Health. One in eight people have hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 - 500,000 adults in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themselves to the world, and so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world. ## What it does Our app connects a deaf person, who signs American Sign Language into their device's camera, with a non-deaf person, who then listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation. We believe real-time communication is the key to having a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing out on any parts of the conversation in the chat box. ## How we built it For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives: 1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model. 2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users. * Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience. * Backend Powered by Flask: For the robust foundation of our API and backend framework, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data. * Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the webkitSpeechRecognition library. This technology forms the backbone of our speech recognition system, facilitating communication with our app. * NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible by sign language users. 
* Translating Hand Motions from Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into text. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences. ## Challenges we ran into Since we used multiple AI models, it was tough for us to integrate them seamlessly with our Vue frontend. Since we are also using the webcam through the website, it was a massive challenge to seamlessly use video footage, run real-time object detection and classification on it, and show the results on the webpage simultaneously. We also had to find as many open-source datasets for ASL as possible, which was definitely a challenge, since with a short budget and time we could not get all the words in ASL, and thus had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real-time computer vision on a stream of ASL hand gestures. ## Accomplishments that we're proud of We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are: * Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community. * Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement. * User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority. * Speech Recognition: Our success in implementing speech recognition technology is a source of pride. * Multiple AI Models: We also loved merging natural language processing and computer vision in the same application. ## What we learned We learned a lot about how accessibility works for individuals from the deaf community. Our research led us to a lot of new information and we found ways to include that in our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNNs. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project. ## What's next? We have a ton of ideas planned for Sign Sync next! * Translate between languages other than English * Translate between other sign languages, not just ASL * Native mobile app with no internet access required for more seamless usage * Usage of more sophisticated datasets that can recognize words and not just letters * Use a video image to demonstrate the sign language component, instead of static images
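The 29-class fingerspelling classifier described above can be approximated with a standard MobileNetV2 transfer-learning head in Keras; the input size, head layers, and training setup below are assumptions rather than the team's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 29  # 26 letters + 3 punctuation signs, as described above

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # keep the pretrained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Each webcam frame would be resized to 224x224, run through
# tf.keras.applications.mobilenet_v2.preprocess_input, then passed to model.predict,
# and the predicted characters stitched together into words and sentences.
model.summary()
```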
## Inspiration Our inspiration for developing the Hospital Inventory Manager stemmed from the pressing need for efficient healthcare resource management and improved patient outcomes, particularly in the context of organ donation and allocation. ## What it does The Hospital Inventory Manager is a comprehensive platform designed to streamline various aspects of hospital resource management, including inventory tracking, scheduling, patient management, and organ matching. The system incorporates QR code functionality for quick access to vital information. ## How we built it * **PostgreSQL**: Utilized as the database to store all our data. * **Pandas**: Employed for data manipulation and analysis. * **QR Code Generation Library**: Used for creating QR codes for inventory items and patient identification. * **Git**: Implemented for version control and collaborative development. ## Challenges we ran into We faced several challenges, including: * Integrating multiple complex systems (inventory, scheduling, patient management) into a cohesive platform. * Ensuring data privacy and security, particularly regarding sensitive patient information. * Implementing an efficient organ matching algorithm. * Creating a user-friendly interface capable of handling complex operations. * Managing real-time updates across different sections of the application. ## Accomplishments that we're proud of We take pride in several achievements: * Developing a fully functional prototype that addresses various aspects of hospital resource management. * Implementing QR code functionality for quick access to information. * Creating a system that can potentially save lives by enhancing organ donation and allocation efficiency. * Building a scalable solution adaptable for various healthcare settings. ## What we learned Through this project, we gained valuable insights: * The complexities of healthcare logistics and the critical need for efficient resource management in hospitals. * The integration of multiple Python libraries and technologies to create a comprehensive web application. * The significance of user experience design in healthcare applications. * Techniques for securely handling sensitive data. * The importance of iterative development and continuous testing in building robust software. ## What's next for CareCost Moving forward, we plan to: * Implement advanced predictive analytics for inventory management. * Enhance the organ matching algorithm using machine learning techniques. * Develop a mobile application for on-the-go access. * Integrate with existing hospital information systems for seamless data exchange. * Conduct pilot tests in real hospital environments to gather feedback and refine the system. * Explore the use of blockchain for secure and transparent organ donation tracking. * Expand the platform to include features for blood bank management and vaccine distribution. We believe that with further development and refinement, the Hospital Inventory Manager has the potential to significantly enhance healthcare resource management and improve patient outcomes.
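The QR-code side of the system is straightforward with the `qrcode` package mentioned above; the payload URI scheme and file naming here are illustrative assumptions, not the project's actual format.

```python
import qrcode

def make_item_qr(item_id: str, name: str) -> str:
    """Encode an inventory item's identifier into a scannable QR image."""
    payload = f"carecost://item/{item_id}?name={name}"  # hypothetical URI scheme
    img = qrcode.make(payload)
    path = f"qr_{item_id}.png"
    img.save(path)
    return path

print(make_item_qr("ORG-1042", "O-neg blood unit"))
```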
partial
## Inspiration EVs are environmentally friendly, yet they do not receive the recognition they deserve. Even today we do not find many users driving electric vehicles, and we believe this must change. Our project aims to provide EV users with a travel route showcasing optimal (and functioning) charging stations to enhance the use of electric vehicles by resolving a major concern: range anxiety. We also believe that this will inherently promote the usage of electric vehicles amongst other technological advancements in the car industry. ## What it does The primary aim of our project is to display the **ideal route** to the user for the electric vehicle to take, along with the **optimal (and functional) charging stations** shown as markers, based on the source and destination. ## How we built it Primarily, in the backend, we integrated two APIs. The **first API** call is used to fetch the longitude and latitude coordinates of the start and destination addresses, while the **second API** was used to locate stations within a **specific radius** along the journey route. This computation required the start and destination addresses, leading to the display of the ideal route containing optimal (and functioning) charging points along the way. Along with CSS, the frontend utilizes **Leaflet (SDK/API)** to render the map, which not only recommends the ideal route showing the source, destination, and optimal charging stations as markers but also provides a **side panel** displaying route details and turn-by-turn directions. ## Challenges we ran into * Most of the APIs available to help develop our application were paid * We found a **scarcity of reliable data sources** for EV charging stations * It was difficult to understand the documentation for the Maps API * JavaScript ## Accomplishments that we're proud of * We developed a **fully functioning app in < 24 hours** * Understood as well as **integrated 3 APIs** ## What we learned * Team work makes the dream work: we not only played off each other's strengths but also individually tried things that are out of our comfort zone * How Ford works (from the workshop) as well as more about EVs and charging stations * We learnt about new APIs * If we have a strong will to learn and develop something new, we can no matter how hard it is; We just have to keep at it ## What's next for ChargeRoute Navigator: Enhancing the EV Journey * **Profile** | User Account: Display the user's profile picture or account details * **Accessibility** features (e.g., alternative text) * **Autocomplete** Suggestions: Provide autocomplete suggestions as users type, utilizing geolocation services for accuracy * **Details on Clicking the Charging Station (on map)**: Provide additional information about each charging station, such as charging speed, availability, and user ratings * **Save Routes**: Allow users to save frequently used routes for quick access. * **Traffic Information (integration with GMaps API)**: Integrate real-time traffic data to optimize routes * **User feedback** about (charging station recommendations and experience) to improve user experience
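The "stations within a specific radius along the journey route" step described in How we built it reduces to a great-circle distance check between each candidate station and the points of the route polyline. The sketch below uses the haversine formula; the radius, sample coordinates, and data shape are assumptions for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def stations_near_route(route, stations, radius_km=5):
    """Keep chargers lying within radius_km of any sampled point on the route."""
    keep = []
    for st in stations:
        if any(haversine_km(st["lat"], st["lon"], lat, lon) <= radius_km
               for lat, lon in route):
            keep.append(st)
    return keep

route = [(43.47, -80.54), (43.55, -80.25), (43.65, -79.38)]   # sampled route points
stations = [
    {"name": "Fast Charge Kitchener", "lat": 43.45, "lon": -80.49},
    {"name": "Plugshare Barrie", "lat": 44.39, "lon": -79.69},
]
print([s["name"] for s in stations_near_route(route, stations)])
```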
## Inspiration One charge of the average EV's battery uses as much electricity as a house uses every 2.5 days. This puts a huge strain on the electrical grid: people usually plug in their car as soon as they get home, during what is already peak demand hours. At this time, not only is electricity the most expensive, but it is also the most carbon-intensive; as much as 20% generated by fossil fuels, even in Ontario, which is not a primarily fossil-fuel dependent region. We can change this: by charging according to our calculated optimal time, not only will our users save money, but save the environment. ## What it does Given an interval in which the user can charge their car (ex., from when they get home to when they have to leave in the morning), ChargeVerte analyses live and historical data of electricity generation to calculate an interval in which electricity generation is the cleanest. The user can then instruct their car to begin charging at our recommended time, and charge with peace of mind knowing they are using sustainable energy. ## How we built it ChargeVerte was made using a purely Python-based tech stack. We leveraged various libraries, including requests to make API requests, pandas for data processing, and Taipy for front-end design. Our project pulls data about the electrical grid from the Electricity Maps API in real-time. ## Challenges we ran into Our biggest challenges were primarily learning how to handle all the different libraries we used within this project, many of which we had never used before, but were eager to try our hand at. One notable challenge we faced was trying to use the Flask API and React to create a Python/JS full-stack app, which we found was difficult to make API GET requests with due to the different data types supported by the respective languages. We made the decision to pivot to Taipy in order to overcome this hurdle. ## Accomplishments that we're proud of We built a functioning predictive algorithm, which, given a range of time, finds the timespan of electricity with the lowest carbon intensity. ## What we learned We learned how to design critical processes related to full-stack development, including how to make API requests, design a front-end, and connect a front-end and backend together. We also learned how to program in a team setting, and the many strategies and habits we had to change in order to make it happen. ## What's next for ChargeVerte A potential partner for ChargeVerte is power-generating companies themselves. Generating companies could package ChargeVerte and a charging timer, such that when a driver plugs in for the night, ChargeVerte will automatically begin charging at off-peak times, without any needed driver oversight. This would reduce costs significantly for the power-generating companies, as they can maintain a flatter demand line and thus reduce the amount of expensive, polluting fossil fuels needed.
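The core recommendation in ChargeVerte, finding the cleanest contiguous charging window inside the interval the driver is plugged in, amounts to a sliding-window minimum over carbon-intensity values. The hourly numbers below are made up for illustration; the real app pulls them from the Electricity Maps API.

```python
def cleanest_window(intensities, charge_hours):
    """Return (start_index, average) of the lowest-carbon contiguous window.

    intensities: hourly gCO2eq/kWh values covering the plug-in interval.
    charge_hours: how many consecutive hours the car needs to charge.
    """
    window = sum(intensities[:charge_hours])
    best_start, best_sum = 0, window
    for i in range(charge_hours, len(intensities)):
        window += intensities[i] - intensities[i - charge_hours]
        if window < best_sum:
            best_start, best_sum = i - charge_hours + 1, window
    return best_start, best_sum / charge_hours

# Hourly intensity forecast from 6 pm to 8 am (illustrative values).
forecast = [180, 170, 160, 120, 90, 60, 55, 50, 52, 58, 75, 110, 140, 160]
start, avg = cleanest_window(forecast, charge_hours=4)
print(f"Start charging {start} hours after plug-in (avg {avg:.0f} gCO2eq/kWh)")
```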
## Inspiration Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them. ## What it does CarChart is an eco-focused consumer tool which is designed to allow a consumer to make an informed decision when it comes to purchasing a car. At the same time, this tool is also designed to measure the environmental impact that a consumer would incur as a result of purchasing a vehicle. With this tool, a customer can make an auto purchase that works for both them and the environment. This tool allows you to search by any combination of ranges including Year, Price, Seats, Engine Power, CO2 Emissions, Body type of the car, and fuel type of the car. In addition to this, it provides a nice visualization so that the consumer can compare the pros and cons of two different variables on a graph. ## How we built it We started out by webscraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with Javascript and P5.js (along with CSS) powering the graphics. The Django site is also hosted in Google Cloud. ## Challenges we ran into Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way. Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code. The last challenge that we ran into was getting our front-end to play nicely with our backend code. ## Accomplishments that we're proud of We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud of the fact that we were able to accomplish 90+% of the goal we set out to do without the use of any APIs. ## What we learned Our collaboration on this project necessitated a comprehensive review of git and the shared pain of having to integrate many moving parts into the same project. We learned how to utilize Google's App Engine and utilize Google's MySQL server. ## What's next for CarChart We would like to expand the front-end to have even more functionality. Some of the features that we would like to include would be: * Letting users pick lists of cars that they are interested in and compare them * Displaying each datapoint with an image of the car * Adding even more dimensions that the user is allowed to search by ## Check the Project out here!! <https://pennapps-xx-252216.appspot.com/>
partial
## Inspiration Currently, the insurance claims process is quite labour intensive. A person has to investigate the car to approve or deny a claim, and so we aim to alleviate this cumbersome process and make it smooth and easy for policy holders. ## What it does Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout. ## How we built it The frontend is built with just static HTML, CSS and JavaScript. We used Materialize CSS to achieve some of our UI mocks created in Figma. Conveniently, we have also created our own "state machine" to make our web-app more responsive. ## Challenges we ran into > > I've never done any machine learning before, let alone trying to create a model for a hackathon project. I definitely took quite a bit of time to understand some of the concepts in this field. *-Jerry* > > > ## Accomplishments that we're proud of > > This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry* > > > ## What we learned > > Attempting to do a challenge with very little description of what the challenge actually is asking for is like being a man stranded on an island. *-Jerry* > > > ## What's next for Quick Quote Things that are on our roadmap to improve Quick Quote: * Apply Google Analytics to track users' movement and collect feedback to enhance our UI. * Enhance our neural network model to enrich our knowledge base. * Train our data with more evaluation to give more depth. * Include ads (mostly auto company ads).
## Inspiration As a team, we've all witnessed the devastation of muscular-degenerative diseases, such as Parkinson's, on the family members of the afflicted. Because we didn't have enough money or resources or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection; the earlier the victim can recognize the disease and report it to his/her physician, the more effective the treatments we have become. ## What it does HandyTrack uses three tests: a Flex Test, which tests the ability of the user to bend their fingers into a fist, a Release Test, which tests the user's speed in releasing the fist, and a Tremor Test, which measures the user's hand stability. The results of all three tests are stored and used, over time, to look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors. ## How we built it For the software, we built the entirety of the application in the Arduino IDE using C++. As for the hardware, we used 4 continuous rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make some rings for the users to put their individual fingers in. The 4 continuous rotation servos were used to provide resistance against the user's hands. The flex sensor, which is attached to the user's palm, is used to control the servos; the more bent the sensor is, the faster the servo rotation. The flex sensor is also used to measure the time it takes for the user to release the fist, a.k.a. the time it takes for the sensor to return to the original position. The accelerometer is used to detect the changes in the user's hand's position, and changes in that position represent the user's hand tremors. All of this data is sent to the SD card, which in turn allows us to review trends over time. ## Challenges we ran into Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. Also, because the hack is so hardware intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner. ## Accomplishments that we're proud of Honestly, we're happy that we got all of our functions running. It's kind of difficult only having one person code for most of the time. Also, we think our hardware is on-point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases. ## What we learned We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable. ## What's next for HandyTrack Well, we obviously need to make a much more user-friendly app. 
We would also want to create a database to store the values of multiple users, so that we can not only track individual users but also build up data of our own, using trends across different users to compare against an individual and create more accurate diagnostics.
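On the analysis side, the tremor measurement logged to the microSD card boils down to how much the hand's acceleration deviates over a test window. A hedged offline sketch of that metric is below; the sample format, units, and the use of standard deviation are assumptions rather than the device's exact algorithm.

```python
import math
import statistics

def tremor_score(samples):
    """Standard deviation of acceleration magnitude over a test window.

    samples: list of (ax, ay, az) readings in g, as logged to the SD card.
    A larger score means a shakier hand during the Tremor Test.
    """
    magnitudes = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    return statistics.pstdev(magnitudes)

# Two illustrative sessions logged a week apart.
week1 = [(0.01, 0.02, 0.99), (0.00, 0.01, 1.01), (0.02, 0.00, 1.00)]
week2 = [(0.10, -0.08, 0.95), (-0.12, 0.11, 1.06), (0.09, -0.05, 0.92)]

print(f"week 1 tremor: {tremor_score(week1):.4f}")
print(f"week 2 tremor: {tremor_score(week2):.4f}")  # an upward trend could be flagged
```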
## Inspiration We are inspired by how Machine Learning can streamline a lot of our lives and minimize possible errors which occur. In the healthcare and financial sectors, one of the most common issues in insurance is how to best evaluate a quote for the consumer. Therefore, upon seeing the challenge online during the team-formation period, we decided to work on it and devise an algorithm and data model for each consumer, along with a simple app for consumers to use on the front end. ## What it does Upon starting the app, the user can check to see different plans offered by the company. They are listed in a ScrollView table, so customers can get a simple idea of what kinds of deals/packages there are. Then, the user can proceed to the "Information" page and fill out their personal information to request a quotation from the system; the user data is transmitted to our server, where the predictions are made. The app then returns a suitable plan for the user, along with other data graphs to illustrate the general demographics of the program's participants. ## How we built it The app is built using React Native, which is cross-platform compatible for iOS, Android and the web. For the model, we used R and Python to train it. We also used Kibana to perform data visualization and Elasticsearch as the server. ## Challenges we ran into It was hard to come up with more filters to further perfect our model by observing the patterns within the sample data set. ## Accomplishments that we're proud of Improving the accuracy of the model to twice what we started off with by applying different filters and devising different algorithms. ## What we learned We are now more proficient in terms of training models, developing React Native applications, and using Machine Learning to solve daily life problems by spotting data patterns and utilizing them to come up with algorithms for the data set. ## What's next for ViHack Further fine-tuning of the recognition model to improve the percentage of correct predictions of our currently-trained model.
winning
Our web application, Econvert, is a web-based tool that allows the user to select any currency (euros, Mexican pesos, pounds, yen, etc.), any ancient currency (shekels, Roman currency, silver coins, etc.), and even in-game currencies (V-Bucks, Robux, Rocket League credits, etc.). The application's frontend and backend are managed entirely in Python, by importing the Reflex API to create our starting HTML, CSS, and JavaScript template. Our backend runs on Reflex, which allows our Python code to be converted into JavaScript, and we also use the Together AI API as our currency converter. Our site takes two inputs from the user: the currency they want the USD amount converted into, and the amount of money (USD). Our AI then returns the conversion amount, and it is displayed to the user in a chatbot-style format.
## Inspiration It's very difficult to accurately estimate how much food is going into your stomach. Many existing apps provide inaccurate calorie counts since the food items are largely generalized. ## What it does Our app allows users to take pictures of food items, and it returns the number of calories, what they must do to burn that many calories, and keeps track of what food and how many calories they are eating throughout the day. ## How I built it Programmed the stepper motor along with the pressure sensor using the Arduino IDE. Created the Android app using Android Studio. Connected to the CloudSight API for object recognition processing. ## Challenges I ran into Setting up the CloudSight API was quite troublesome. ## Accomplishments that I'm proud of Creating a polished mechanical system that is not only functional, but aesthetically pleasing as well. Learning about image processing. ## What I learned Before we finalized our project, we learned the process of 3D reconstruction using 2D images. We improved our image processing skills using OpenCV and OpenGL. Also, we learned how to integrate APIs for object recognition. ## What's next for Food Facts Food Facts can be used by anyone who watches their diet, whether they are trying to gain muscle or lose fat. It will be more accurate than other apps currently on the market, and it is an easier and faster process.
## Inspiration Remember the thrill of watching mom haggle like a pro at the market? Those nostalgic days might seem long gone, but here's the twist: we can help you carry out the generational legacy. Introducing our game-changing app – it's not just a translator, it’s your haggling sidekick. This app does more than break down language barriers; it helps you secure deals. You’ll learn the tricks to avoid the tourist trap and get the local price, every time. We’re not just reminiscing about the good old days; we’re rebooting them for the modern shopper. Get ready to haggle, bargain, and save like never before! ## What it does Back to the Market is a mobile app specifically crafted to enhance communication and negotiation for users in foreign markets. The app shines in its ability to analyze quoted prices using local market data, cultural norms, and user-set preferences to suggest effective counteroffers. This empowers users to engage in informed and culturally appropriate negotiations, without being overcharged. Additionally, Back to the Market offers a customization feature, allowing users to tailor their spending limits. The user interface is simple and cute, making it accessible for a broad range of users regardless of their technical experience. Its integration of these diverse features positions Back to the Market not just as a tool for financial negotiation, but as a comprehensive companion for a more equitable, enjoyable, and efficient international shopping experience. ## How we built it Back to the Market was built by separating the front-end from the back-end. The front-end consists of React-Native, Expo Go, and Javascript to develop the mobile app. The back-end consists of Python, which was used to connect the front-end to the back-end. The Cohere API was used to generate the responses and determine appropriate steps to take during the negotiation process. ## Challenges we ran into During the development of Back to the Market, we faced two primary challenges. First was our lack of experience with React Native, a key technology for our app's development. While our team was composed of great coders, none of us had ever used React prior to the competition. This meant we had to quickly learn and master it from the ground up, a task that was both challenging and educational. Second, we grappled with front-end design. Ensuring the app was not only functional but also visually appealing and user-friendly required us to delve into UI/UX design principles, an area we had little experience with. Luckily, through the help of the organizers, we were able to adapt quickly with few problems. These challenges, while demanding, were crucial in enhancing our skills and shaping the app into the efficient and engaging version it is today. ## Accomplishments that we're proud of We centered the button on our first try 😎 In our 36-hour journey with Back to the Market, there are several accomplishments that stand out. Firstly, successfully integrating Cohere for both the translation and bargaining aspects of the app was a significant achievement. This integration not only provided robust functionality but also ensured a seamless user experience, which was central to our vision. Secondly, it was amazing to see how quickly we went from zero React-Native experience to making an entire app with it in less than 24 hours. We were able to create an app that is both aesthetically pleasing and highly functional. 
This rapid skill acquisition and application in a short time frame was a testament to our team's dedication and learning agility. Finally, we take great pride in our presentation and slides. We managed to craft an engaging and dynamic presentation that effectively communicated the essence of Back to the Market. Our ability to convey complex technical details in an accessible and entertaining manner was crucial in capturing the interest and understanding of our audience. ## What we learned Our journey with this project was immensely educational. We learned the value of adaptability through mastering React-Native, a technology new to us all, emphasizing the importance of embracing and quickly learning new tools. Furthermore, delving into the complexities of cross-cultural communication for our translation and bargaining features, we gained insights into the subtleties of language and cultural nuances in commerce. Our foray into front-end design taught us about the critical role of user experience and interface, highlighting that an app's success lies not just in its functionality but also in its usability and appeal. Finally, creating a product is the easy part, making people want it is where a lot of people fall. Thus, crafting an engaging presentation refined our storytelling and communication skills. ## What's next for Back to the Market Looking ahead, Back to the Market is poised for many exciting developments. Our immediate focus is on enhancing the app's functionality and user experience. This includes integrating translation features to allow users to stay within the app throughout their transaction. In parallel, we're exploring the incorporation of AI-driven personalization features. This would allow Back to the Market to learn from individual user preferences and negotiation styles, offering more tailored suggestions and improving the overall user experience. The idea can be expanded by creating a feature for users to rate suggested responses. Use these ratings to refine the response generation system by integrating the top-rated answers into the Cohere model with a RAG approach. This will help the system learn from the most effective responses, improving the quality of future answers. Another key area of development is utilising computer vision so that users can simply take a picture of the item they are interested in purchasing instead of having to input an item name, which is especially handy in areas where you don’t know exactly what you’re buying (ex. cool souvenir). Furthermore, we know that everyone loves a bit of competition, especially in the world of bargaining where you want the best deal possible. That’s why we plan on incorporating a leaderboard for those who save the most money via our negotiation tactics.
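The counteroffer step can be illustrated with Cohere's Python SDK; the preamble text, model name, and prompt format below are assumptions for the sketch rather than the app's actual prompt.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

PREAMBLE = (
    "You are a friendly haggling coach. Given an item, a quoted price, and the "
    "buyer's budget, suggest one culturally polite counteroffer and a short "
    "phrase the buyer can say to the seller."
)

def suggest_counteroffer(item: str, quoted: float, budget: float, currency: str) -> str:
    message = (
        f"Item: {item}\nQuoted price: {quoted} {currency}\n"
        f"My maximum budget: {budget} {currency}\nWhat should I counter with?"
    )
    response = co.chat(model="command-r", message=message, preamble=PREAMBLE)
    return response.text

print(suggest_counteroffer("hand-woven scarf", 450, 250, "THB"))
```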
losing
## Inspiration We, as passionate tinkerers, understand the struggles that come with making a project come to life (especially for beginners). **80% of U.S. workers agree that learning new skills is important, but only 56% are actually learning something new**. From not knowing how electrical components should be wired, to not knowing what a particular component does, to not knowing the correct procedure to effectively assemble a creation, TinkerFlow is here to help you ease this process, all in one interface. ## What it does -> Image identification/classification or text input of available electronic components -> Powered by Cohere and Groq LLMs, generates a wiring scheme and detailed instructions (with personality!) to complete an interesting project that is possible with the electronics available -> Using React Flow, we developed our own library (as other existing software was deprecated) that generates electrical schematics to make the fine, precise and potentially tedious work of wiring projects easier. -> Displays generated text of instructions to complete the project ## How we built it We allow the user to upload a photo, which is sent to the backend (handled by Flask), where we use Python and the Google Vision AI to do image classification and identify the component with 80% accuracy. To provide our users with a high quality and creative response, we used a central LLM to find projects that could be created based on the inputted components, and from there generate instructions, schematics, and code for the user to use to create their project. For this central LLM, we offer two options: Cohere and Groq. Our default model is the Cohere LLM, which, using its integrated RAG and preamble capability, offers superior accuracy and a custom personality for our responses, providing more fun and engagement for the user. Our second option, Groq, though providing a lower-quality response, offers fast processing times, an area where Cohere falls short. Both of these LLMs rely on large, meticulously defined prompts (specifying everything from the output structure to the method of listing wires), which produce the results necessary to generate the final output seen by the user. In order to provide the user with different forms of information, we decided to present electrical schematics on the webpage. However, during development, due to many circumstances, our group had to use simple JavaScript libraries to create this functionality. ## Challenges we ran into * LLM misbehaving: The biggest challenge in incorporating the Cohere LLM was generating consistent results from the prompts used to produce all of the information about the proposed project. The solution to this was to include very specifically defined prompts with examples to reduce the number of errors generated by the LLM. * We were not able to find a predefined electrical schematics library to generate schematic diagrams, therefore we had to start from scratch and create our own schematic drawer based on a basic JS library. ## Accomplishments that we're proud of Creating electrical schematics using a basic JS library. Creating LLM prompts that output consistently across multiple fields. ## What we learned The ability to overcome troubles - consistently innovating for solutions, even if there may not have been an easy route (ex. an existing library) to use - our schematic diagrams were custom made! ## What's next for TinkerFlow Aiming for faster LLM processing speed.
Update the user interface of the website, especially for the electrical schematic graph generation. Implement the export of code files, to allow for even more information being provided to the user for their project.
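As a rough illustration of the component-identification step described above, a Python sketch using Google Cloud Vision label detection might look like this (the confidence cutoff and filename are assumptions, not the project's actual values):

```python
# Minimal sketch of the component-identification step: send an uploaded photo to
# Google Cloud Vision label detection and keep labels above a confidence cutoff.
from google.cloud import vision

def identify_components(image_bytes, min_score=0.80):
    client = vision.ImageAnnotatorClient()
    image = vision.Image(content=image_bytes)
    response = client.label_detection(image=image)
    # Each annotation carries a description ("Breadboard", "Resistor", ...) and a score
    return [
        label.description
        for label in response.label_annotations
        if label.score >= min_score
    ]

with open("parts_photo.jpg", "rb") as f:  # hypothetical filename
    print(identify_components(f.read()))
```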
## Inspiration We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders go undiagnosed or misdiagnosed often leading to these individuals constantly struggling to read and write which is an integral part of your education. With such learning disabilities, learning a new language would be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward individuals. ## What it does ReadRight offers interactive language lessons but with a unique twist. It reads out the prompt to the user as opposed to it being displayed on the screen for the user to read themselves and process. Then once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way individuals with reading and writing disabilities can still hone their skills in a new language. ## How we built it We built the frontend UI using React, Javascript, HTML and CSS. For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM. Finally, for user authentication, we made use of Firebase. ## Challenges we faced + What we learned When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge. Third-party integration took us significant time to test and debug. Secondly, we struggled with the generation of prompts for the user to repeat and using AI to implement that. ## Accomplishments that we're proud of This was the first time for many of our members to be integrating AI into an application that we are developing so that was a very rewarding experience especially since AI is the new big thing in the world of technology and it is here to stay. We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things. ## What's next for ReadRight As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
## Inspiration Behind Tangle ⚛️ The idea for Tangle emerged from a recognition that traditional networking methods, such as LinkedIn profiles, have remained static, even as large in-person events like Hack the North make a strong comeback. We saw an opportunity to reimagine how people connect by creating a more dynamic and visual way of representing relationships formed at events. Tangle aims to bridge the gap by providing an interactive web app that helps participants remember, revisit, and reinforce connections made during events, ensuring those relationships don’t just fade into the background.🤝 ## How We Built Tangle 🛠️ Tangle is a web app powered by React for the front end and Convex for the back end, with the entire platform hosted on Vercel. We used a GoDaddy domain to make Tangle easily accessible to all users.🌐 Our backend, built with Convex, integrates a vector embedding search powered by Cohere. This allows us to offer advanced search functionality that helps users locate people they’ve met based on attributes and interactions. On the front end, we designed a simple yet intuitive user interface in React, focusing on ease of navigation. The landing page introduces users to Tangle, while the home page allows them to search for individuals or features related to people they've met at the event. The query is sent to the backend, where it’s processed, and the results are returned in real-time.⚡️ ## Challenges We Overcame Building Tangle 🚧 **Design Challenges 🎨** One of our main challenges was creating a platform that caters to different event attendees—hackers, recruiters, and speakers—all of whom have different goals. Designing Tangle to meet the needs of all these groups while maintaining a cohesive user experience required careful planning and iteration. **Vector Embeddings 🧠** A technical challenge was mastering the use of vector embeddings. These embeddings were critical for enabling intelligent search functions within Tangle. We invested considerable time in optimizing the embedding process to accurately capture the nuanced relationships between event attendees. **Personal Challenges 💪** Balancing external commitments and the time pressure of the hackathon was no easy feat. At one point, we experienced challenges leading to a demotivating lull 10 hours prior to submissions, but we pushed through as a team in the final hours to finish strong. ## Accomplishments We Celebrate at Tangle 🎉 * Successfully integrating Cohere’s vector embeddings for advanced semantic search within our app. * Mastering and utilizing Convex to build a robust and efficient backend. * Designing a user-friendly interface. * Overcoming personal time management challenges to deliver a working, innovative platform within the hackathon’s time constraints. ## Lessons Learned from Tangle's Journey 📚 **The Importance of Planning 📝** One of the key takeaways from building Tangle was the necessity of meticulous planning, especially regarding user flow and system architecture. Diving straight into coding without first understanding how users would interact with our app resulted in some avoidable setbacks and rework. We learned that investing time upfront in planning leads to a more streamlined and efficient development process. **Simplifying Systems 🧩** We initially overcomplicated parts of our application, which led to inefficiencies. Simplifying and focusing on the core functionalities of Tangle allowed us to optimize the system and deliver a better user experience.
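To make the "rank potential connections" idea concrete, here is a toy Python sketch of cosine-similarity ranking over stored profile embeddings; it is illustrative only and not Tangle's actual Convex/Cohere implementation:

```python
# Toy sketch of the "search for people you've met" ranking: embed the query,
# then rank stored attendee embeddings by cosine similarity. Names are illustrative.
import numpy as np

def rank_matches(query_vec, profiles):
    """profiles: list of (name, embedding) pairs; returns names, best match first."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scored = []
    for name, emb in profiles:
        v = np.asarray(emb, dtype=float)
        scored.append((float(q @ (v / np.linalg.norm(v))), name))
    return [name for _, name in sorted(scored, reverse=True)]

# e.g. rank_matches(embed("recruiter interested in ML infra"), stored_profiles)
```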
winning
## Inspiration ## What it does PhyloForest helps researchers and educators by improving how we see phylogenetic trees. Strong, useful data visualization is key to new discoveries and patterns. Thanks to our product, users have a greater ability to perceive depth of trees by communicating widths rather than lengths. The length between proteins is based on actual lengths scaled to size. ## How we built it We used EggNOG to get phylogenetic trees in Newick format, then parsed them using a recursive algorithm to get the differences between the protein group in question. We connected names to IDs using the EBI (European Bioinformatics Institute) database, then took the lengths between the proteins and scaled them to size for our Unity environment. After we put together all this information, we went through an extensive integration process with Unity. We used EBI APIs for Taxon information, EggNOG gave us NCBI (National Center for Biotechnology Information) identities and structure. We could not use local NCBI lookup (as eggNOG does) due to the limitations of Virtual Reality headsets, so we used the EBI taxon lookup API instead to make the tree interactive and accurately reflect the taxon information of each species in question. Lastly, we added UI components to make the app easy to use for both educators and researchers. ## Challenges we ran into Parsing the EggNOG Newick tree was our first challenge because there was limited documentation and data sets were very large. Therefore, it was difficult to debug results, especially with the Unity interface. We also had difficulty finding a database that could connect NCBI IDs to taxon information with our VR headset. We also had to implement a binary tree structure from scratch in C#. Lastly, we had difficulty scaling the orthologs horizontally in VR, in a way that would preserve the true relationships between the species. ## Accomplishments that we're proud of The user experience is very clean and immersive, allowing anyone to visualize these orthologous groups. Furthermore, we think this occupies a unique space that intersects the fields of VR and genetics. Our features, such as depth and linearized length, would not be as cleanly implemented in a 2-dimensional model. ## What we learned We learned how to parse Newick trees, how to display a binary tree with branches dependent on certain lengths, and how to create a model that relates large amounts of data on base pair differences in DNA sequences to something that highlights these differences in an innovative way. ## What's next for PhyloForest Making the UI more intuitive so that anyone would feel comfortable using it. We would also like to display more information when you click on each ortholog in a group. We want to expand the amount of proteins people can select, and we would like to manipulate proteins by dragging branches to better identify patterns between orthologs.
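For readers unfamiliar with the format, a compact recursive-descent Newick parser might look like the Python sketch below (the project itself did this in C# for Unity, so this is just an illustration of the same idea):

```python
# A compact recursive-descent parser for simple Newick strings like
# "((A:0.1,B:0.2):0.05,C:0.3);" -> nested (name, length, children) dicts.
def parse_newick(s):
    s = s.strip().rstrip(";")
    pos = 0

    def parse_clade():
        nonlocal pos
        children = []
        if s[pos] == "(":
            pos += 1  # consume '('
            children.append(parse_clade())
            while s[pos] == ",":
                pos += 1
                children.append(parse_clade())
            pos += 1  # consume ')'
        # optional node label
        start = pos
        while pos < len(s) and s[pos] not in ",():":
            pos += 1
        name = s[start:pos]
        # optional branch length after ':'
        length = 0.0
        if pos < len(s) and s[pos] == ":":
            pos += 1
            start = pos
            while pos < len(s) and s[pos] not in ",()":
                pos += 1
            length = float(s[start:pos])
        return {"name": name, "length": length, "children": children}

    return parse_clade()

print(parse_newick("((A:0.1,B:0.2):0.05,C:0.3);"))
```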
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration ## What it does It searches for a water bottle! ## How we built it We built it using a Roomba, a Raspberry Pi with a Pi Camera, Python, and Microsoft's Custom Vision. ## Challenges we ran into Attaining wireless communication between the Pi and all of our PCs was tricky. Implementing Microsoft's API was troublesome as well. A few hours before judging we exceeded the call volume for the API and were forced to create a new account to use the service. This meant re-training the AI and altering code, and losing access to our higher-accuracy recognition iterations during final tests. ## Accomplishments that we're proud of Actually getting the wireless networking to work consistently. Surpassing challenges like framework indecision between IBM and Microsoft. We are proud of making things that lacked proper documentation, like the Pi Camera, work. ## What we learned How to use GitHub, train an AI object recognition system using Microsoft APIs, and write clean drivers. ## What's next for Cueball's New Pet Learn to recognize other objects.
winning
## Inspiration COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought -** we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse. Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea to use computer vision to help sensory impaired users detect emotion, we generalized our use-case to emotional analytics and real-time emotion identification for video conferencing in general. ## What it does In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed. After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion **Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities/emotion recognition difficulties.** **Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.** ## How we built it Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and Agora SDK, and imported the ML model using Tensorflow.JS. ## Challenges we ran into Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle - the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available on our repository, and with access to a PC, we're confident we would have been able to use it. ## Accomplishments that we're proud of We overcame our ML roadblock and managed to produce a fully-functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility. And our two last minute accessibility add-ons! ## What we learned It's okay - and, in fact, in the spirit of hacking - to innovate and leverage on pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective. Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important - that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees. ## What's next for empath.ly From short-term to long-term stretch goals: * We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience. 
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions. * We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user.
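As an illustration of the per-frame emotion pass described above, a simplified Python sketch with OpenCV and a pretrained Keras CNN could look like this; the model file and label order are placeholders, not the ones empath.ly actually uses:

```python
# Rough sketch of the per-frame emotion pass: grab a frame, crop the face with an
# OpenCV Haar cascade, and classify it with a pretrained Keras model.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("emotion_cnn.h5")  # hypothetical pretrained model
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def emotions_in_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        detections.append(LABELS[int(np.argmax(probs))])
    return detections  # one predicted emotion per detected face
```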
## Inspiration As someone who has always wanted to speak in ASL (American Sign Language), I have always struggled with practicing my gestures, as I, unfortunately, don't know any ASL speakers to try and have a conversation with. Learning ASL is an amazing way to foster an inclusive community, for those who are hearing impaired or deaf. DuoASL is the solution for practicing ASL for those who want to verify their correctness! ## What it does DuoASL is a learning app, where users can sign in to their respective accounts, and learn/practice their ASL gestures through a series of levels. Each level has a *"Learn"* section, with a short video on how to do the gesture (ie 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our Action Recognition neural network to determine if you did the gesture correctly! ## How we built it DuoASL is built up of two separate components; **Frontend** - The Frontend was built using Next.js (React framework), Tailwind and Typescript. It handles the entire UI, as well as video collection during the *"Learn"* Section, which it uploads to the backend **Backend** - The Backend was built using Flask, Python, Jupyter Notebook and TensorFlow. It is run as a Flask server that communicates with the front end and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the Action Recognition neural network, which uses OpenCV and Tensorflow to apply the model to the video and determine the most prevalent ASL gesture. It saves this output to an array, which the Flask server reads and responds to the front end. ## Challenges we ran into As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from the MediaPipe Holistic (which was used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import Tensorflow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching the system to a different computer. ## Accomplishments that we're proud of We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was pretty quickly finished too (excluding the bugs), especially considering how for many of us this was our first time even using machine learning on a project! ## What we learned We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an Action Detection network built using TensorFlow ## What's next for DuoASL We would like to: * Integrate video feedback, that provides detailed steps on how to improve (using an LLM?) * Add more words to our model! * Create a practice section that lets you form sentences! * Integrate full mobile support with a PWA!
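To sketch the landmark-extraction step described above, here is a small Python example using MediaPipe Holistic to turn one frame into a flat keypoint vector; the exact feature layout DuoASL uses may differ:

```python
# Sketch of the landmark-extraction step: MediaPipe Holistic turns each video frame
# into a flat keypoint vector that an action-recognition model can consume.
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def frame_to_keypoints(frame_rgb, holistic):
    results = holistic.process(frame_rgb)

    def flat(landmarks, n, dims=3):
        # Missing body parts become zero vectors so the feature length stays fixed
        if landmarks is None:
            return np.zeros(n * dims)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()

    pose = flat(results.pose_landmarks, 33)
    left = flat(results.left_hand_landmarks, 21)
    right = flat(results.right_hand_landmarks, 21)
    return np.concatenate([pose, left, right])  # one feature vector per frame

# with mp_holistic.Holistic(min_detection_confidence=0.5) as holistic:
#     vec = frame_to_keypoints(rgb_frame, holistic)
```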
## Story Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment. ### Diagnosis Existing mental health apps require the use to proactively input their mood, their thoughts, and concerns. With these apps, it's easy to hide their true feelings. We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how they're really feeling. At the same time, we wanted to use an everyday household object to make it accessible to everyone. ### Treatment Mira focuses on being engaging and keeping track of their emotional state. She allows them to see their emotional state and history, and then analyze why they're feeling that way using the journal. ## Technical Details ### Alexa The user's speech is being heard by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's descriptions of their day, or if they have anything on their mind, and responds with encouraging responses matching the user's speech. ### IBM Watson/Bluemix The speech from Alexa is being read to IBM Watson which performs sentiment analysis on the speech to see how the user is actively feeling from their text. ### Google App Engine The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts. ### Google Vision Machine Learning We take photos using a camera built into the mirror. The photos are then sent to the Vision ML API, which finds the user's face and gets the user's emotions from each photo. They're then stored directly into Google Datastore which integrates well with Google App Engine ### Data Visualization Each user can visualize their mental history through a series of graphs. The graphs are each color-coded to certain emotional states (Ex. Red - Anger, Yellow - Joy). They can then follow their emotional states through those time periods and reflect on their actions, or thoughts in the mood journal.
winning
## Inspiration Being at wealthsimple, thinking of something fintech, working with chrome extensions, building a simple tool ## What it does Snag searches through twitter feeds and filters them by customized location, keyword, text, & emotion to hone in on potential customers. ## How we built it * PostgreSQL on the back end to save potential customer messages * Ruby on Rails to parse twitter messages, filter through Indico and provide API endpoints * React to consume API endpoints and generate views * Chrome to pack into a simple extension tool for easy use ## Challenges we ran into * Time constraints * API limitations (give us twitter firehose) * Data quantity and quality * Machine learning algorithms (or lack thereof) ## Accomplishments that we're proud of * Making something that works in a short period of time (the data it pulls is pretty relevant!) ## What we learned * Chrome extensions * Delved a bit into machine learning and realized that it was a pretty hard thing to get right * Building an app from bottom to top ## What's next for Snag * Pulling messages from other platforms such as Reddit, Facebook, Medium, etc... * Improved filtering capabilities * Customized for individual companies
## Inspiration We set out to solve two problems with Genesis: First, the high barrier of access both financially and legally for retail investors to invest in startups. Second, founders of almost all early stage startup holds almost nothing but high risk, illiquid equity that is no different than ink on some paper. ## What it does Genesis fundamentally changes the way founders and investors interact in three crucial ways: 1. The Genesis dApp, built on the Solana blockchain allows startup founders to mint a portion of their equity as Genesis tokens to be auctioned as fractional equity on the Genesis auction platform. The founder decides however much equity to mint into (up to 5,000) tokens, and inputs relevant company information such as user counts, current revenue, past fundraising, and more to be displayed on their business page. This page serves as a place for investors to mint equity tokens, and upon the minting of all units, token holders are able to auction Minted Genesis tokens of a founder's equity on the same page. 2. In order for minting to be successful, the founder must provide a series of personal & company identity verifications - their website url, Twitter handle, operating agreement, driver's license, etc. The founders’ responses & documents are then hashed by IPFS, stored on chain, and embedded into their business profile page for investors to perform due diligence. We handle the storage process using IPFS, a distributed file storage protocol. 3. The founder sets a threshold of minimum token holding for an investor to convert their tokens into real, legal equity. We handle this conversion in an entirely decentralized manner using Solana smart contracts. When the investor initiates a conversion and burns their tokens, the founder is provided with an in-app, natural language agreement, once both parties fill in their fields and sign with their wallet addresses, Genesis transforms the legal contract into a machine-readable object that is hashed and stored on the Solana chain. By removing unnecessary friction from the startup investment process using Defi & Smart Contracts, we want to make it easier for founders to access liquidity, raise capital, and provide retail investors with an equal shot at VC level returns. ## How we built it * We created the Frontend UI & logic using React with Typescript, specifically with the Solana web3.js & Chakra component libraries. * We created both the Genesis token minting process and the auction order book logic as smart contracts from scratch on the Solana blockchain using Rust & Anchor's testing library * We used IPFS(distributed protocol) for hashing documents and placing them onto the Blockchain. ## Challenges we ran into This was the first time any of us had built anything web3 related, so we had to continuously adjust our product vision to be realistic to the 24 hour time constraints yet still deliver an impressive product. Since we used Chakra on the front-end, it was difficult to resolve dependency conflicts with the web3.js library, resulting in hours of debugging our app in order to compile. We eventually fixed this issue by manually troubleshooting the JSON library and re-routing our react app. Being too perfectionist when it comes to UI. All of us came into this Hackathon with zero experience when it comes to writing Solana programs, so implementing the logic for both minting and an auction order book on chain using rust was one of the biggest programming challenges we had faced. 
Fortunately, Anchor provided a very helpful testing library that greatly accelerated our dev journey. ## Accomplishments that we're proud of This is an idea with real world impact - by making startup equity more accessible for retail investors, we level the playing field towards a more decentralized, equitable and democratic start-up eco-system. Despite the difficulty we faced from all fronts, we still created a beautiful front-end and a fully functional, on-chain backend for a complex idea that serves as a viable alternative to traditional fundraising. ## What we learned * Building smart contracts from scratch with rust on the Solana network. * We got much better at writing more elegant code, especially on the back-end. * Organizing complex interactions between blockchain wallets into a decentralized system. * Decentralized file storage, hashing, different protocols, testing libraries, and smart contract standards. * Type-based languages are kind of a nightmare for hackathons. * "Are we still addressing the pain point?" is a useful question when pivoting a product vision to adjust to limitations. ## What's next for Genesis * Although we implemented our token burning and on chain token to equity logic into functional Solana programs, we did not have time to integrate it with our frontend. We're gonna pull another all-nighter and build this thing out!
## Inspiration Want to see how a product, service, person or idea is doing in the court of public opinion? Market analysts are experts at collecting data from a large array of sources, but monitoring public happiness or approval ratings is notoriously difficult. Usually, focus groups and extensive data collection is required before any estimates can be made, wasting both time and money. Why bother with all of this when the data you need can be easily mined from social media websites such as Twitter? Through aggregating tweets, performing sentiment analysis and visualizing the data, it would be possible to observe trends on how happy the public is about any topic, providing a valuable tool for anybody who needs to monitor customer satisfaction or public perception. ## What it does Queries Twitter Search API to return relevant tweets that are sorted into buckets of time. Sentiment analysis is then used to categorize whether the tweet is positive or negative in regards to the search term. The collected data is visualized with graphs such as average sentiment over time, percentage of positive to percentage of negative tweets, and other in depth trend analyses. An NLP algorithm that involves the clustering of similar tweets was developed to return a representative summary of good and bad tweets. This can show what most people are happy or angry about and can provide insight on how to improve public reception. ## How we built it The application is split into a **Flask** back-end and a **ReactJS** front-end. The back-end queries the Twitter API, parses and stores relevant information from the received tweets, and calculates any extra statistics that the front-end requires. The back-end then provides this information in a JSON object that the front-end can access through a `get` request. The React front-end presents all UI elements in components styled by [Material-UI](https://material-ui.com/). [React-Vis](https://uber.github.io/react-vis/) was utilized to compose charts and graphs that presents our queried data in an efficient and visually-appealing way. ## Challenges we ran into Twitter API throttles querying to 1000 tweets per minute, a number much less than what this project needs in order to provide meaningful data analysis. This means that by itself, after returning 1000 tweets we would have to wait another minute before continuing to request tweets. With some keywords returning hundreds of thousands of tweets, this was a huge problem. In addition, extracting a representative summary of good and bad tweet topics was challenging, as features that represent contextual similarity between words are not very well defined. Finally, we found it difficult to design a user interface that displays the vast amount of data we collect in a clear, organized, and aesthetically pleasing manner. ## Accomplishments that we're proud of We're proud of how well we visualized our data. In the course of a weekend, we managed to collect and visualize a large sum of data in six different ways. We're also proud that we managed to implement the clustering algorithm. In addition, the application is fully functional with nothing manually mocked! ## What we learned We learnt about several different natural language processing techniques. We also learnt about the Flask REST framework and best practices for building a React web application. 
## What's next for Twitalytics We plan on cleaning some of the code that we rushed this weekend, implementing geolocation filtering and data analysis, and investigating better clustering algorithms and big data techniques.
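As a simplified illustration of the time-bucketing step, a pandas sketch like the following could compute average sentiment and the positive share per window (field names are assumptions):

```python
# Simplified version of the bucketing step: group scored tweets into fixed time
# windows and compute the mean sentiment plus the share of positive tweets per bucket.
import pandas as pd

def bucket_sentiment(tweets, freq="1H"):
    """tweets: iterable of dicts with 'created_at' (ISO string) and 'sentiment' in [0, 1]."""
    df = pd.DataFrame(tweets)
    df["created_at"] = pd.to_datetime(df["created_at"])
    grouped = df.set_index("created_at").resample(freq)["sentiment"]
    return pd.DataFrame({
        "avg_sentiment": grouped.mean(),
        "pct_positive": grouped.apply(lambda s: (s > 0.5).mean()),
    })
```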
losing
## Inspiration Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med! ## What it does This web app changes the logistics involved when visiting people in the hospital. Some of our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to your loved one and provide resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit since we know that these small details can be frustrating during a hospital visit. Lastly, we have a patient update and login screen, where they both run on a database we set and populate the information to the patient updates screen and validate the login data. ## How we built it We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project. ## Challenges we ran into We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion among redirect challenges, we had to completely change course around halfway in :( In the end, we learnt a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left. ## Accomplishments that we're proud of We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting. ## What we learned We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha! ## What's next for EZ-Med The next step for EZ-Med is to iron out all the bugs and have it fully functioning.
View presentation at the following link: <https://youtu.be/Iw4qVYG9r40> ## Inspiration During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group took medication for health-related reasons, and as a result, have certain external medications that result in negative drug interactions. More often than not, one of us is unable to have certain other medications (e.g. Advil, Tylenol) and even certain foods. Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of the adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take. ## What it does Our mobile application scans barcodes of medication and outputs to the user what the medication is, and any negative interactions that come with it, to ensure that users don't experience negative side effects of drug mixing. ## How we built it Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done through Java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a Python backend and Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which then scans the barcode and sends the drug information back to the Flutter mobile application. ## Challenges we ran into The consistent challenge that we seemed to run into was the integration between our parts. Another challenge was that one group member's laptop just imploded (and stopped working) halfway through the competition; Windows recovery did not pull through, and the member had to grab a backup laptop and set up the entire thing for smooth coding. ## Accomplishments that we're proud of During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end. Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative. ## What we learned Throughout the project, we gained valuable experience working with various skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All these skills were something we'd only seen or heard of elsewhere, but learning and subsequently applying them was a new experience altogether. Additionally, throughout the project, we encountered various challenges, and each one gave us a new outlook on software development. Overall, it was a great learning experience for us and we are grateful for the opportunity to work with such a diverse set of technologies. ## What's next for Medisafe Medisafe has all three dimensions to expand on, being the baby app that it is. 
Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd also like to have the possibility to take an image from the gallery instead of fresh from the camera. Lastly, we hope to be able to implement settings like a default drug to compare to, dosage dependency, etc.
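For illustration, the image-upload endpoint could look roughly like the Flask sketch below; pyzbar is used here as an example barcode decoder since the write-up doesn't name one, and `lookup_drug` stands in for the CSV-backed database:

```python
# Sketch of the image-upload endpoint: Flask receives the photo, a barcode library
# decodes it, and the code is looked up in the interactions database.
from flask import Flask, request, jsonify
from PIL import Image
from pyzbar.pyzbar import decode  # illustrative choice of decoder

app = Flask(__name__)

def lookup_drug(code):
    # Placeholder: the real version would read the interactions data built in Java
    return {"code": code, "name": "unknown", "interactions": []}

@app.route("/scan", methods=["POST"])
def scan():
    img = Image.open(request.files["image"].stream)
    barcodes = decode(img)
    if not barcodes:
        return jsonify({"error": "no barcode found"}), 422
    code = barcodes[0].data.decode("utf-8")
    return jsonify(lookup_drug(code))  # drug name plus known interactions
```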
## Inspiration The idea for Clustr came out of a problem that our friend group encountered after being sent home during the COVID-19 pandemic. We struggled to find the same rhythm that our conversations had when we were in person. Group chats were too intrusive, taking too much time to engage in and supplying an endless stream of notifications. Post-based social media apps seemed too formal and took too much effort to update frequently. We needed an app that allowed us to have the type of conversations that we had in our common rooms. We needed a virtual environment that would let us talk on our own terms. Beyond just our own needs, we were interested in reimagining online communication for all groups. We saw how much people were struggling with the loneliness of remote work and school, and we wanted to create a product that would help them reinvigorate their social connections and make new ones all together. We decided that the best solution to this problem would be a new type of social media app that creates an open environment for communities which people can engage with at whatever level suits them. This idea became Clustr. We are excited to begin this journey at HackPrinceton, and we fully intend on developing it into a startup going forward. ## What it does Clustr is a mobile app where friend groups can collectively create customizable communities. Within these communities, multiple conversations take place, each with a set of active participants. Conversations are viewable to everyone, but only notify the included members, reducing the barrier to start a new conversation. Additionally, users can create topics under which they may post conversation starters. This feature allows people to share small moments in their lives as they would in person and helps combat group inactivity. Supplementary features such as event scheduling, a collective white board, and video chat make Clustr an excellent fit for all types of groups. Further from the frontend experience, several machine learning and data driven models improve the user experience and promote authentic group interactions. Clustr contains natural language processing models to suggest group names and cluster similar conversations. These features can aid in conversation discovery. Additional user analytics classify users by their interests, allowing us to display relevant conversations and suggest potential friends. From our preliminary research, Clustr is the only mobile app working on sustaining fluid group conversation. Several platforms exist for direct messaging, and Slack and Discord are our biggest community competitors. We differ from these competitors by providing a vehicle for fluid conversations suitable for friend groups and workplaces alike. ## How we built it Clustr contains multiple interconnected components - a mobile application, a backend server, an NLP suite, and a user analysis simulation. Let's take a look at each: 1. **Mobile Application**: We built the user facing mobile application using React Native on an Expo managed workflow. We utilized Redux to handle state, Firebase for authentication, and Stream Chat API (<https://getstream.io/chat/>) to integrate real-time messaging features. We first created detailed mockups in Figma before translating them into HTML, JavaScript, and styling. 2. **Backend Server**: On the backend, we used FastAPI in Python to communicate with the React Native frontend. 
We set up a MongoDB Cluster using Atlas hosted on Google Cloud, and designed several endpoints to perform CRUD operations for the following entities: Users, Clusters, Conversations, Pin Boards, and Posts. Detailed documentation was generated using FastAPI and Python function decorators. 3. **NLP Tools**: In addition to the basic CRUD operations, the backend server also handled communication between the mobile application and NLP tools designed to improve the user experience. We used Google Cloud Named Entity Recognition to suggest group names based on past conversation and a custom Python pipeline (involving GenSim, NLTK, TF-IDF, and Latent Semantic Indexing) to compute conversation similarities within a cluster. 4. **User Analysis Simulation**: Additionally, we developed an algorithm to estimate a user's interests given their recent conversation history. We built a simulation using Python, Pandas, SciKit Learn, and MatPlotLib to test the validity of this method under several circumstances. ## Challenges we ran into Our project contained many moving pieces, i.e. the frontend, backend, database, ML models, etc. Creating each individual component and gluing components together required adaptive planning and frequent check-ins from the team. With the frontend, we faced challenges debugging the starter code for the Stream API and having it mesh well with our existing React Native project. On the backend, it was our first time using MongoDB properly, so there was a steep learning curve! We also pivoted several times while deciding the optimal models for the NLP toolkit and user analysis simulation. ## Accomplishments that we're proud of Coming into the hackathon with just an idea, it's great to look back and see just how far we were able to come in 36 hours. Like many programming projects, we got stuck a couple times, most notably while implementing the Stream Chat API. We were also unfamiliar with many of the tools and topics used, but used this as an opportunity to learn and grow. We are proud of our success in designing, developing, and debugging the React Native project such that we can now run Clustr on our own devices. We are excited to build off of our experience at HackPrinceton and continue building Clustr in the future! ## What we learned Working at the edge of our abilities, we all were exposed to new knowledge, whether it be technical or interpersonal. We picked up several new frameworks and languages, including MongoDB, Figma, Stream Chat API, and more. We also gained experience collaborating in realtime on a complex project with many moving parts. GitHub and frequent Zoom calls were critical in avoiding blockers and making consistent progress. Finally, we all learned to communicate, embrace uncertainty, and enjoy the journey in addition to the destination. From another direction, we learned a lot about our idea and refined it several times during the hackathon. As we added more and more impactful features, we realized this project has a scope beyond just a weekend diversion, and could make a positive difference in real people's lives. We are very excited about Clustr and the problems it can solve, and look forward to pursuing this even after HackPrinceton ends. ## What's next for Clustr We believe that Clustr helps to solve some of the very real problems that exist with current social media and methods of online communication. We see a need for our product and know that it has the potential to have a positive impact on the world of virtual communication and relationship building. 
For these reasons we fully plan on developing Clustr into a startup. We are ready and willing to invest our time into Clustr to bring it to life. We will continue to refine our features, improve our design, and build Clustr into a usable and useful app. Thank you to everyone at HackPrinceton for being a part of our creative process. We cannot wait to continue working on Clustr!
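As a condensed illustration of the TF-IDF + Latent Semantic Indexing similarity pipeline mentioned above, a Gensim sketch might look like this (tokenization is reduced to a simple split for brevity):

```python
# Condensed sketch of the conversation-similarity pipeline (TF-IDF + LSI with Gensim).
from gensim import corpora, models, similarities

def conversation_similarities(conversations, query, num_topics=50):
    texts = [c.lower().split() for c in conversations]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    tfidf = models.TfidfModel(corpus)
    lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=num_topics)
    index = similarities.MatrixSimilarity(lsi[tfidf[corpus]])
    query_vec = lsi[tfidf[dictionary.doc2bow(query.lower().split())]]
    return list(index[query_vec])  # one similarity score per conversation
```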
winning
## Inspiration Have you ever attended a networking event and felt overwhelmed by the sheer number of people, unsure of how to find the right connections? We've all been there, wishing for a more streamlined way to meet individuals who align with our goals and values. The inspiration for Aligned.ai comes from this common challenge—finding people who will empower you to do your life's best work. Our goal was to create a system that helps individuals build sustainable and long-lasting relationships in the startup space, whether it’s founder-to-founder or founder-to-VC. ## What it does Aligned.ai is a matchmaking platform designed to connect individuals in the startup ecosystem based on deep personality and goal alignment. Users engage in live, in-depth conversations with Aligned Voice, an AI-powered by Groq, which simulates a real human interaction. The system then generates a unique personality embedding using Cohere, which is stored in Chroma DB. Using powerful vector similarity algorithms, Aligned.ai ranks potential connections, presenting users with the best matches first. This way, users can find those who are not only aligned with their professional aspirations but also resonate with their personal values. ## How we built it Aligned.ai was developed with a focus on seamless integration across multiple technologies: ![Alt text](https://d112y698adiu2z.cloudfront.net/photos/production/software_photos/003/026/013/datas/original.png) * Frontend: Built using Next.js, our frontend integrates Auth0 for tokenization and secure login. * Data Collection: Once logged in, users create their matchmaking profiles, including data from their Web Summit profile, LinkedIn, GitHub, and more. * Conversation with Aligned Voice: This **core feature** was powered by Groq, users engage in a live conversation that feels as natural as talking to a real person. We used Groq's integrations with Whisper for speech to text - then sent this info to our backend server for stateless requests to Groq's blazing fast llama 3 models. **We experimented with various platforms but found Groq to give us the latency necessary for our needs** * Personality Embedding: This was the **bread and butter** of our app. The conversation data is processed by Cohere's Embed API to generate a personality embedding, which is stored in Chroma DB vector database. * Matchmaking: Users can then search for others with similar profiles using vector similarity algorithms. This was all ran over Chroma DB's seamless and powerful interface with multiple integrations. Summaries of similarities are hosted using Groq and Cohere. The system ranks matches from most to least aligned, providing a curated list of potential connections. * Reach Out: Once a match is found, users can reach out directly using the embedded social information. ## Challenges we ran into * One of the main challenges we faced was prompt engineering—ensuring that the AI could generate meaningful and accurate personality embeddings from the conversations. * Additionally, integrating the full tech stack from frontend to backend, while maintaining real-time AI performance, posed significant difficulties. * Balancing the scale of integration with providing a clean and intuitive user experience was another key challenge we successfully navigated. ## Accomplishments that we're proud of * MVP Completion: We successfully built and deployed a minimum viable product that effectively demonstrates the core functionality of Aligned.ai. 
* Contribution to Open Source: We made a PR to improve one of the technologies we worked with, contributing back to the community. * User-Centric Design: We invested significant time in UI/UX design to ensure that the user experience is as intuitive and enjoyable as possible. ## What we learned Through this project, we learned the immense potential of AI-driven systems to facilitate meaningful connections between individuals. The ability of autonomous agents to understand and simulate human interaction signals a new paradigm in networking and relationship-building. We also gained valuable insights into prompt engineering, real-time AI processing, and the importance of seamless integration across the tech stack. ## What's next for Aligned.ai The potential for Aligned.ai is vast. Moving forward, we plan to expand the action space of personality embedding and vector based search, integrating more powerful features to enhance the matchmaking process. We aim to refine the AI's ability to simulate even more nuanced human interactions and to improve the accuracy of our personality embeddings. As we continue to develop Aligned.ai, our goal is to make it the go-to platform for building meaningful, long-term relationships in the startup ecosystem.
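To make the embed-and-search path concrete, here is a skeletal Python sketch with Cohere's Embed API and a Chroma collection; the model name and collection layout are assumptions, not necessarily what Aligned.ai ships:

```python
# Skeleton of the profile search path: embed conversation text with Cohere, store it
# in a Chroma collection, then query by semantic similarity.
import cohere
import chromadb

co = cohere.Client("YOUR_API_KEY")
collection = chromadb.Client().create_collection("personality_profiles")

def add_profile(user_id, conversation_text):
    emb = co.embed(texts=[conversation_text], model="embed-english-v3.0",
                   input_type="search_document").embeddings[0]
    collection.add(ids=[user_id], embeddings=[emb], documents=[conversation_text])

def find_matches(query, k=5):
    q = co.embed(texts=[query], model="embed-english-v3.0",
                 input_type="search_query").embeddings[0]
    # Returns the k most similar stored profiles, best match first
    return collection.query(query_embeddings=[q], n_results=k)
```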
## A bit about our thought process... If you're like us, you might spend over 4 hours a day watching *Tiktok* or just browsing *Instagram*. After such a bender you generally feel pretty useless or even pretty sad, as you can see everyone having so much fun while you have just been on your own. That's why we came up with a healthy social media network, where you directly interact with other people that are going through similar problems as you so you can work together. The network itself also comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to! ## What does it even do It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options: **1)** You can join private discussions based around the mood that you're currently in; here you can interact completely as yourself, as it is anonymous. As well, if you don't like the person, they don't have any way of contacting you and you can just refresh away! **2)** You can join group discussions about hobbies that you might have and meet interesting people that you can then send private messages to! All the discussions are also being supervised, using our machine learning algorithms, to make sure that no one is being picked on. ## The Fun Part Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node**, and was passed into **Socket.io**. Through over 700 lines of **JavaScript** code, we were able to create multiple chat rooms and lots of different analytics. One thing that was really annoying was storing data both on **Firebase** and locally on **Node.js** so that we could do analytics while also sending messages at a fast rate! There are tons of other things that we did, but as you can tell my **handwriting sucks....** So please instead watch the YouTube video that we created! ## What we learned We learned how important and powerful social communication can be. We realized that being able to talk to others, especially during a tough time like a pandemic, can make a huge positive social impact on both ourselves and others. Even when checking in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
## Inspiration Have you ever been lying on the bed and simply cannot fall asleep? Statistics show that listening to soothing music while getting to sleep improves sleeping quality because the music helps to release stress and anxiety accumulated from the day. However, do you really want to get off your cozy little bed to turn off the music, or you wanna keep it playing for the entire night? Neither sounds good, right? ## What it does We designed a sleep helper: SleepyHead! This product would primarily function as a music player, but with a key difference: it would be designed specifically to help users fall asleep. SleepyHead connects with an audio device that plays a selection of soft music. It is connected to three sensors that detect the condition of acceleration, sound, and light in the environment. Once SleepyHead detects the user has fallen asleep, it will tell the audio device to get into sleeping mode so that it wouldn’t disrupt the user's sleep. ## How we built it The following are the main components of our product: Sensors: We implement three sensors in SleepyHead: Accelerator to detect movement of the user Sound detector to detect sound made by the user Timer: Give a 20 mins time loop. The data collected by the sensor will be generated every 20 mins. If no activity is detected, SleepyHead tells the Audio device to enter sleeping mode. If activity is detected, SleepyHead will start another 2o min loop. Microcontroller board: We use Arduino Uno as the processor. It is the motherboard of SleepyHead. It connects all the above-mentioned elements and process algorithms. Audio device: will be connected with SleepyHead by Bluetooth. ## Challenges we ran into We had this idea of the sleepyhead to create a sleep speaker that could play soft music to help people fall asleep, and even detect when they were sleeping deeply to turn off the music automatically. But as when we started working on the project, we realized our team didn't get all the electrical equipment and kits we needed to build it. Unfortunately, some of the supplies we had were too old to use, and some of them weren't working properly. It was frustrating to deal with these obstacles, but we didn't want to give up on my idea. As a novice in connecting real products and C++, we struggled to connect all the wires and jumpers to the circuit board, and figuring out how to use the coding language C++ to control all the kits properly and make sure they functioned well was challenging. However, our crew didn't let the difficulties discourage us. With some help and lots of effort, we eventually overcame all the challenges and made our sleepyhead a reality. Now, it feels amazing to see people using it to improve their sleep and overall health. ## Accomplishments that we're proud of One of the most outstanding aspects of the Sleepyhead is that it’s able to use an accelerometer and a sound detector to detect user activity, and therefore decide if the user has fallen asleep based on the data. Also, coupled with its ability to display the time and activate a soft LED light, it’s a stylish and functional addition to any bedroom. These features, combined with its ability to promote healthy sleep habits, make Sleepyhead a truly outstanding and innovative product. ## What we learned Overall, there are many things that we have learned through this hackathon. First, We learned how to discuss the ideas and thoughts with each member more effectively. 
In addition, we learned how to put our knowledge into practice and turn it into an actual, real-life product. Finally, we came to understand through the hackathon how useful and essential good innovation can be; choosing the right direction can save you tons of time. ## What's next for SleepyHead SleepyHead's next goals are: * **Gather more feedback from users.** This feedback can help us determine which functions are feasible and which are not, and can provide insights on what changes or improvements need to be made to SleepyHead. * **Conduct research.** Research can help us identify trends and gaps in our future potential market. * **Iterate and test.** Once we have a prototype of the product, it is important to iterate and test it to see how it performs in the real world. * **Stay up to date with industry trends.** It's important for us to stay on top of industry trends and emerging technologies, as this can provide SleepyHead with new ideas and insights to improve the product.
partial
## Inspiration We had a chat with the engineers at the Jump booth and brainstormed some ideas, particularly around building an explorer for Wormhole cross-chain events. Additionally, since Wormhole has a limit on the amount of value allowed to be bridged in a given time (an additional protection added after the hack earlier this year), we wanted to create an estimator for the likelihood that a transaction would succeed. This is important especially for cross-chain arbitrage, since transactions that are not successfully approved could become locked up in Wormhole for 24 hours. ## What it does Our project aims to provide greater insight into underlying Wormhole events and help users estimate the likelihood of their transaction succeeding. Using a spy relay and the public Guardian APIs, we created a system that displays both a real-time tally of Guardian signatures and a success estimator for transactions that takes in a chain ID and notional value. The real-time graph displays a rolling-window live tally of the Guardians that have signed the past 1000 VAAs. This way, users can easily visualize network reliability and see that the Guardians with little to no recent signatures are probably offline. Additionally, by inputting a notional amount and chain ID, users can also estimate whether their transaction will be successful based on Guardian availability and the single-transaction notional amount. ## How we built it We started by building an MVP for the transaction success estimator and concurrently worked on getting Guardian data feeds directly from the gossip network. We ran a spy relay in a Digital Ocean droplet, which we used to provide a live VAA broadcast feed to our backend via RPC. Our backend then provides the frontend, which we built in React, with a REST endpoint to query the latest VAA signature counts on a rolling-window basis. The information required for our transaction success estimator was pulled in from the public endpoints of the available Guardians. ## Challenges we ran into Cross-chain Wormhole transactions must be signed off by 13 of 19 "guardians" to be considered valid. Thus, we needed information from all 19 guardians to reliably predict whether or not cross-chain transactions will fail. However, only 7 of the 19 guardians provide public APIs, so the rest needed to be collected directly from the gossip network that the guardians broadcast to. This posed a significant challenge for us, since the current "spy" relay implementation supports only listening for VAAs (Verified Action Approvals) and not the Heartbeat events that we were interested in. We compromised by using only the publicly available APIs for the transaction success estimator and supplementing it with live feeds from the VAA spy relay. ## Future We managed to get something useful (maybe?) out there for everyone using the Wormhole bridge, and we managed to sneak in another hack overnight, even while running into issues with the lack of documentation on both the Wormhole and Guardian code. In the future, we plan on adding additional functionality to the network if we see people use the MVP. There are things like mempool monitoring and gossip-network listening that could be done if there were more time.
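To make the rolling-window tally concrete, here is a minimal Python sketch of the bookkeeping the backend does before exposing the counts through its REST endpoint. The class and method names are illustrative only, not our actual code; the real feed arrives from the spy relay over RPC.

```python
from collections import Counter, deque

WINDOW = 1000  # rolling window of the most recent VAAs

class SignatureTally:
    """Keeps a rolling tally of which guardian indices signed the last N VAAs."""

    def __init__(self, window: int = WINDOW):
        self.window = window
        self.recent = deque()       # one entry per VAA: the set of signer indices
        self.counts = Counter()     # guardian index -> signatures within the window

    def add_vaa(self, signer_indices):
        """Record the signer set of a newly observed VAA and expire old ones."""
        signers = set(signer_indices)
        self.recent.append(signers)
        self.counts.update(signers)
        if len(self.recent) > self.window:
            self.counts.subtract(self.recent.popleft())

    def snapshot(self):
        """Counts for all 19 guardians; ones that look offline stay at 0."""
        return {i: self.counts.get(i, 0) for i in range(19)}

# Example: feed in two fake VAAs and read back the tally.
tally = SignatureTally()
tally.add_vaa([0, 1, 2, 5, 7, 8, 9, 11, 12, 13, 14, 15, 18])
tally.add_vaa([0, 1, 2, 5, 7, 8, 9, 11, 12, 13])
print(tally.snapshot())
```

The frontend polls the snapshot on an interval and renders it as the live Guardian graph.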
#### For Evaluators/Sponsors Scroll down for a handy guide to navigating our repository and our project's assets. ## 💥 - How it all started Cancer is a disease that has affected our team quite directly. All of our team members know a relative or loved one who has endured or lost their life due to cancer. This makes us incredibly passionate about wanting to improve cancer patient care. We identified a common thread of roadblocks that our loved ones went through during their journey through their diagnosis/treatment/etc: * **Diagnosis and Staging:** Properly diagnosing the type and stage of cancer is essential for determining the most appropriate treatment plan. * **Treatment Options:** There are many different types of cancer, and treatment options can vary widely. Selecting the most effective and appropriate treatment for an individual patient can be challenging. * **Multidisciplinary Care:** Coordinating care among various healthcare professionals, including oncologists, surgeons, radiologists, nurses, and others, can be complex but is necessary for comprehensive cancer care. * **Communication:** Effective communication among patients, their families, and healthcare providers is crucial for making informed decisions and ensuring the patient's needs and preferences are met. ## 📖 - What it does We built Cancer360° to create a novel, multimodal approach towards detecting and predicting lung cancer. We synthesized four modes of data: quantitative (think demographics, patient history), images (of lung CT scans), text (collected by an interactive chatbot), and physical measurements (via the ZeppOS Smartwatch), using deep learning frameworks and large language models to compute a holistic metric for a patient's likelihood of lung cancer. Through this data-driven approach, we aim to address what we view as "The BFSR": The 'Big Four' of Surmountable Roadblocks: * **Diagnosis:** Our diagnosis system is truly multimodal through our 4 modes: quantitative (uses risk factors, family history, demographics), qualitative (analysis of medical records like CT Scans), physical measurements (through our Zepp OS App), and our AI Nurse. * **Treatment Options:** Our nurse can suggest multiple roadmaps of treatment options that patients could consider. For accessibility and ease of understanding, we created an equivalent to Google's featured snippets when our nurse mentions treatment options or types of treatment. * **Multidisciplinary Care:** Cancer360° has been built to be a digital aid that bridges the gaps between the automated and manual aspects of cancer treatment. Our system prompts patients to enter relevant information for our nurse to analyze and distribute what's important to healthcare professionals. * **Communication:** This is a major need for patients and families on the road to recovery. Cancer360°'s AI nurse accomplishes this through our emotionally-sensitive responses and clear, instant communication with patients who input their information, vitals, and symptoms.
## 🔧 - How we built it To build our Quantitative Mode, we used the following: * **numpy**: for general math and numpy.array * **Pandas**: for data processing and storage * **SKLearn**: for machine learning (train\_test\_split, classification\_report) * **XGBoost**: extreme gradient boosted decision trees To build our Qualitative Mode, we used the following: * **OpenCV** and **PIL** (Python Imaging Library): For Working With Image Data * **Matplotlib** and **Seaborn**: For Scientific Plotting * **Keras**: Image Data Augmentation (think rotating and zooming in), Model Optimizations (Reduce Learning Rate On Plateau) * **Tensorflow**: For the Convolutional Neural Network (CNN) To build our AI Nurse, we used the following: * **Together.ai:** We built our chatbot with the Llama 2 LLM API and used tree-of-thought prompt engineering to optimize our query responses To build the portal, we used the following: * **Reflex:** We utilized the Reflex platform to build our entire frontend and backend, along with all interactive elements. We used front-end components such as forms, buttons, progress bars, and more. More importantly, Reflex enabled us to integrate Python-native applications, like the machine learning models from our quantitative and qualitative modes and our AI Nurse, directly into the backend. ## 📒 - The Efficacy of our Models **With Quantitative/Tabular Data:** We collected quantitative data on patient demographics, risk factors, and history (in the form of text, numbers, and binary/boolean values). We used a simple keyword search algorithm to identify risk keywords like “Smoking” and “Wheezing” to transform the text into quantitative data. Then we aggregated all data into a single Pandas dataframe and applied one-hot encoding to categorical variables like gender. We then used SKLearn to create an 80-20 train-test split and tested various models via the SKLearn library, including Logistic Regression, Random Forest, SVM, and K Nearest Neighbors. Ultimately, XGBoost performed best, with the highest accuracy of 98.39%, within a reasonable 16-hour timeframe. Our training dataset was used in a research paper and can be accessed [here.](https://www.kaggle.com/datasets/nancyalaswad90/lung-cancer) This high accuracy speaks to the reliability of our model. However, it's essential to remain vigilant against overfitting and conduct thorough validation to ensure its generalizability, a testament to our commitment to both performance and robustness. [View our classification report here](https://imgur.com/a/YAvXwyk) **With Image Data:** Our solution is well-equipped to handle complex medical imaging tasks. Using data from the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset, and deep learning frameworks from TensorFlow and Keras, we were able to build a convolutional neural network to classify patient CT scans as malignant or benign. Our convolutional neural network was fine-tuned for binary image classification of 512x512 RGB images, with multiple convolutional, max-pooling, normalization, and dense layers, compiled using the Adam optimizer and binary crossentropy loss. Using OpenCV, PIL, Matplotlib, and NumPy along the way, we achieved a commendable 93% accuracy over a 20-hour timeframe. The utilization of dedicated hardware resources, such as Intel Developer Cloud with TensorFlow GPU, accelerates processing by 24 times compared to standard hardware.
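As a rough illustration of a network in this style (the layer counts and filter sizes below are placeholders, not our exact architecture), a binary CT-scan classifier can be assembled in Keras like so:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_ct_classifier(input_shape=(512, 512, 3)):
    """Small CNN for benign-vs-malignant CT scan classification (illustrative sizes)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of malignancy
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_ct_classifier()
model.summary()
```

The production model additionally used Keras data augmentation and Reduce Learning Rate On Plateau, as noted above.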
While these results signify our technological edge, it's important to acknowledge that image classification accuracy can vary based on data quality and diversity, making the 93% accuracy an achievement that underscores our commitment to delivering high-quality results. [Malignant CT Scan](https://imgur.com/a/8oGYz71) [Benign CT Scan](https://imgur.com/a/3X3zb7k) **AI Nurse:** The AI Nurse powered by Together.ai and LLMs (such as Llama 2) introduces an innovative approach to patient interaction and risk factor determination. Generating "trees of thoughts" showcases our ability to harness large language models for effective communication. Combining multiple AI models to determine risk factor percentages for lung cancer demonstrates our holistic approach to healthcare support. However, it's essential to acknowledge that the efficacy of this solution is contingent on the quality of language understanding, data processing, and the integration of AI models, reflecting our dedication to continuous improvement and fine-tuning. ## 🚩 - Challenges we ran into * Challenges we fixed: * Loading our neural network model into the Reflex backend. After using Keras to save the model with a “.h5” extension, we were able to load and run the model locally in a Jupyter notebook; however, when we tried to load it in the Reflex backend, we kept getting a strange Adam optimizer build error. We tried everything: saving the model weights separately, using different file extensions like .keras, and even saving the model as a .json file. Eventually, we realized that this was a [known issue with M1/M2 Macs and TensorFlow](https://github.com/tensorflow/tensorflow/issues/61915) * Fixed the Get Started button in the Reflex header (Issue: the button wouldn’t scale to match the text length) - Moved the button outside the inner hstack, but kept it inside the outer hstack * Integrating the Together.ai chatbot model into Reflex: a lot of our time was spent trying to get the integration working. * Challenges we didn’t fix: + Left-aligning the AI response and right-aligning the user input in the chatbot + Fine-tuning a second model to predict lung cancer rate from the chatbot responses of the first model - Could not get enough training data, it was too computationally taxing, and few-shot learning did not produce results + Fixing bugs related to running a virtual JavaScript environment within Python via PyV8 ## 🏆 - Accomplishments that we're proud of * Going from idea generation to working prototype, with integration of 4 data modalities - Quantitative Mode, Qualitative Mode, our AI Nurse, and Smartwatch Data - within the span of less than two days * Integrating machine learning models and large language models within our application in a way that is directly accessible to users * Learning a completely new web development framework (Reflex) from scratch without extensive documentation or ChatGPT knowledge * Working seamlessly as a team and taking advantage of the component-centered nature of Reflex to work both independently and together ## 📝 - What we learned * Ameya: "I was fortunate enough to learn a lot about frameworks like Reflex and Together.ai." * Marcus: "Using Reflex and learning its components to integrate backend and frontend seamlessly." * Timothy: "I realized how I could leverage Reflex, Intel Developer Cloud, Together.ai, and Zepp Health to empower me in developing with cutting edge technologies like LLMs and deep learning models."
* Alex: "I learned a lot of front end development skills with Reflex that I otherwise wouldn’t have learned as a primarily back-end person." ## ✈️ - What's next for Cancer360° Just like how a great trip has a great itinerary, we envision Cancer360°'s future plans in phases. #### Phase 1: Solidifying our Roots Phase 1 involves the following goals: * Revamping our user interface to be more in line with our mockups * Increasing connectivity with healthcare professionals #### Phase 2: Branching Out View the gallery to see this. Phase 2 involves the following goals: * Creating a mobile app for iOS and Android of this service * Furthering development of our models to detect and analyze other types of cancers and create branches of approaches depending on the cancer * Completing our integration of the physical tracker on Zepp OS #### Phase 3: Big Leagues Phase 3 involves the following goals: * Expanding accessibility of the app by having our services be available in numerous different languages * Working with healthcare institutions to further improve the usability of the suite ## 📋 - Evaluator's Guide to Cancer360° ##### Intended for judges; however, the viewing public is welcome to take a look. Hey! We wanted to make this guide to provide further information on our implementations of certain programs and give a more in-depth look for both the viewing audience and evaluators like yourself. #### Sponsor Services We Have Used This Hackathon ##### Reflex The founders (Nikhil and Alex) were not only eager to assist but also highly receptive to our feedback, contributing significantly to our project's success. In our project, we made extensive use of Reflex for various aspects: * **Project Organization and Hosting:** We hosted our website on Reflex, utilizing their component-state filesystem for seamless project organization. * **Frontend:** We relied on Reflex components to render everything visible on our website, encompassing graphics, buttons, forms, and more. * **Backend:** Reflex states played a crucial role in our project by facilitating data storage and manipulation across our components. In this backend implementation, we seamlessly integrated our website features, including the chatbot, machine learning model, Zepp integration, and X-ray scan model. ##### Together AI In our project, Together AI played a pivotal role in enhancing various aspects: * **Cloud Service:** We harnessed the robust capabilities of Together AI's cloud services to host, run, and fine-tune Llama 2, a large language model developed by Meta, featuring an impressive 70 billion parameters. To ensure seamless testing, we evaluated more than ten different chat and language models from various companies. This was made possible thanks to Together AI's commitment to hosting over 30 models on a single platform. * **Integration:** We seamlessly integrated Together AI's feature set into our web app, combined with Reflex, to deliver a cohesive user experience. * **Tuning:** Leveraging Together AI's user-friendly hyperparameter control and prompt engineering, we optimized our AI nurse model for peak performance. As a result, our AI nurse consistently generated the desired outputs at an accelerated rate, surpassing default performance levels, all without the need for extensive tuning or prompt engineering. ##### Intel Developer Cloud Our project would not have been possible without the massive computing power of Intel cloud computers.
For reference, [here is the CNN training time on my local computer.](https://imgur.com/a/rfYlVro) And here is the [CNN training time on my Intel® Xeon® 4th Gen Scalable processor virtual compute environment with TensorFlow GPU.](https://imgur.com/a/h3ctSPY) A remarkable 20x speedup! This huge leap in compute speed, empowered by Intel® cloud computing, enabled us to re-train our models with lightning speed as we debugged them and worked to integrate them into our backend. It also made fine-tuning our model much easier, as we could tweak the hyperparameters and see their effects on model performance within the span of minutes. ##### Zepp Health We utilized the ZeppOS API to query real-time user data for calories burned, fat burned, blood oxygen, and PAI (Personal Activity Index). We set up a PyV8 virtual JavaScript environment to run JavaScript code within Python in order to integrate the ZeppOS API into our application. Using data collected from the API, we ran an ensemble algorithm to compute a metric evaluating patient health, which ultimately feeds into our algorithm for estimating a patient's risk of lung cancer. ##### GitHub We created a GitHub repository to host our hackathon project's code. We also ensured that our use of GitHub stood out with a detailed ReadMe page, meaningful pull requests, and a collaboration history, showcasing our dedication to improving cancer patient care through Cancer360°. We leveraged GitHub not only for code hosting but also as a platform to collaborate, push code, and receive feedback. ##### .Tech Domains We harnessed the potential of a .tech domain to visually embody our vision for Cancer360°, taking a step beyond traditional domains. By registering the website helpingcancerpatientswith.tech, we not only conveyed our commitment to innovative technology but also made a memorable online statement that reflects our dedication to improving the lives of cancer patients.
[Project Link](http://blockchainvisualization.github.io) ## Inspiration I've always been very interested in different ways to represent data. When I found out that blockchain offered a websocket and data API which could give real-time bitcoin data I knew I had to do something with it. ## What it does The blockchain visualization map utilizes real-time bitcoin transaction data. This data is parsed and analyzed to create a map of active bitcoin nodes throughout the world while also accumulating statistics such as transactions per second and largest recent transaction amount. ## How I built it I used Bootstrap/HTML for my front end and Javascript/node.js for my back end. Most of the app's processing is on the client-side javascript so the load on the server is very minimal. I used node.js for forwarding certain API requests that the client-side javascript couldn't handle too well. The map is powered by the Google Maps API and the IP to latitude/longitude transformation is powered by the [ipinfo.io](http://ipinfo.io) API. ## Challenges I ran into The biggest challenge I ran into was getting all the different APIs to cooperate together. I ended up having to move one of the APIs to a node.js server to get everything to work. ## Accomplishments that I'm proud of I'm proud of how polished the "final" product looks. I don't have much front end experience, so to see something come together that is both functional and aesthetically pleasing is very rewarding. ## What I learned Callbacks, usually caused by the various APIs, made it a lot harder for me to maintain a nice flow throughout my app. I had to change a lot of my usual coding practices to better fit the languages and context I was coding in. ## What's next for Blockchain Visualization Map My first priority would be to expand upon the statistics sidebar. The amount of raw data that the blockchain API provides is perfect for data visualization and statistics. Hopefully in the future this web app will be able to serve as an educational resource for anyone interested in bitcoin.
winning
## Inspiration Physiotherapy is expensive for what it provides you with. A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution, and technology exists to help people in need of medical aid who don't have the time and money to see a real therapist every week. ## What it does IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they perform simple arm exercises for recovery. A 3D interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application. ## How WE built it On the embedded system side of things, we used a single Raspberry Pi for all the sensor processing. The Pi is in charge of interfacing with one IMU, while an Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device, where it displays the 3D interactive GUI and processes the ML data. All the data in the backend is relayed and managed using ROS. This data is then uploaded to Firebase, where the information is saved in the cloud and can be accessed anytime from a smartphone. Firebase also handles the plotted data, giving accurate numerical feedback on values such as orientation, trajectory, and improvement over time. ## Challenges WE ran into Hooking up two IMUs to the same RPi is very difficult. We attempted to create a multiplexer system with little luck. To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult. Another challenge we ran into was creating training data that was general enough, and creating a preprocessing script that was able to overcome the variable-size input data issue. The last one was setting up a Firebase connection with the app that could support the high data volume we were sending over, and creating a graphing mechanism that is meaningful.
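Since ROS manages the data relay in the backend, the relay node boils down to something like the following Python sketch. The topic name and the `read_sample` helper are illustrative stand-ins, not the team's actual code.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Imu

def publish_imu_readings(read_sample):
    """Publish IMU samples at 50 Hz.

    read_sample() stands in for whatever driver call returns the latest
    (orientation quaternion, angular velocity, linear acceleration) tuple.
    """
    rospy.init_node("arm_imu_relay")
    pub = rospy.Publisher("/physio/imu", Imu, queue_size=10)
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        quat, gyro, accel = read_sample()
        msg = Imu()
        msg.header.stamp = rospy.Time.now()
        msg.orientation.x, msg.orientation.y, msg.orientation.z, msg.orientation.w = quat
        msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gyro
        msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = accel
        pub.publish(msg)
        rate.sleep()
```

The GUI and the classifier subscribe to the same topic, so they stay in sync with the device without any extra plumbing.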
## Inspiration Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands and are inaccessible. We wanted something that was modular, that we could 3D print, and that we could make open source for others to use. ## What it does and how we built it The main component is a bionic hand assistant called The PulseGrip. We used an ECG sensor in order to detect electrical signals. When it detects that your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it will loosen your hand back to a natural resting position. Along with this, it continuously sends a signal through websockets to our Amazon EC2 server and game. This is stored in a MongoDB database, and using API requests we can communicate between our games, the server, and the PulseGrip. We can track live motor speed, angles, and whether it's open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment. We have one to test for continuous holding and another for rapid inputs; this could be used in recovery as well. ## Challenges we ran into This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead; this is designed for heartbeats, but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshot for an hour, looking at the Wifi, the device itself, and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for a mentor's help and reset the device completely; after using different libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem. ## Accomplishments that we're proud of Firstly, just getting the device working was a huge achievement, as we had so many setbacks and times we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D-printed materials, sensors, and complicated algorithms. But we also have a game side, with 2 (questionably original) games that can be used. They are not just random games, but ones that test the user in 2 different ways that are critical to using the device: short bursts and long-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device.
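For a sense of what the grip-detection loop described above does, here is a stripped-down, hypothetical Python sketch; the thresholds, angles, and helper functions are made up for illustration, and the real device's firmware differs.

```python
import time

CLOSE_THRESHOLD = 0.6               # normalized signal level treated as "trying to close" (made-up)
OPEN_ANGLE, CLOSED_ANGLE = 10, 150  # servo angles for resting and gripping positions (made-up)

def control_loop(read_signal, set_servo_angle, send_status):
    """Poll the muscle signal, drive the servo, and report state upstream.

    read_signal, set_servo_angle and send_status are stand-ins for the real
    sensor, servo and websocket interfaces on the device.
    """
    while True:
        level = read_signal()                       # 0.0 .. 1.0
        closing = level >= CLOSE_THRESHOLD
        angle = CLOSED_ANGLE if closing else OPEN_ANGLE
        set_servo_angle(angle)
        send_status({"closed": closing, "angle": angle, "signal": level})
        time.sleep(0.05)                            # ~20 Hz update rate
```

The same status payload is what the games and the live-stats dashboard consume on the server side.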
## What's next for PulseGrip * Working to improve the sensors, adding more games, and seeing how we can help people. We think this project has a ton of potential, and we can't wait to see what we can do with the ideas learned here. ## Check it out <https://hacks.pulsegrip.design> <https://github.com/PulseGrip>
## Inspiration Physiotherapy is an important part of rehabilitation for physical injuries. A physiotherapist helps restore a patient’s mobility and prevents future injuries. However, physiotherapy can be too expensive and time-consuming for patients to repeatedly attend sessions at the clinic. Also, because of the pandemic, many Canadians were not able to regularly attend physiotherapy sessions because of limited service. Enter Smart Heal! A low-cost, portable, and easy-to-use personal device that acts as your own virtual physiotherapist. ## What it does Smart Heal gives you instructions and personally guides you through physiotherapy exercises in a fun and intuitive manner from the comfort of your home. The system consists of a Raspberry Pi and an IMU sensor which is used to track the movement and orientation of any injured body part. Smart Heal also comes with a desktop app which I made to render a user’s current wrist orientation in a 3D virtual environment in real time. The exercises are gamified to enhance the user experience. For example, I provide virtual targets that a user can try to hit, and by doing so they end up moving their wrist into a proper exercise position. With your wrist, you can pretend you are controlling an airplane through different maneuvers that actually help exercise your wrist. All while being fun for all ages! ## How we built it The Raspberry Pi is the central hub, responsible for processing all sensor data. The Pi interfaces with the IMU and runs a Kalman filter. The Kalman filter, which I specifically tuned, makes sure that the raw IMU data is as smooth as can be. With the filter turned on, jerky hand movements are stabilized in the 3D visualizer. The desktop visualizer app is built to display the IMU orientation feedback in the virtual environment. Once the desktop app is initialized, it will automatically seek the Smart Heal device (Raspberry Pi) on the home Wi-Fi network and connect to the IMU data stream. This communication backend is handled by ROS (Robot Operating System). Once the IMU data stream is received, the app will run through each required step of a specific wrist exercise (updating text instructions and colours for the targets).
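The filter running on the Pi is tuned for full 3D orientation, but the core predict-and-blend idea can be sketched in one dimension; the noise values below are illustrative, not the tuned parameters.

```python
class SimpleKalman1D:
    """One-dimensional Kalman filter for smoothing a noisy angle reading."""

    def __init__(self, q=0.01, r=0.5):
        self.q = q          # process noise: how fast we expect the true angle to drift
        self.r = r          # measurement noise: how noisy the raw IMU reading is
        self.x = 0.0        # current estimate
        self.p = 1.0        # estimate uncertainty

    def update(self, measurement):
        # Predict: the angle is assumed to stay put, but uncertainty grows.
        self.p += self.q
        # Update: blend the prediction with the new measurement.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (measurement - self.x)
        self.p *= (1.0 - k)
        return self.x

# Example: smooth a jittery stream of wrist-roll angles.
kf = SimpleKalman1D()
for raw in [0.0, 2.1, -1.5, 30.0, 31.2, 29.8]:
    print(round(kf.update(raw), 2))
```

Raising `r` relative to `q` is what damps the jerky hand movements mentioned above, at the cost of a slightly laggier visualizer.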
winning
## Inspiration (<http://televisedrevolution.com/wp-content/uploads/2015/08/mr_robot.jpg>) If you watch Mr. Robot, then you know that the main character, Elliot, deals with some pretty serious mental health issues. One of his therapeutic techniques is to write his thoughts in a private journal. They're great... they get your feelings out and act as a point of reference to look back to in the future. We took the best parts of what makes a diary/journal great, and made it just a little bit better - with Indico. In short, we help track your mental health similar to how FitBit tracks your physical health. As you write journal entries in our app, we automagically parse through them, record your emotional state at that point in time, and keep an archive of the posts to aggregate a clear mental profile. ## What it does This is a FitBit for your brain. As you record entries about your life in the private journal, the app anonymously sends the data to Indico and parses for personality, emotional state, keywords, and overall sentiment. It requires 0 effort on the user's part, and over time, we can generate an accurate picture of your overall mental state. Each post automatically embeds its strongest emotional state so you can easily find / read posts that evoke a certain feeling (joy, sadness, anger, fear, surprise). We also have an analytics dashboard that further analyzes the person's long-term emotional state. We believe being cognizant of one's own mental health is much harder than, and just as important as, being aware of one's physical health. A long-term view of their emotional state can help the user detect sudden changes in the baseline, or seek out help & support long before the situation becomes dire. ## How we built it The backend is built on a simple Express server on top of Node.js. We chose React and Redux for the client, due to their strong unidirectional data flow capabilities, as well as the component-based architecture (we're big fans of css-modules). Additionally, the strong suite of Redux middlewares and libraries, such as sagas (for side-effects), ImmutableJS, and reselect, helped us scaffold out a solid, stable application in just one day. ## Challenges we ran into Functional programming is hard. It doesn't have any of the magic that two-way data-binding frameworks come with, such as MeteorJS or AngularJS. Of course, we made the decision to use React/Redux being aware of this. When you're hacking away, code can become messy. Functional programming can at least prevent some common mistakes that often make a hackathon project completely unusable post-hackathon. Another challenge was the persistence layer for our application. Originally, we wanted to use MongoDB, due to our familiarity with its setup process. However, to speed things up, we decided to use Firebase. In hindsight, it may have caused us more trouble, since none of us had ever used Firebase before. However, learning is always part of the process and we're very glad to have learned even the prototyping basics of Firebase. ## Accomplishments that we're proud of * Fully Persistent Data with Firebase * A REAL, WORKING app (not a mockup, or just the UI build): we were able to have CRUD fully working, as well as the logic for processing the various data charts in analytics. * A sweet UI with some snazzy animations * Being able to do all this while having a TON of fun. ## What we learned * Indico is actually really cool and easy to use (not just trying to win points here).
It's not always 100% accurate, but building something like this without Indico would be extremely difficult, and similar APIs I've tried are not close to being as easy to integrate. * React, Redux, Node. A few members of the team learned the expansive stack in just a few days. They're not experts by any means, but they definitely were able to grasp concepts very fast due to the fact that we didn't stop pushing code to GitHub. ## What's next for Reflect: Journal + Indico to track your Mental Health Our goal is to make the backend algorithms a bit more rigorous, add simple authentication, and launch this app, consumer facing. We think there's a lot of potential in this app, and there's very little competition in this space (actually, none that we could find).
## Inspiration Over the summer, one of us was reading about climate change, but he realised that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video that talked about the hope that exists in that sphere, and he realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it. ## What it does We want to bring more positive news into people’s lives, given that we’ve seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels. The idea is to maintain a score of how much negative content a user reads (detected using Cohere), and once it passes a certain threshold (we store the scores using CockroachDB) we show them a positive news article in the same topic area that they were reading about. We do this with text analysis, using a Chrome extension front-end and a Flask + CockroachDB backend that uses Cohere for natural language processing. Since a lot of people also listen to news via video, we also created a part of our Chrome extension to transcribe audio to text - so we included that at the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the Chrome extension tells the user that it’s time for some good news and suggests a relevant article. ## How we built it **Frontend** We used a Chrome extension for the front end, which included dealing with the user experience and making sure that our application actually gets the attention of the user while being useful. We used React.js, HTML and CSS to handle this. There were also a lot of API calls, because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend. **Backend** ## Challenges we ran into It was really hard to make the Chrome extension work because of a lot of security constraints that websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Also, figuring out the overall structure and the flow of the program was a challenging task, but we were able to achieve it. ## Accomplishments that we're proud of 1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment 2) (co:here) Developed a high-performing classification model to classify news articles by topic 3) Spun up a CockroachDB node and client and used it to store all of our classification data 4) Added support for multiple users of the extension that can leverage the use of CockroachDB's relational schema. 5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content. 6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding. ## What we learned 1) We learned a lot about how to use CockroachDB to create a database of news articles and topics that also supports multiple users 2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and fine-tuning the model using ensembles to get optimal results for our use case. ## What's next for goodNews 1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing CockroachDB tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not. 2) We also would like to fine-tune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports, etc.) and show a related positive article in the same category. Given more time, we would want to provide positive article suggestions that are more semantically similar to what the user is reading. We could use Cohere or other large language models to potentially explore that.
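A stripped-down sketch of what the scoring endpoint could look like is below; the route name, in-memory score store, and keyword-based classifier are stand-ins for the real fine-tuned Cohere classifier and CockroachDB tables.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

NEGATIVITY_THRESHOLD = 5.0   # made-up threshold before we suggest good news
user_scores = {}             # stand-in for the CockroachDB table keyed by user id

def classify_negativity(text):
    """Stand-in for the fine-tuned Cohere classifier: returns a score in [0, 1]."""
    negative_words = {"war", "crisis", "death", "disaster"}
    words = text.lower().split()
    return min(1.0, sum(w in negative_words for w in words) / 10)

@app.route("/score", methods=["POST"])
def score_page():
    payload = request.get_json()
    user_id, text = payload["user_id"], payload["text"]
    score = classify_negativity(text)
    user_scores[user_id] = user_scores.get(user_id, 0.0) + score
    return jsonify({
        "negativity": score,
        "running_total": user_scores[user_id],
        "suggest_good_news": user_scores[user_id] >= NEGATIVITY_THRESHOLD,
    })

if __name__ == "__main__":
    app.run(port=5000)
```

The Chrome extension posts scraped (or transcribed) page text to this endpoint and shows the positive-article prompt whenever `suggest_good_news` comes back true.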
## 💡 Inspiration We got inspiration from our back-end developer Minh. He mentioned that he was interested in the idea of an app that helped people record their positive progress and showcase their accomplishments. This then led our product/UX designer Jenny to think about what problem this app would target and what kind of solution it would offer. From our research, we came to the conclusion that quantity-over-quality social media use resulted in people feeling less accomplished and more anxious. As a solution, we wanted to focus on an app that helps people stay focused on their own goals and accomplishments. ## ⚙ What it does Our app is a journalling app that has the user enter 2 journal entries a day: one in the morning and one in the evening. During these journal entries, it asks the user about their mood at the moment, generates an appropriate response based on their mood, and then asks questions that get the user to think about things such as gratitude, their plans for the day, and what advice they would give themselves. Our questions follow many of the common journalling practices. The second journal entry then follows a similar format of mood and questions, with a different set of questions to finish off the user's day. These help them reflect and look forward to the upcoming future. Our most powerful feature would be the AI that takes data such as emotions and keywords from answers and helps users generate journal summaries across weeks, months, and years. These summaries would then provide actionable steps the user could take to make self-improvements. ## 🔧 How we built it ### Product & UX * Online research, user interviews, stakeholder and competitor analysis, affinity mapping, and user flows. * Doing the research allowed our group to have a unified understanding of the app. ### 👩‍💻 Frontend * Used React.JS to design the website * Used Figma for prototyping the website ### 🔚 Backend * Flask, CockroachDB, and Cohere for the chat AI function. ## 💪 Challenges we ran into The challenge we ran into was the time limit. For this project, we invested most of our time in understanding the pain points in a very sensitive topic such as mental health and psychology. We truly wanted to identify and solve a meaningful challenge, so we had to sacrifice some portions of the project such as the front-end code implementation. Some team members were also working with developers for the first time, and it was a good learning experience for everyone to see how different roles come together and how we could improve next time. ## 🙌 Accomplishments that we're proud of Jenny, our team designer, did tons of research on the problem space, such as competitive analysis, research on similar products, and user interviews. We produced a high-fidelity prototype and were able to show the feasibility of the technology we built for this project. (Jenny: I am also very proud of everyone else who had the patience to listen to my views as a designer and be open-minded about what a final solution may look like. I think I'm very proud that we were able to build a good team together although the experience was relatively short over the weekend. I had personally never met the other two team members and the way we were able to have a vision together is something I think we should be proud of.) ## 📚 What we learned We learned that preparing some plans ahead of time would make it easier for developers and designers to get started next time.
However, the experience of starting from nothing and making a full project over 2 and a half days was great for learning. We learned a lot about how we think and approach work, not only as developers and designers, but as team members. ## 💭 What's next for budEjournal Next, we would like to test out budEjournal on some real users and make adjustments based on our findings. We would also like to spend more time building out the front-end.
winning
## Inspiration We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool. ## What it does AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures. The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch. ## How we built it In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts to map them to a dance move. Although this was successful with a few gestures, the method was not ideal for more complex gestures like the "shoot". We ended up training a convolutional neural network in Tensorflow with 1000 samples of each gesture, which worked better. The model works with 98% accuracy on the test data set. We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done with the use of dlib and OpenCV to detect facial features and map a static image over these features. ## Challenges we ran into We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time. It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in. ## Accomplishments that we're proud of It was one of our first experiences training an ML model for image recognition, and it's a lot more accurate than we had even expected. ## What we learned All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new! ## What's next for AirTunes The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
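A minimal sketch of the overlay step, assuming dlib's stock face detector and a hypothetical mask image (the real app also uses facial landmarks to anchor the overlay more precisely):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
mask = cv2.imread("superman_mask.png")      # hypothetical overlay image
if mask is None:
    raise SystemExit("overlay image not found")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        # Clamp the detected face box to the frame and paste the scaled mask over it.
        x1, y1 = max(face.left(), 0), max(face.top(), 0)
        x2 = min(face.right(), frame.shape[1])
        y2 = min(face.bottom(), frame.shape[0])
        if x2 > x1 and y2 > y1:
            frame[y1:y2, x1:x2] = cv2.resize(mask, (x2 - x1, y2 - y1))
    cv2.imshow("AirTunes overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```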
## Inspiration: We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world. ## What it does: Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, and each will produce different images for the same audio input. ## How we built it: Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by loading Firmata onto our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals. Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw module to correlate the drawing of certain shapes and colours with certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value recorded by the Arduino. We also decided to build a Figma-based prototype to present how our app would prompt the user for inputs and display the final output. ## Challenges we ran into: We are all beginner programmers, and we ran into a lot of information roadblocks where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with the Arduino in Python, getting the sound sensor to work, and learning how to work with the Pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input. ## Accomplishments that we're proud of We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning, as none of us had experience working with the sound sensor. Another accomplishment of ours was our Figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with Figma. ## What we learned We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner. ## What's next for Voltify Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front-end app design. The next step would be to combine them and streamline their connections.
Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future, we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece, and to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the Pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
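A minimal sketch of the current pipeline, assuming pyFirmata for the Arduino link and Pygame for drawing; the serial port name and the colour/size mapping are illustrative, not our exact settings.

```python
import time
import pygame
from pyfirmata import Arduino, util

# Port name is machine-specific; '/dev/ttyACM0' is just an example.
board = Arduino("/dev/ttyACM0")
util.Iterator(board).start()
mic = board.get_pin("a:2:i")        # analog pin 2, input (the sound sensor)

pygame.init()
screen = pygame.display.set_mode((800, 600))

readings = []
for _ in range(200):                # sample the sensor for a few seconds
    value = mic.read()              # normalized 0.0 .. 1.0, or None before the first report
    if value is not None:
        readings.append(value)
    time.sleep(0.02)

# Map each reading to a colour and circle size, one shape per sample.
for i, v in enumerate(readings):
    colour = (int(255 * v), 80, int(255 * (1 - v)))
    pos = (40 + (i * 24) % 720, 60 + (i * 24) // 720 * 60)
    pygame.draw.circle(screen, colour, pos, int(5 + 25 * v))
pygame.display.flip()

# Keep the window alive for a few seconds so the image can be seen.
for _ in range(50):
    pygame.event.pump()
    time.sleep(0.1)
pygame.quit()
board.exit()
```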
## Inspiration Just Dance was probably my favorite game as a kid. Not only because I got to listen to Psy a million times a day, but because I loved to run and hop and dance around. At this hackathon, it was an inspiration for us to try and build something similar for everyone: a pose-guiding app that helps people get active. ## What it does Yoga Yogi is the everyman’s yoga guide. Using pose recognition, Yoga Yogi recognizes your yoga poses and gives you individualized feedback on how to improve! Not only do we feature a login, but you can also design your very own yoga routine individualized to poses and duration goals. Once you are ready to start, Yoga Yogi will walk you through a guided yoga session, where you will receive live feedback and helpful tips. ## How we built it Our project connected multiple moving parts, including a React frontend, a pose recognition model, and a Flask backend. Flask was hugely important in our project, as it connected our Voiceflow and AI backend to our frontend React elements. Voiceflow was fun to use, as it was very easy to create user help agents as well as AI responses specific to programmed API calls. Collecting our data and training our model took several steps. To collect our data, we used mediapipe and cv2 to identify the major body landmarks of people in yoga poses and transformed these into graphs. We then used this dataset to train a graph neural network, which allowed us to classify pose images into distinct classes. Lastly, we used Voiceflow to generate customized feedback based on real-time video. ## Challenges we ran into Since this was our first time working together collaboratively on GitHub, we struggled quite a lot with figuring out what merge meant, what branches were, and whatever rebase meant. But after we got the hang of that, the next obstacle we faced was smoothly running the frontend and backend together, not because of technical difficulties, but because of our difficulties communicating with each other. All of us had unique skill sets, and we oftentimes struggled with communicating what our programs needed and outputted. This slowed us down a considerable amount, but despite this, we were able to make a project that was greater than the sum of our parts. ## Accomplishments that we're proud of Sean: As a returning hacker, I am very happy that this project was quite functional, and I was impressed with all the stuff we were able to fit in these few days utilizing the right planning, skills and even a bit of luck. I'm also glad that I played a crucial part in the project as well as being able to help the team in countless ways through my experience. I feel that we worked very well as a team and really appreciate HTN for giving me the chance to learn and develop another amazing project! Chris: I’m really happy with how simple and streamlined our UI is! It feels like a real app you would find online. This was the first time for me to work in a large group at a sprint pace, and although at some times I felt really tired, I really enjoyed my experience and would come again. Steven: I am really happy with the learning progress I have made during the 36 hours. I had almost no experience coming in with web design and with using React and other frameworks. I was able to learn so much on the spot and, reflecting on the past 2 days of work, I feel accomplished and thankful for this opportunity. I want to also thank my team members for helping me with so much and teaching me new pieces of coding knowledge that I would not have known.
Nirvan: I am very proud of the team effort and collaboration that we had the entire time. We worked really hard and supported each other whenever we had an error, particularly the ones between the React frontend and Flask backend. I am particularly proud of overcoming the issue of having multiple setIntervals in different components by passing props to a parent component and having one common useEffect to run all the setIntervals. ## What we learned A lot of front end and APIs! While the more technical parts are usually documented thoroughly, making a clean and satisfying frontend that we liked was much more creatively difficult than we thought. We each had our own different thoughts and ideas about what we liked and what we wanted to put in, but making a project we were all proud of meant that we had to compromise on some of the things we wanted. ## What's next for Yoga Yogi We want to put Yoga Yogi on phones! This would allow Yoga Yogi to guide you through his practices, anytime and anywhere. We also want to train Yoga Yogi on more poses, allowing Yoga Yogi to give you step-by-step feedback rather than broader statements.
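A minimal sketch of the landmark-to-graph step described in the build section, assuming MediaPipe's Pose solution (the sample file name is hypothetical; the real pipeline feeds these graphs into the graph neural network):

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def image_to_pose_graph(image_path):
    """Turn a yoga-pose photo into a simple graph: landmark coordinates plus skeleton edges."""
    image = cv2.imread(image_path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    nodes = [(lm.x, lm.y, lm.visibility) for lm in results.pose_landmarks.landmark]
    edges = list(mp_pose.POSE_CONNECTIONS)   # pairs of landmark indices forming the skeleton
    return {"nodes": nodes, "edges": edges}

graph = image_to_pose_graph("warrior_pose.jpg")   # hypothetical sample image
if graph:
    print(len(graph["nodes"]), "landmarks,", len(graph["edges"]), "edges")
```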
winning
## Check out our site -> [Saga](http://sagaverse.app) ## Inspiration There are few better feelings in the world than reading together with a child that you care about. “Just one more story!” — “I promise I’ll go to bed after the next one” — or even simply “Zzzzzzz” — these moments forge lasting memories and provide important educational development during bedtime routines. We wanted to make sure that our loved ones never run out of good stories. Even more, we wanted to create a unique, dynamic reading experience for kids that makes reading even more fun. After helping to build the components of the story, kids are able to help the character make decisions along the way. “Should Balthazar the bear search near the park for his lost friend? Or should he look in the desert?” These decisions help children learn and develop key skills like decisiveness and action. The story updates in real time, ensuring an engaging experience for kids and parents. Through copious amounts of delirious research, we learned that children can actually learn better and retain more when reading with parents on a tablet. After talking to 8 users (parents and kiddos) over the course of the weekend, we defined our problem space and set out to create a truly “Neverending Story.” ## What it does Each day, *Saga* creates a new, illustrated bedtime story for children aged 0-7. Using OpenAI technology, the app generates and then illustrates an age- and interest-appropriate story based on what they want to hear and what will help them learn. Along the way, our application keeps kids engaged by prompting decisions, like a real-time choose-your-own-adventure story. We’re helping parents broaden the stories available for their children — imprinting values of diversity, inclusion, community, and a strong moral compass. With *Saga*, parents and children can create a universe of stories, with their specific interests at the center. ## How we built it We took an intentional approach to developing a working MVP: * **Needs finding:** We began with a desire to uncover a need and build a solution based on user input. We interviewed 8 users over the weekend (parents and kids) and used their insights to develop our application. * **Defined MVP:** A deployable application that generates a unique story and illustrations while allowing for dynamic reader inputs using OpenAI. We indexed on story, picture, and educational quality over reproducibility. * **Tech Stack:** We used the latest LLM models (GPT-3 and DALLE-2), Flutter for the client, a Node/Express backend, and MongoDB for data management * **Prompt Engineering:** Finding the limitations of the underlying LLM technology and using guess-and-check until we narrowed down the prompt to produce more consistent results. We explored borderline use cases to learn where the model breaks. * **Final Touches:** Quality control and lots of tweaking of the image prompting functionality ## Challenges we ran into Our biggest challenges revolved around fully understanding the power of, and the difficulties stemming from, prompt generation for OpenAI. This struggle hit us on several different fronts: 1. **Text generation** - Early on, we asked for specific stories with prompts resembling “write me a 500-word story.” Unsurprisingly, the API completely disregarded the constraints, and the outputs were similar regardless of how we bounded the word count. We eventually became more familiar with the structure of quality prompts, but we hit our heads against this particular problem for a long time. 2.
**Illustration generation** - We weren’t able to predictably write OpenAI illustration prompts that provided consistently quality images. This was a particularly difficult problem for us since we had planned on having a consistent character illustration throughout the story. Eventually, we found style modifiers to help bound the problem. 3. **Child-safe content** - We wanted to be completely certain that we only presented safe and age-appropriate information back to the users. With this in mind, we built several layers of passive and active protection to ensure all content is family friendly. ## What we learned So many things about OpenAI! 1. Creating consistent images using OpenAI generation is super hard, especially when focusing on one primary protagonist. We addressed this by specifically using art styles to decrease the variability between images. 2. GPT-3's input / output length limitations are much more stringent than ChatGPT's -- this meant we had to be pretty innovative with how we maintained the context over the course of 10+ page stories. 3. How to reduce overall response time while using OpenAI's API, which was really important when generating so many images and using GPT-3 to describe and summarize so many things. 4. Simply instructing GPT to not do something doesn’t seem to work as well as carefully crafting a prompt of behavior you would like it to model. You need to trick it into thinking it is someone or something -- from there, it will behave. ## Accomplishments that we're proud of We’re super excited about what we were able to create given that this is the first hackathon for 3 of our team members! Specifically, we’re proud of: * Developing a fun solution to help make learning engaging for future generations * Solving a real need for people in our lives * Delivering a well-scoped and functional MVP based on multiple user interviews * Integrating varied team member skill sets from barely technical to full-stack ## What's next for Saga ### **Test and Iterate** We’re excited to get our prototype project in the hands of users and see what real-world feedback looks like. Using this customer feedback, we’ll quickly iterate and make sure that our application is really solving a user need. We hope to get this on the App Store ASAP!! ### **Add functionality** Based on the feedback that we’ll receive from our initial MVP, we will prioritize additional functionality: **Reading level that grows with the child** — adding more complex vocabulary and situations for a story and character that the child knows and loves. **Allow for ongoing universe creation** — saving favorite characters, settings, and situations to create a rich, ongoing world. **Unbounded story attributes** — rather than prompting parents with fixed attributes, give an open-ended prompt for more control of the story, increasing child engagement **Real-time user feedback on a story to refine the prompts** — at the end of each story, capture user feedback to help personalize future prompts and stories. ### **Monetize** Evaluate unit economics and determine the best path to market. Current possible ideas: * SaaS subscription based on one book per day or unlimited access * Audible tokens model to access a fixed amount of stories per month * Identify and partner with mid-market publishers to license IP and leverage existing fan bases * Whitelabel the solution on a services level to publishers who don’t have a robust engineering team ## References <https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00677/full>
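As a rough illustration of the style-pinned illustration prompting described in Saga's build and challenge notes, here is a minimal sketch using the older openai Python SDK (0.x); the style string and function name are assumptions, and Saga's actual backend is Node/Express rather than Python.

```python
# Sketch of style-constrained illustration prompting (assumption: older openai-python 0.x SDK;
# Saga's real backend is Node/Express, so this only illustrates the idea).
import openai

STYLE = "children's storybook watercolor illustration, soft pastel colors, no text"

def illustrate_page(page_summary: str) -> str:
    """Generate one illustration URL for a page, pinning the art style to reduce variance."""
    prompt = f"{page_summary}, {STYLE}"
    result = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return result["data"][0]["url"]
```

Pinning a single style string across every page is one simple way to keep illustrations visually consistent without needing a consistent character model.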
## Inspiration

Our team is made of story lovers. For as long as we can remember, we have been seeking to consume and create stories wherever we go. As we discovered our vastly different reading methodologies due to dyslexia, varying native languages, and differentiated learning experiences, we sought to create a platform that allows all adventurers to immerse themselves in stories. Journey was created from a desire to enhance users’ reading experiences from the very beginning: childhood.

During our ideation process, our research indicated two major findings: 1) Children who were regularly read to showed significantly higher levels of reading comprehension in developing years. 2) Time spent reading decreases as children age because they expect to comprehend harder-to-read stories less. This is especially prevalent when children are transitioning from picture-books to novels.

As we contemplated ways to immerse children in worlds of literature, the idea of integrating AI was presented as a powerful catalyst for engagement. Educators could use this tool to bring stories to life and allow students to clarify their questions first-hand, thus transcending traditional reading methods and innovating the educational experience. Drawing from our personal experiences, our early ideas involved using AI-generated text-to-speech to deliver literature to students struggling with dyslexia or English as a second language. However, after many rounds of idea and product development, we are proud to deliver much more than simple text-to-speech. We are proud to have channelled our personal stories into aiding others in developing their own stories as well.

## What it does

Journey enables readers to interact in real time with book characters as they explore stories. Customizable conversations allow users to gain explicit insights regarding plot, settings, and even characters’ opinions of each other. Journey’s interactable characters are unlocked in time with their introduction in the book—in other words, readers get to meet characters such as Ron Weasley at the same time as Harry Potter! Journey’s mythical interface and speech-to-text features create an exciting visual and auditory experience. Watch as settings magically appear as you read, listen to Journey’s storyteller weave your favourite tales, and experience the beauty of a great book. Journey will break down daunting chunks of text into bite-sized pieces and allow users to enjoy a comprehensive reading experience. By simply uploading a PDF book file, you can start your reading journey!

## How we built it

Development started by fine-tuning AI prompts based on story context, who the user is roleplaying, and who the character is, to perfectly match the kind of response we had in mind. We aimed to generate creative responses that conveyed information relevant up to the point in the story the reader is at, while mimicking the emotions of the character the AI is portraying. An omniscient narrator is also available for questioning by the user. The narrator knows everything about the story and provides an unbiased and objective view of the story. This can be used for clarification or a brief summary of the story. To gather the material, novels were obtained in PDF form, parsed through, and split into pages. The pages are displayed in the main application interface, providing the opportunity to interactively parse through the pages in the list.
The AI intelligently knows where the reader is when selecting a page based on its content; thus, only allowing the user to talk to a character that has been introduced up to that page. Journey’s visual element was brought to life with the aid of Open AI’s Dall-E, an image generation tool. Each character and page background has a unique generation made by the AI through a simple API call. Each avatar is displayed next their character under the ‘Character Name’ section. ## Challenges we ran into * Integrating each feature and tool we used into one cohesive product (TTS, character generation, string parsing, character responses, and background art generation) * Staying conscious of our budget while creating API requests to test our code * Preventing characters from spoiling later parts of the book or breaking the fourth wall when asking them questions * Training the AI to tailor responses to emotions based off characters in the book depending on who they’re interacting with * Matching the different TTS voices OpenAI offers depending on the character they’re narrating as * Staying hydrated and healthy while racing to meet the 36-hour deadline! ## Accomplishments that we're proud of * This was Anthony Botticchio, Larry Han, and Claire Hu’s first ever hackathon and Claire’s first-time programming! * Elements in our Logo and UI/UX were 100% drawn by hand using vector art on Figma! We wanted to follow an RPG game style theme, so custom making everything was the only way to go. * One of our team members has personally faced learning disabilities, so creating a solution for firsthand problems was incredibly fulfilling. We believe this experience gave us a deeper understanding of the current pain points, allowing us to craft a tailored solution for the problem space we identified. ## What we learned * Hackathons are not easy…one hour seems to go by in fifteen minutes! * OpenAI is extremely powerful, however tremendous training is needed to make it robust * Grouping designs in Figma is essential—otherwise your frames will get very messy very fast! ## What's next for Journey We hope to continue developing Journey and helping students learn and enjoy their reading experiences. As the problem space we chose to tackle is close to our hearts, we truly believe others struggling with similar experiences deserve the chance to explore wonderful stories in an easier fashion than we have. Reading is instrumental in the development of children, and we hope that by continuing to develop Journey we can give a generation of children a love for reading! ## References: The Washington Post. (2015, April 29). Why kids lose interest in reading as they get older. <https://www.washingtonpost.com/news/answer-sheet/wp/2015/04/29/why-kids-lose-interest-in-reading-as-they-get-older/> Reading Rockets. (n.d.). Why some children have difficulties learning to read. <https://www.readingrockets.org/topics/struggling-readers/articles/why-some-children-have-difficulties-learning-read> Child Mind Institute. (n.d.). Why is it important to read to your child? <https://childmind.org/article/why-is-it-important-to-read-to-your-child/> The Ohio State University College of Education and Human Ecology. (n.d.). The importance of reading to kids daily. <https://ehe.osu.edu/news/listing/importance-reading-kids-daily-0> National Assessment of Educational Progress. (n.d.). The Nation’s Report Card: Reading Achievement. <https://www.nationsreportcard.gov/reading/nation/achievement/?grade=4> Yale Center for Dyslexia & Creativity. (n.d.). 
Dyslexia FAQ. <https://dyslexia.yale.edu/dyslexia/dyslexia-faq/> Cognitive Market Research. (n.d.). English Language Learning Market Report. <https://www.cognitivemarketresearch.com/english-language-learning-market-report> Government of Canada. (n.d.). Official Languages and Bilingualism Publications: Statistics. <https://www.canada.ca/en/canadian-heritage/services/official-languages-bilingualism/publications/statistics.html> DoteFL. (n.d.). English Language Statistics. <https://www.dotefl.com/english-language-statistics/> National Center for Education Statistics. (n.d.). English Learners in Public Schools. <https://nces.ed.gov/programs/coe/indicator/cgf/english-learners> Statista. (n.d.). Resident population of Canada by age group. <https://www.statista.com/statistics/444868/canada-resident-population-by-age-group/> Statista. (n.d.). Population of the United States by sex and age. <https://www.statista.com/statistics/241488/population-of-the-us-by-sex-and-age/>
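Circling back to Journey's build notes above: a minimal sketch, under assumed data shapes, of the page-gated character unlocking idea, where a character only becomes available to chat once their name has appeared in the pages read so far. All names here are hypothetical, not the team's actual code.

```python
# Illustrative sketch (names hypothetical): only characters introduced on or before the
# reader's current page are available to chat, mirroring Journey's "unlock with the story" rule.
def available_characters(pages: list[str], characters: list[str], current_page: int) -> list[str]:
    read_so_far = " ".join(pages[: current_page + 1]).lower()
    return [name for name in characters if name.lower() in read_so_far]

# Example: a sidekick only becomes selectable once their name has appeared in the text read so far.
```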
## Inspiration We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders go undiagnosed or misdiagnosed often leading to these individuals constantly struggling to read and write which is an integral part of your education. With such learning disabilities, learning a new language would be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward individuals. ## What it does ReadRight offers interactive language lessons but with a unique twist. It reads out the prompt to the user as opposed to it being displayed on the screen for the user to read themselves and process. Then once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way individuals with reading and writing disabilities can still hone their skills in a new language. ## How we built it We built the frontend UI using React, Javascript, HTML and CSS. For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM. Finally, for user authentication, we made use of Firebase. ## Challenges we faced + What we learned When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge. Third-party integration took us significant time to test and debug. Secondly, we struggled with the generation of prompts for the user to repeat and using AI to implement that. ## Accomplishments that we're proud of This was the first time for many of our members to be integrating AI into an application that we are developing so that was a very rewarding experience especially since AI is the new big thing in the world of technology and it is here to stay. We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things. ## What's next for ReadRight As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
## Inspiration

We all understand how powerful ChatGPT is right now, and we thought it would be really cool to make it possible to call ChatGPT directly for help. This not only saves time, it is also more convenient. People do not need to be in front of a computer to access ChatGPT; they can simply call a number and that is it. This also has potential for accessibility: people with visual impairments might struggle to access ChatGPT through a computer. Now, this will not be an issue.

## What it does

An application that allows users to make a phone call to ChatGPT for easier access. Our goal with this project is to make ChatGPT more convenient and accessible. People can access it with just Wi-Fi and a phone number.

## How we built it

We use the Twilio API to set up the call service. The call is connected to our backend code, which uses Flask and the Twilio API. The code receives speech from the user and translates it into text so that ChatGPT can understand it, then feeds the text to ChatGPT through the OpenAI API. Finally, the result from ChatGPT is fed back to the user through the call, and the user may choose between continuing the call or hanging up. Meanwhile, all the call history is recorded, and the user may access it through our website using a password generated by our code.

## Challenges we ran into

There were a lot of challenges in the front end, believe it or not. It was hard to design a good way to represent all the data we collected from the calls and to connect it from the backend to the front end. Also, setting up Twilio was kind of a challenge since no one on our team was familiar with anything about call services.

## Accomplishments that we're proud of

We finished the majority of our code fairly quickly, which we are really proud of, and this led us to explore more options. In the end, we implemented a lot more features in our project, like a login system and call-history collection.

## What we learned

We learned a lot of things. We never knew that services like Twilio existed, and we are genuinely impressed with what they can accomplish. Since we had some free time, we also learned something about lip-syncing audio with video using ML algorithms. Unfortunately, we did not implement this as it was way too much to do and we did not have enough time. We went to a lot of workshops. They had some really interesting stuff.

## What's next for our group

We will get ready for the next hackathon, and make sure we can do better.
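A hedged sketch of the call flow described above, wiring a Flask webhook to Twilio's speech `<Gather>` and the OpenAI API; route names, the model choice, and prompts are assumptions rather than the team's actual implementation.

```python
# Minimal sketch of a voice webhook: Twilio transcribes the caller's speech, we pass it to
# OpenAI, and speak the reply back. Routes and prompts are hypothetical.
import openai
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    response = VoiceResponse()
    gather = Gather(input="speech", action="/answer", method="POST")
    gather.say("Ask ChatGPT anything after the tone.")
    response.append(gather)
    return str(response)

@app.route("/answer", methods=["POST"])
def answer():
    question = request.form.get("SpeechResult", "")
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    response = VoiceResponse()
    response.say(chat.choices[0].message["content"])
    response.redirect("/voice")  # let the caller keep the conversation going
    return str(response)
```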
## Inspiration

The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. We wanted to utilize advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.

## What it does

Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.

## How we built it

With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, and used it to communicate with a Flask backend that runs our Python scripts involving API calls and such. We have API calls to OpenAI's text embeddings, Cohere's xlarge model, GPT-3's API, OpenAI's Whisper speech-to-text model, and several modules for getting an mp4 from a YouTube link, text from a PDF, and so on.

## Challenges we ran into

We had problems getting the Flask backend to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected. Finally, since the latency of sending information back and forth between the front end and the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript.

## Accomplishments that we're proud of

Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on the extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness).

## What we learned

As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper.
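For illustration, here is a minimal sketch of the embedding-based page ranking described above, assuming the older openai Python SDK and the ada-002 embedding model; in practice the page embeddings would be precomputed and cached, and the function names are hypothetical.

```python
# Minimal sketch of the page-ranking step (assumes the older openai-python SDK and
# text-embedding-ada-002; function names are illustrative, not Wise Up's actual code).
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    out = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(out["data"][0]["embedding"])

def top_pages(question: str, pages: list[str], k: int = 3) -> list[str]:
    q = embed(question)
    scored = []
    for page in pages:
        p = embed(page)  # in practice these would be precomputed once per document and cached
        score = float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))
        scored.append((score, page))
    return [page for _, page in sorted(scored, reverse=True)[:k]]
```

The top-k pages would then be stuffed into a GPT prompt so the model answers from the retrieved context rather than from memory.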
## Inspiration: ChatTeach.io was inspired by the need for a more personalized and interactive learning experience. As college students, something we have all faced is the dreaded two-hour lecture video, where it is quite literally impossible to pay attention completely and to not get distracted. This oftentimes leads to important content in the video being missed, and a usually unhappy student. We saw a gap in the market for a system that could provide customized content to each individual user and offer them the opportunity to learn from virtual teachers that look and talk like humans, making the learning process more engaging and fun. ## What it Does: ChatTeach.io is an online learning platform that uses advanced AI technologies such as GPT and deepfake to provide a more personalized and interactive learning experience for users. Users can input their questions and receive responses in natural language from virtual teachers that are created using deepfake technology. Imagine having Spiderman teaching you physics or Zendaya teaching you differential equations. In this way, students are less likely to get bored and are more inclined to pay attention to the video. ## Accomplishments That We’re Proud Of: We use advanced AI technologies like GPT and deepfake to create a personalized and engaging learning experience. Our speech-to-text and text-to-speech conversions enable natural conversations between users and virtual teachers. This approach benefits both children, who can learn from their favorite superhero, and college students, who can make long lectures more interesting. We are proud of our technology and the positive impact it can have on learners of all ages. ## What We Learned: We learned about the potential of AI in education and the importance of accurate speech-to-text and text-to-speech conversion. We also learned to integrate APIs and collaborate efficiently. Our experience gave us a deeper appreciation for AI's capabilities and the power of teamwork in bringing innovative ideas to work. ## How We Built It: Our project involved several steps. Firstly, we integrated a speech-to-text API that transcribed the user's voice input into text. Then, we implemented ChatGPT to generate a relevant response to the user's question based on the script. Finally, we utilized a text-to-speech API to convert the text output into verbal speech. Our ultimate goal was to provide users with the ability to choose the voice and appearance of the virtual teacher in the video, allowing for a more personalized and engaging learning experience. ## Challenges We Ran Into: Challenges we faced when building this project were uploading a visual character onto the video screen (we were not able to implement this due to the constrained time period, but plan to in the future). We also ran into issues with choosing which API to use as many of them only worked on Windows computers, and all four members of our team have Macbooks. ## What is Next for Chat Teach: We envision expanding our project to offer users the ability to input custom audio and character selections, providing even greater personalization to the learning experience. For instance, Spiderman could teach physics with Whitney Houston's voice. Additionally, our AI-generated content has the potential to not only deliver pre-existing information but also to tailor content to each user's needs. 
By taking a simple quiz, a student watching an AP Calculus review video can indicate which chapters they already know, and the AI will generate a custom video without those sections. Our platform can also be used to conduct mock interviews by generating a random person for users to ask questions to. Overall, our aim is to offer a highly customizable and interactive learning tool with a wide range of potential applications. We hope that ChatTeach can be used to make learning more fun and engaging for students worldwide.
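One plausible way to realize the character personalization described above is a persona system prompt on a chat completion; this is only a sketch, and the model choice and wording are assumptions rather than ChatTeach's exact pipeline.

```python
# Sketch of the persona idea (e.g., a superhero teaching physics); the system prompt and
# model choice are assumptions, not ChatTeach's exact implementation.
import openai

def teach(character: str, subject: str, question: str) -> str:
    system = (
        f"You are {character}, teaching {subject} to a student. "
        "Stay in character, keep explanations short, and end with a check-in question."
    )
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message["content"]

# teach("Spiderman", "physics", "Why does a web swing feel like free fall at the top?")
```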
## Inspiration The world is constantly chasing after smartphones with bigger screens and smaller bezels. But why wait for costly display technology, and why get rid of old phones that work just fine? We wanted to build an app to create the effect of the big screen using the power of multiple small screens. ## What it does InfiniScreen quickly and seamlessly links multiple smartphones to play videos across all of their screens. Breathe life into old phones by turning them into a portable TV. Make an eye-popping art piece. Display a digital sign in a way that is impossible to ignore. Or gather some friends and strangers and laugh at memes together. Creative possibilities abound. ## How we built it Forget Bluetooth, InfiniScreen seamlessly pairs nearby phones using ultrasonic communication! Once paired, devices communicate with a Heroku-powered server written in node.js, express.js, and socket.io for control and synchronization. After the device arrangement is specified and a YouTube video is chosen on the hosting phone, the server assigns each device a region of the video to play. Left/right sound channels are mapped based on each phone's location to provide true stereo sound support. Socket-emitted messages keep the devices in sync and provide play/pause functionality. ## Challenges we ran into We spent a lot of time trying to implement all functionality using the Bluetooth-based Nearby Connections API for Android, but ended up finding that pairing was slow and unreliable. The ultrasonic+socket.io based architecture we ended up using created a much more seamless experience but required a large rewrite. We also encountered many implementation challenges while creating the custom grid arrangement feature, and trying to figure out certain nuances of Android (file permissions, UI threads) cost us precious hours of sleep. ## Accomplishments that we're proud of It works! It felt great to take on a rather ambitious project and complete it without sacrificing any major functionality. The effect is pretty cool, too—we originally thought the phones might fall out of sync too easily, but this didn't turn out to be the case. The larger combined screen area also emphasizes our stereo sound feature, creating a surprisingly captivating experience. ## What we learned Bluetooth is a traitor. Mad respect for UI designers. ## What's next for InfiniScreen Support for different device orientations, and improved support for unusual aspect ratios. Larger selection of video sources (Dailymotion, Vimeo, random MP4 urls, etc.). Seeking/skip controls instead of just play/pause.
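InfiniScreen's server is written in node.js/express/socket.io, so the sketch below is only a language-neutral illustration (in Python) of the region-assignment and left/right channel mapping idea described above; the grid layout and return shape are assumptions.

```python
# Language-agnostic sketch of the region/stereo assignment; the real logic lives in the
# node.js/socket.io server, so this Python version only illustrates the partitioning math.
def assign_regions(rows: int, cols: int, width: int, height: int):
    """Return, per device index, its crop rectangle and audio channel."""
    cell_w, cell_h = width // cols, height // rows
    assignments = []
    for r in range(rows):
        for c in range(cols):
            region = (c * cell_w, r * cell_h, cell_w, cell_h)  # x, y, w, h
            channel = "left" if c < cols / 2 else "right"      # pan audio by column
            assignments.append({"region": region, "channel": channel})
    return assignments

# assign_regions(2, 2, 1280, 720) -> four quadrants, with the left column on the left channel
```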
## About Us

Discord Team Channel: #team-64
omridan#1377, dylan28#7389, jordanbelinsky#5302, Turja Chowdhury#6672
Domain.com domain: positivenews.space

## Inspiration

Over the last year, headlines across the globe have been overflowing with negative content which clouded over any positive information. In addition, everyone has been so focused on what has been going on in other corners of the world that they have not been focusing on their local community. We wanted to bring some pride and positivity back into everyone's individual community by spreading positive headlines from the user's location. Our hope is that our contribution shines a light in these darkest of times and spreads a message of positivity to everyone who needs it!

## What it does

Our platform utilizes the general geolocation of the user along with a filtered API to produce positive articles about the user's local community. The page displays all the articles by showing the headlines and a brief summary, and the user has the option to go directly to the source of the article or view the article on our platform.

## How we built it

The core of our project uses the Aylien news API to gather news articles from a specified country and city while reading only positive sentiments from those articles. We then used the IPStack API to gather the user's location via their IP address. To reduce latency and to maximize efficiency, we used JavaScript in tandem with React, as opposed to a backend solution, to code a filtration of the data received from the APIs, display the information and embed the links. Finally, using a combination of React, HTML, CSS and Bootstrap, a clean, modern and positive design for the front end was created to display the information gathered by the APIs.

## Challenges we ran into

The most significant challenge we ran into while developing the website was determining the best way to filter through news articles and classify them as "positive". Due to time constraints, the route we went with was to create a library of common keywords associated with negative news, filtering articles with the respective keywords out of the dictionary pulled from the API.

## Accomplishments that we're proud of

We managed to support a standard Bootstrap layout comprised of a grid consisting of rows and columns to enable both responsive design for compatibility purposes and more content on every device. We also utilized React functionality to enable randomized background gradients from a selection of pre-defined options to add variety to the site's appearance.

## What we learned

We learned a lot of valuable skills surrounding remote group work. While designing this project, we were working across multiple frameworks and environments, which meant we couldn't rely on utilizing just one location for shared work. We made combined use of Repl.it for core HTML, CSS and Bootstrap, and GitHub in conjunction with Visual Studio Code for the JavaScript and React workloads. While using these environments, we made use of Discord, IM group chats, and Zoom to allow for constant communication and breaking out into sub-groups based on how work was being split up.

## What's next for The Good News

In the future, the next major feature to be incorporated is one which we titled "Travel the World". This feature will utilize Google's Places API to incorporate an embedded Google Maps window in a pop-up modal, which will allow the user to search or navigate and drop a pin anywhere around the world.
The location information from the Places API will replace those provided by the IPStack API to provide positive news from the desired location. This feature aims to allow users to experience positive news from all around the world, rather than just their local community. We also want to continue iterating over our design to maximize the user experience.
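The keyword filter described in the challenges section might look roughly like the sketch below; the real filtration runs in JavaScript/React on the client, and the keyword list and article fields here are invented for illustration.

```python
# Illustration of the keyword-based negativity filter; the keyword set is a made-up sample
# and the article dict shape is assumed, not taken from the team's code.
NEGATIVE_KEYWORDS = {"death", "crash", "war", "fraud", "outbreak", "shooting"}

def is_positive(article: dict) -> bool:
    text = f"{article.get('title', '')} {article.get('summary', '')}".lower()
    return not any(word in text for word in NEGATIVE_KEYWORDS)

def positive_articles(articles: list[dict]) -> list[dict]:
    return [a for a in articles if is_positive(a)]
```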
## Inspiration In many developed countries across the world, the population is rapidly aging. This poses a variety of issues to senior citizens, including social isolation, an overburdened healthcare system unable to meet their needs, and the widespread effects of neurodegenerative conditions. We aimed to build a solution which would address all three of these issues in a way which is easily accessible and empowering to senior citizens. ## What it does MemoryLane allows senior citizens to relive and share their cherished memories. The web application combines three main functionalities, which include a journaling and recall feature for important memories, an AI-powered match and chat system for users to discuss their experiences which are shared with other users, and an analytics dashboard which can be used by healthcare professionals to track key indicators of neurodegenerative conditions. Overall, MemoryLane allows users to not only keep their memories fresh but also weave a tapestry of connections with others with similar life experiences. ## How we built it In order to develop a clean and responsive front-end and versatile back-end, we used Reflex.dev to develop entirely in Python. We also used the InterSystems IRIS database to easily perform vector search as well as other database operations to support the backend functionalities required by MemoryLane. Additionally, we made use of the Together.AI inference API to generate embeddings to match users based on shared experiences, perform sentiment analysis to find trends within memory recall data, and to create sample data to test our web app with. Finally, we used Google Cloud to implement speech-to-text functionality to increase ease of access to our platform for senior citizens. The majority of our app was built with Python, with a little JavaScript. ## Challenges we ran into As 2 of our team members had never done full-stack dev before and one was attending his first hackathon, learning the nuances of new frameworks was initially a challenge, especially getting our environments set up. We’re incredibly grateful to the supportive mentors and sponsors for helping us get unstuck when we ran into issues, which indubitably helped us build our final product. ## Accomplishments that we're proud of We’re very proud of our clean, intuitive UI which aims to make the product as accessible as possible to our target audience, senior citizens. Additionally, we believe that MemoryLane is a truly unique product which fills a niche which hasn’t been focused on before social media for the elderly, especially in combination with its potential benefits of improving the healthcare industry by aggregating data about the elderly. Also, half of our team was able to go from near-zero web dev knowledge to familiarity with important tools and techniques, which we thought was very representative of the spirit of hackathons – coming together to meet new people and learn new things in a fast-paced creative environment. ## What we learned Our journey with MemoryLane has been an enlightening dive into several new technologies. We harnessed the power of Reflex.dev for frontend and full stack development, explored the nuances in our data with InterSystems IRIS’s vector search on text embeddings from TogetherAI, and learned how to bring text to life with Google Cloud. Together AI has also become our ally in understanding our users' needs and narratives with natural language processing. 
## What's next for MemoryLane Looking to the horizon, we are definitely looking into expanding MemoryLane’s reach. Our roadmap includes scaling our solution and refining our data model to improve performance, and looking into business models which are sustainable and align with our mission. We envision forming partnerships with healthcare providers, memory care centers, and senior living communities. Integrating IoT could also redefine ease of use for seniors. Keeping innovation in mind, we'll dive deeper into Reflex's capabilities and explore bespoke AI models with Together AI. We aim to improve the technical aspects of our platform as well, including venturing into voice tone analysis to add another layer of emotional intelligence to our app. **We believe that MemoryLane is not just a walk in the past – it's a stride into the future of senior healthcare.**
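As a toy illustration of the trend-tracking idea behind the analytics dashboard, the sketch below flags a drop in average sentiment across recent journal entries; the window size, threshold, and function name are assumptions, and real scores would come from the Together AI sentiment analysis.

```python
# Illustrative sketch of the trend idea behind the analytics dashboard (window size and
# threshold are invented; real sentiment scores would come from the sentiment analysis step).
def flag_recall_decline(sentiment_scores: list[float], window: int = 7, drop: float = 0.2) -> bool:
    """Flag when the recent average sentiment drops well below the long-term average."""
    if len(sentiment_scores) < 2 * window:
        return False  # not enough history yet
    recent = sum(sentiment_scores[-window:]) / window
    baseline = sum(sentiment_scores[:-window]) / (len(sentiment_scores) - window)
    return (baseline - recent) >= drop
```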
## About We are team #27 on discord, team members are: anna.m#8841, PawpatrollN#9367, FrozenTea#9601, bromainelettuce#8008. Domain.com challenge: [beatparty.tech] ## Inspiration Due to the current pandemic, we decided to create a way for people to exercise in the safety of their home, in a fun and competitive manner. ## What it does BeatParty is an augmented reality mobile app designed to display targets for the user to hit in a specific pattern to the beat of a song. The app has a leaderboard to promote healthy competition. ## How we built it We built BeatParty in Unity, using plug-ins from OpenPose, and echoAR's API for some models. ## Challenges we ran into Without native support from Apple ARkit and Google AR core, on the front camera, we had to instead use OpenPose, a plug-in that would not be able to take full advantage of the phone's processor, resulting in a lower quality image. ## What we learned **Unity:** * We learned how to implement libraries onto unity and how to manipulate elements within such folder. * We learned how to do the basics in Unity, such as making and creating hitboxes * We learned how to use music and create and destroy gameobjects. **UI:** * We learned how to implement various UI components such as making an animated logo alongside simpler things such as using buttons in Unity. ## What's next for BeatParty The tracking software can be further developed to be more accurate and respond faster to user movements. We plan to add an online multiplayer mode through our website ([beatparty.tech]). We also plan to use EchoAR to make better objects so that the user can interact with, (ex. The hitboxes or cosmetics). BeatParty is currently an android application and we have the intention to expand BeatParty to both IOS and Windows in the near future.
## Inspiration As students who listen to music to help with our productivity, we wanted to not only create a music sharing application but also a website to allow others to discover new music, all through where they are located. We were inspired by Pokemon-Go but wanted to create a similar implementation with music for any user to listen to. Anywhere. Anytime. ## What it does Meet Your Beat implements a live map where users are able to drop "beats" (a.k.a Spotify beacons). These beacons store a song on the map, allowing other users to click on the beacon and listen to the song. Using location data, users will be able to see other beacons posted around them that were created by others and have the ability to "tune into" the beacon by listening to the song stationed there. Multiple users can listen to the same beacon to simulate a "silent disco" as well. ## How I built it We first customized the Google Map API to be hosted on our website, as well as fetch the Spotify data for a beacon when a user places their beat. We then designed the website and began implementing the SQL database to hold the user data. ## Challenges I ran into * Having limited experience with Javascript and API usage * Hosting our domain through Google Cloud, which we were unaccustomed to ## Accomplishments that I'm proud of Our team is very proud of our ability to merge various elements for our website, such as the SQL database hosting the Spotify data for other users to access on the website. As well, we are proud of the fact that we learned so many new skills and languages to implement the API's and database ## What I learned We learned a variety of new skills and languages to help us gather the data to implement the website. Despite numerous challenges, all of us took away something new, such as web development, database querying, and API implementation ## What's next for Meet Your Beat * static beacons to have permanent stations at more notable landmarks. These static beacons could have songs with the highest ratings. * share beacons with friends * AR implementation * mobile app implementation
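A minimal sketch of the "beacons near me" lookup implied above, using a haversine distance check; the beacon fields and radius are assumptions, and the team's actual version queries their SQL database behind the Google Maps front end.

```python
# Sketch of the "beacons near me" query logic; schema and radius are assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_beacons(beacons, user_lat, user_lon, radius_km=1.0):
    """beacons: iterable of dicts like {"lat": ..., "lon": ..., "spotify_uri": ...}."""
    return [
        b for b in beacons
        if haversine_km(user_lat, user_lon, b["lat"], b["lon"]) <= radius_km
    ]
```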
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration

One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track.

## What it does

Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests local small business alternatives where they exist, so you can help support your community!

## How we built it

React front-end, MongoDB, Express REST server.

## Challenges we ran into

Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed in our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics.

## Completion

In its current state IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics.

## What we learned

Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Along with creating our extension's UI using React.js, this was a new experience for everyone. A few members of the team were also able to spend the weekend learning how to create an Express.js API with a MongoDB database, all from scratch!

## What's next for IDNI - I Don't Need It!

We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually, with one final pass over these various decision metrics to output our final verdict. Then finally, publish to the Chrome Web Store!
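A toy version of the decision rule hinted at above, based only on a spending limit and blacklist; the thresholds, field names, and messages are hypothetical, and the real extension combines several metrics scraped from the product page.

```python
# Toy version of the recommendation rule (thresholds and field names are hypothetical).
def should_buy(price: float, monthly_limit: float, spent_this_month: float,
               blacklisted: bool) -> str:
    if blacklisted:
        return "Skip it: this item is on your blacklist."
    remaining = monthly_limit - spent_this_month
    if price > remaining:
        return "You don't need it: this would blow your monthly budget."
    if price > 0.5 * remaining:
        return "Think it over: this uses most of what's left this month."
    return "Within budget, but consider a local alternative first."
```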
## Inspiration

We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept except towards making more sustainable purchases online.

## What it does

Recommends sustainable and local business alternatives when shopping online.

## How we built it

Front-end was built with React.js and Bootstrap. The back-end was built with Python, Flask and CockroachDB.

## Challenges we ran into

Difficulties setting up the environment across the team, especially with cross-platform development in the back-end. Extracting the current URL from a webpage was also challenging.

## Accomplishments that we're proud of

Creating a working product! Successful end-to-end data pipeline.

## What we learned

We learned how to implement a Chrome Extension. Also learned how to deploy to Heroku, and set up/use a database in CockroachDB.

## What's next for Conscious Consumer

First, it's important to expand to make it easier to add local businesses. We want to continue improving the relational algorithm that takes an item on a website, and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data easier.
## Inspiration

Lyft's round up and donate system really inspired us here. We wanted to find a way to both benefit users and help society. We all want to give back somehow, but sometimes we don't know how, or we want to donate but don't really know how much we can give back or whether we can afford it. We wanted an easy way incorporated into our lives and spending habits. This would allow us to reach a wider number of users and utilize the power of the consumer society.

## What it does

With a Chrome extension like "Heart of Gold", the user gets every purchase's round-up to the nearest dollar accumulated (for example: a purchase of $9.50 rounds up to $10, so $0.50 gets tracked as the "round up"). The user gets to choose when they want to donate and which organization gets the money.

## How I built it

We built a web app/Chrome extension using JavaScript/jQuery and HTML/CSS. The Firebase JavaScript SDK helped us store the accumulated round-ups. We make an AJAX call to the PayPal API, so it took care of payment for us.

## Challenges I ran into

For all of the team, it was our first time creating a Chrome extension. For most of the team, it was our first time heavily working with JavaScript, let alone using technologies like Firebase and the PayPal API. Choosing which technology/platform would make the most sense was tough, but the Chrome extension would allow for more relevance, since a lot of people make more online purchases nowadays and an extension can run in the background and seem omnipresent. So we picked up the JavaScript language to start creating the extension. Lisa Lu integrated the PayPal API to handle donations and used HTML/CSS/JavaScript to create the extension pop-up. She also styled the user interface. Firebase was also completely new to us, but we chose to use it because it didn't require us to have a two-step process: a server (like Flask) + a database (like MySQL or MongoDB). It also helped that we had a mentor guide us through. We learned a lot about the JavaScript language (mostly that we haven't even really scratched the surface of it), and the importance of avoiding race conditions. We also learned a lot about how to strategically structure our code (having a background.js to run Firebase database updates).

## Accomplishments that I'm proud of

Veni, vidi, vici. We came, we saw, we conquered.

## What I learned

We all learned that there are multiple ways to create a product to solve a problem.

## What's next for Heart of Gold

Heart of Gold has a lot of possibilities: partnering with companies that want to advertise to users and with social good organizations, making recommendations to users on charities as well as places to shop, gamifying the experience, and expanding what a user can do with the round-up money they accumulate. Before those big dreams, cleaning up the infrastructure would be very important too.
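The round-up arithmetic at the core of the extension is simple; here is a sketch of it (the real version runs in the extension's JavaScript and persists totals to Firebase, so this Python form is purely illustrative).

```python
# Round-up arithmetic sketch; in the actual extension this lives in JavaScript with totals
# stored in Firebase.
import math

def round_up_amount(purchase_total: float) -> float:
    """$9.50 -> 0.50; whole-dollar purchases contribute nothing."""
    return round(math.ceil(purchase_total) - purchase_total, 2)

def accumulate(purchases: list[float]) -> float:
    return round(sum(round_up_amount(p) for p in purchases), 2)

# accumulate([9.50, 4.25, 12.00]) -> 1.25 available to donate
```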
## Inspiration

We wanted a box that screams when you hit it because it would be funny

## What it does

The box plays the first melody of the "Never Gonna Give You Up" song by Rick Astley when you hit it in piezo buzzer sounds (It is not a button, can sense force in any direction)

## How we built it

We used an Arduino Uno, Mini-Breadboard, touch sensor, metal ball, and an 8-Ohms speaker

## Challenges we ran into

1. Lack of an SD-Card component for storing sound files
2. Arduino Uno did not run code when unplugged from the computer with a 9V battery
3. Electronic Stores were closed/too far away when we considered all needed materials
4. We could not find/make a screaming sound with piezo buzzer sounds (using tone function)
5. Not enough time to make a better design (We went with a simple version and barely finished)

## Accomplishments that we're proud of

1. The design works as intended (not with original intention)
2. We built a box that rick rolls
3. We worked for 12 hours straight on building (albeit with some needed breaks)
4. We did something
5. We worked with an Arduino Uno despite all of us having little experience with it
6. We built a contact sensor using a touch sensor and a metal ball
7. The design looks clean

## What we learned

1. How to test components related to the Arduino
2. How to disassemble phones without damaging circuits or parts
3. How the touch sensor component works
4. The importance of planning ahead of time
5. The tone/piezo buzzer for Arduinos

## What's next for Rick Roll Box

1. Remake the box with metal sheets
2. Implement the SD Card component
3. Install sound files
4. Make the box scream or rickroll (may decide either or)
5. Draw a punchable face on it
6. Secure components and increase their resistance to shock with screws/padding
7. Install LED effects that match with the sound played

## Outsourced Code

We used rowan07's and slagestee's Rick Roll Piezo Buzzer code to incorporate the rickroll sounds into the design. Their code can be found here: [link](https://create.arduino.cc/projecthub/410027/rickroll-piezo-buzzer-a1cd11)
## Inspiration

Learning how to communicate is one of the most critical life skills that a child needs to learn during their earliest stages of life. Additionally, monitoring a child's mental health and providing the appropriate emotional support for a child in stressful situations are also crucial needs that parents are sometimes unable to fulfill. Our socially intelligent toy addresses both of these problem spaces by acting as a conversational partner that facilitates and maintains the child's mental health through heart rate detection and auditory cues.

## What it does

The toy serves as a chatbot that is able to converse with the child on a daily basis. Its Google Cloud-supported artificial intelligence allows it to not only fulfill basic requests, but also provide context-appropriate comments that can help the child learn basic social and conversational skills. Our toy also includes a heart-rate sensor that checks for abnormalities in the child's heart rate, alerts parents, and adjusts its conversational intentions accordingly.

## How we built it

We built this project using the Google AIY Voice Kit. We also used a pulse sensor to measure heart rate. We used the voice kit to process and generate voice commands and data, and the chatbot to generate emotionally supportive and engaging responses. Using TensorFlow and sequence-to-sequence ML, we attempted to improve upon existing chatbots by increasing the amount of training data they have access to.

## Challenges we ran into

We ran into various challenges along the way. None of us had a lot of machine learning experience, and we struggled when we had to come up with the best way to train our chatbot. Furthermore, we struggled while figuring out the most efficient and effective way to parse through the databases that we found.

## Accomplishments that we're proud of

The Google voice kit works extremely well and responds to all of our commands. Also, we have a really cute unicorn stuffed animal now.

## What we learned

We all got to work with a variety of unfamiliar technology during this hackathon. Although not all of our attempts were successful, trying to interface and connect each of these technologies was an extremely rewarding experience.

## What's next for Emotional and Social Support Toy

Next, we want to build a speech translation system to further help children develop new language skills. We also want to be able to use Google's facial expression recognition to generate responses based on the person's emotions. Another improvement we could make is to find ways to feed data back into the speech recognition system on a daily basis to further improve the chatbot's ability to communicate with children. We could also expand its application to healthcare contexts (i.e. hospitals could keep a few of these chatbots on hand to give to children when they don't have access to the emotional support of a direct family member or friend. The chatbot could converse with the child using language that they would understand, and act as a friend during that time of distress).
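A simplified sketch of the heart-rate alerting idea described above; the thresholds are invented for illustration (not medical guidance), and the notification hook is hypothetical, with real readings coming from the pulse sensor.

```python
# Simplified sketch of the heart-rate alerting idea; thresholds are illustrative only and
# not medical guidance, and alert_parent is a hypothetical notification hook.
def alert_parent(bpm: int) -> None:
    print(f"Alert: unusual heart rate detected ({bpm} bpm).")

def check_heart_rate(bpm: int, resting_low: int = 60, resting_high: int = 120) -> str:
    if bpm < resting_low or bpm > resting_high:
        alert_parent(bpm)
        return "comfort"   # switch the chatbot to a soothing, supportive tone
    return "normal"
```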
## Inspiration

Currently the insurance claims process is quite labour intensive. A person has to investigate the car to approve or deny a claim, so we aim to alleviate this cumbersome process and make it smooth and easy for policy holders.

## What it does

Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout.

## How we built it

The frontend is built with just static HTML, CSS and JavaScript. We used Materialize CSS to achieve some of our UI mocks created in Figma. Conveniently, we have also created our own "state machine" to make our web app more responsive.

## Challenges we ran into

> I've never done any machine learning before, let alone trying to create a model for a hackathon project. I definitely took quite a bit of time to understand some of the concepts in this field. *-Jerry*

## Accomplishments that we're proud of

> This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry*

## What we learned

> Attempting to do a challenge with very little description of what the challenge is actually asking for is like being a man stranded on an island. *-Jerry*

## What's next for Quick Quote

Things that are on our roadmap to improve Quick Quote:

* Apply Google Analytics to track users' movements and collect feedback to enhance our UI.
* Enhance our neural network model to enrich our knowledge base.
* Train our model with more evaluation to give more depth.
* Include ads (mostly from auto companies).
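One way the damage classification could map to an estimated payout, as described in the project summary, is a simple band lookup; the class names and dollar figures below are invented for illustration and are not Quick Quote's actual model output.

```python
# Toy mapping from a predicted damage class to an estimated payout band (classes and
# dollar figures are invented; the real estimate would come from the trained model).
PAYOUT_BANDS = {
    "minor": (0, 1_000),
    "moderate": (1_000, 5_000),
    "severe": (5_000, 20_000),
    "total_loss": (20_000, None),
}

def estimate_payout(predicted_class: str) -> str:
    low, high = PAYOUT_BANDS[predicted_class]
    return f"${low:,}+" if high is None else f"${low:,} to ${high:,}"
```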
## What it does

Flutter is a web app that brings excitement to your daily routine while promoting socializing with your friends. Each day, you'll receive a dare via text message with a deadline to complete and share with your friends. Take a picture of yourself performing the dare and check out your friends' dashboards to see past challenges.

## Inspiration

Creating a socializing app is important as we rebound from the effects of the COVID-19 pandemic, because it provides a way for people to connect and form relationships after in-person interactions were limited for so long. Additionally, we hope to have a positive impact on mental health by providing a sense of community and connection for people who may be feeling isolated or lonely due to the pandemic.

## How we built it

First, we set up the development environment and created the basic structure of the app using React. React allowed us to build reusable components of the app, such as the login page, user dashboards, and dare display. We used JavaScript for the main logic of the app; React allowed us to use JavaScript to handle dynamic data such as the dares and user dashboards. For the user interface, we used CSS to style the components and HTML to create their structure. We also used the React webcam library to allow the user to capture the moment. Finally, we used the React Router library to handle routing and build the final version of the application that can be deployed to a web server.

## Challenges we ran into

We are all beginners with very limited React experience, so it took a couple of hours to even get started! We had to become familiar with the React libraries in order to implement the webcam, as well as learn how to redirect from one page to another. A couple of challenges we faced when implementing the Twilio API were figuring out how to properly format the code and understanding how to add a scheduling aspect to the messages being sent out.

## Accomplishments that we're proud of

Flutter has a clean and user-friendly interface that is easy to navigate. We are proud that we were able to implement new technologies like the webcam and text-message notifications. Since we are all beginner coders with little previous exposure to JavaScript, we had to learn how to become familiar with React.

## What we learned

We learned how to work effectively in a team, communicate effectively, and delegate tasks. We have also learned how to manage our time effectively and prioritize tasks in order to meet the deadline. As first-time hackers, we had a lot to learn in terms of the technical skills required in app building, such as getting familiar with React as well as JavaScript and the Twilio API.

## What's next for Flutter

By adding levels to the social dares, users will have a greater sense of accomplishment as they progress through the dares and will be motivated to complete more challenging tasks. A point system will allow users to track their progress and see how they are doing compared to their friends. This will add a competitive aspect to the app and encourage users to complete more dares. A function that allows users to challenge a friend will add a social aspect to the app and make it more fun for users to complete dares together. This will also allow users to compete against their friends and see who can complete the most dares. Overall, these new features will make the app more engaging and enjoyable for users.
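A hedged sketch of the scheduled daily dare text using Twilio's Python SDK (message scheduling requires a Messaging Service); the SIDs, send time, and dare wording are placeholders, and the team's own integration may differ.

```python
# Sketch of a scheduled dare text via Twilio message scheduling; SIDs and numbers are
# placeholders. Twilio expects send_at to be a little while in the future, not immediate.
from datetime import datetime, timedelta, timezone
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def schedule_daily_dare(to_number: str, dare: str):
    send_at = datetime.now(timezone.utc).replace(hour=9, minute=0, second=0, microsecond=0)
    if send_at < datetime.now(timezone.utc):
        send_at += timedelta(days=1)  # schedule for the next 9:00 UTC
    return client.messages.create(
        messaging_service_sid="MGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        to=to_number,
        body=f"Today's dare: {dare}. Snap a photo before midnight!",
        schedule_type="fixed",
        send_at=send_at,
    )
```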
## Inspiration This past year, a pandemic unlike any other has resulted in the world coming to a screeching halt. The COVID-19 virus has changed the world and interpersonal relationships as we know them. Seemingly normal occurrences like visiting friends and family, attending on-campus classes, and going to work have become nearly impossible because of the contagious nature of the virus. Even conducting essential activities like medical checkups and grocery shopping is plagued with the danger of getting infected. Despite wearing masks and taking precautions, the number of victims affected by COVID-19 increases every day by about 100,000 people (New York Times & Johns Hopkins University). According to the CDC, not only is the public plagued by the direct effects of the virus, they are also facing mental health issues from the inadvertent isolation caused by social distancing. In this project, we propose a mobile application through which users can network with friends, family, and their communities and trace their exposure to COVID-19. This will enable people to remain safe but also to conduct everyday activities/in-person events with less fear of exposure. ## What it does This application allows every member of society to create an account and update their COVID-19 status with a user-friendly mobile interface. They are also able to connect with friends, family, and anyone they might come in contact with to trace exposure. Every time there is a warning or suspected exposure from their connections (and their connections' connections), the user will get notified. This mobile application will allow users to: * remain safe and reassured in public settings because the app is very clear on who they should social distance from. * more efficiently get tested for COVID-19 because sources of exposure will be conveniently traceable. Below, one can see exactly how this will impact current day-to-day activities and how this will enable more effective COVID-19 management. ## How we built it 1. Before implementation, we first storyboarded our plan for the application on Figma and set up the virtual environment for @protocol (using the Flutter framework, a Docker container, and Android Studio). 2. By using the Flutter framework, we developed the front-end design of the various pages and tracks in our application (login page, connections page, alerts page). 3. Following completion of the basic buttons on each page, we linked the pages to each other, beginning backend development. 4. One aspect that was particularly challenging was implementing an account-saving system so users could come back to the app, but we achieved this by using @protocol. ## Challenges we ran into Our team (for 3 of whom this was their first hackathon) had little to no experience working with virtual environments and frameworks, so the initial configuration process and learning how to use Flutter were time-consuming. In addition, we were all new to mobile app development and to the Dart language that Flutter (the framework we were using) uses, so learning that, especially backend development given the limited documentation, was also challenging. However, we found a few helpful resources and pursued mobile application development (as opposed to the better-documented web application route) because of the ease and accessibility it provides to the user.
## Accomplishments that we're proud of Despite knowing very little about application development, we all persevered and worked together to learn a completely new framework and language. We were able to develop a prototype of a product that could be very useful for so many people and can improve the mental health and vitality of much of society. Our various time zones and budding experience proved to be no barrier in our shared goal of creating a viable app in 40 hours that would help so many people amidst the pandemic! ## What we learned Over the course of implementation, we learned how frameworks operate in mobile application development. Our team learned how to use Flutter and Android Studio to create a compatible application. In addition, our team also learned to use @protocol to implement log-in functionality. ## What's next for COVID Tracer Expanding the capabilities of this application could yield very beneficial outcomes for the public. In the next few months: * We want to expand backend functionality so that the user can manually add other users to their network. * We would also like to configure the Alert function so that it will send push notifications to everyone in the user's network. * We would love to implement a location tracking system which will help identify hotspots of COVID-19 where there is potential exposure based on users and COVID-19 status (positive or negative). This would make "super spreader" events or locations far more unlikely. In addition, public health officials could more effectively place vaccination and testing stations where need is higher. * We would also like to have a more holistic account creation system (collecting more data from users about age, gender, race, etc.) through which we implement data analysis techniques based on geographic and demographic data to spot trends in contagion and infection, and more effectively address socio-economic inequity in healthcare.
## Inspiration Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades. ## What it does Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher's view and the student's view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher's view contains a list of all the students' canvases, while the students can only view the teacher's canvas in addition to their own. An example use case for our application would be a math class where the teacher can put a math problem on their canvas and students can show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling. Students can follow along and, when they want the teacher's attention, click the I'm Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups, and those students can share a whiteboard together to collaborate. ## How we built it * **Backend:** We used Socket.IO to handle the real-time updates of the whiteboard. We also have a Firebase database to store the user accounts and details. * **Frontend:** We used React to create the application and Socket.IO to connect it to the backend. * **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com. ## Challenges we ran into Understanding and planning an architecture for the application. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the functionality was also an issue we faced. ## Accomplishments that we're proud of We were successfully able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we were successfully able to use it in our project. ## What we learned This was the first time we used Socket.IO to handle real-time database connections. We also learned how to create mouse strokes on a canvas in React. ## What's next for Lecturely This product can be useful even past digital schooling, as it can save schools money since they would not have to purchase supplies. Thus it could benefit from building out more features. Currently, Lecturely doesn't support audio, but it is on our roadmap. Until then, classes would still need another piece of software running to handle the audio communication.
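The real-time relay at the heart of this setup can be sketched as follows. This is a hedged illustration using the Python Socket.IO server (the team's actual server may well be written in Node); event names and payload shapes are assumptions:

```python
# Assumed sketch: relay whiteboard strokes to everyone else in a class room.
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, data):
    # data = {"room": "class-101", "role": "student"}  (assumed shape)
    sio.enter_room(sid, data["room"])

@sio.on("stroke")
def stroke(sid, data):
    # data = {"room": ..., "points": [[x, y], ...], "color": "#000"}  (assumed)
    # Re-broadcast the stroke to every other canvas in the same room.
    sio.emit("stroke", data, room=data["room"], skip_sid=sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("0.0.0.0", 5000)), app)
```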
losing
We learned lots.
## Inspiration An important aspect of health is being able to efficiently interact with healthcare professionals in order to get much needed help. Our group aimed to create an all-in-one health tracking app that would fulfill three main use cases. Firstly, it is common for people to misremember the details of symptoms they may have been experiencing when talking to their doctor. If somebody has fallen ill and wishes to track their symptom progression over time, when they go to the doctor, they can have easy access to real-time health stats describing their symptoms. This would ensure efficient and accurate interactions with their doctor. A second use case we wanted to satisfy was the case where the user who owns the account is a caregiver for someone else (e.g. a child, an elderly person). If someone's child is sick, it would be helpful to track their symptoms so that when they go to the doctor, they can accurately describe the progression of their child's symptoms. We wanted to enable this functionality through the use of multiple health "Profiles" that are connected to one account. A third use case we wanted to satisfy was the case where the user would simply like to track their health over time. ## What it does The Figma design satisfies all of our use cases; our implementation satisfies the third use case. ## How we built it One of our group-mates made the Figma design. The Figma is very exhaustive and includes essentially all of the functionality we intended to implement. Another one of our group-mates worked on building out as much of the Figma design as possible using React and Tailwind. Two of our group-mates worked on building out the back-end using Node.js with an Express server. ## Challenges we ran into * We ran into a lot of back-end and dependency issues, which reduced the functionality we could deliver. ## Accomplishments that we're proud of We are proud of how nice our front-end and Figma design look; they give a really good insight into how we wanted all of the functionality to look! Even though we weren't able to deliver all of the functionality we wanted to, we are also proud of the fact that we were able to connect the front-end to a back-end Express server. ## What we learned * On the back-end, we learned a lot about different database storage techniques and also how to set up endpoints on the server side of the code base. * We also learned how to prototype on Figma using components. * We also learned how to handle click events. ## What's next for Harmony Health
## Inspiration Homelessness is a rampant problem in the US, with over half a million people facing homelessness daily. We want to empower these people to be able to have access to relevant information. Our goal is to pioneer technology that prioritizes the needs of displaced persons and tailor software to uniquely address the specific challenges of homelessness. ## What it does Most homeless people have basic cell phones with only calling and SMS capabilities. With kiva, they can use their cell phones to leverage technologies previously accessible only via the internet. Users are able to text the number attached to kiva and interact with our intelligent chatbot to learn about nearby shelters and obtain directions to head to a shelter of their choice. ## How we built it We used freely available APIs such as Twilio and Google Cloud in order to create the beta version of kiva. We search for nearby shelters using the Google Maps API and communicate formatted results to the user's cell phone using Twilio's SMS API. ## Challenges we ran into The biggest challenge was figuring out how to best utilize technology to help those with limited resources. It would be unreasonable to expect our target demographic to own smartphones and be able to download apps off the app market like many other customers would. Rather, we focused on providing a service that would maximize accessibility. Consequently, kiva is an SMS chatbot, as this allows the most users to access our product at the lowest cost. ## Accomplishments that we're proud of We succeeded in creating a minimum viable product that produced results! Our current model allows homeless people to find a list of the nearest shelters and obtain walking directions. We built the infrastructure of kiva to be flexible enough to include additional capabilities (e.g. weather and emergency alerts), thus providing a service that can be easily leveraged and expanded in the future. ## What we learned We learned that intimately understanding the particular needs of your target demographic is important when hacking for social good. Often, it's easier to create a product and then find people who it might apply to, but this is less realistic in philanthropic endeavors. Most applications these days tend to be web focused, but our product is better targeted to people facing homelessness by using SMS capabilities. ## What's next for kiva Currently, kiva provides information on homeless shelters. We hope to be able to refine kiva to let users further customize their requests. In the future kiva should be able to provide information about other basic needs such as food and clothing. Additionally, we would love to see kiva as a crowdsourced information platform where people could mark certain places as shelters to improve our database and build a culture of alleviating homelessness.
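The core lookup-and-reply flow might look roughly like this; a hedged sketch only, with placeholder keys and numbers (kiva's actual backend code is not shown in the write-up):

```python
# Assumed sketch: find nearby shelters with the Google Maps client, reply via Twilio SMS.
import os
import googlemaps
from twilio.rest import Client

gmaps = googlemaps.Client(key=os.environ["GOOGLE_MAPS_KEY"])
twilio = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def shelters_near(lat: float, lng: float, limit: int = 3) -> str:
    """Format the closest shelters as a short, SMS-friendly message."""
    results = gmaps.places_nearby(
        location=(lat, lng), radius=3000, keyword="homeless shelter"
    )
    lines = [
        f"{p['name']} - {p.get('vicinity', 'address unknown')}"
        for p in results.get("results", [])[:limit]
    ]
    return "\n".join(lines) or "No shelters found nearby."

def reply(to_number: str, lat: float, lng: float):
    """Text the formatted shelter list back to the user."""
    twilio.messages.create(
        body=shelters_near(lat, lng),
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=to_number,
    )
```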
losing
## Inspiration Our team focuses extensively on opportunities to ignite change through innovation. In this pursuit, we began to investigate the impact of sign-language glove technology on the hard of hearing community. In this, we learned that despite previous efforts to enhance accessibility, feedback from deaf advocates highlighted a critical gap: earlier technologies often facilitated communication for hearing individuals with the deaf, rather than empowering the deaf to communicate on their terms. After discussing this problem with a friend of ours who faces disabilities related to her hearing, we realized that this problem impacts many people's daily lives, significantly affecting their ability to engage with those around them. By focusing on human centered design and integrating feedback presented in numerous journals, we solve these problems by developing an accessible, easy to use interface that enables the hard of hearing and mute to converse seamlessly. Through integration of a wearable component and a sophisticated LLM, we completely change the landscape of interpersonal communications for the hard of hearing. ## What it does Our solution consists of two components, a wearable glove and a mobile video call interface. The wearable glove is meant to be utilized by a deaf or hard of hearing individual when conversing with another person. This glove, fitted with numerous flex sensors and an Inertial Measurement Unit (IMU), can discern what gloss (term for a word in ASL) the wearer is signing at any moment in time. From here, the data moves to the second component of the solution - the mobile video call interface. Here, the user's signs are converted into both text and speech during a call. The text is displayed on the screen while the speech is stated, ensuring to include emotional cues as picked up by the integrated computer vision model. This effectively helps users communicate with others, especially loved ones, in a manner that accurately represents their intent and emotions. This experience is one that is currently not offered anywhere else on the market. In tandem, both of these technologies enable us to understand body language, emotion, and signs from a user, and also help vocalize the feelings of a person who is hard of hearing. ## How we built it Two vastly different components call for drastically different approaches. However, we needed to ensure that these two approaches still stayed true to the same intent. We first began by identifying our design strategy, based in our problem statement and objective. From here, we moved forward with set goals and milestones. On the hardware side of things, we spent an extensive amount of time in the on-site lab fabricating our prototype. In order to ensure the validity of our design, we researched circuit diagrams and characteristics, ultimately building our own. We performed a variety of tests on this prototype, including practical use testing by taking it around campus while interacting with others. The glove withstood numerous handshakes and even a bit of rain! On the software side, we also had two problems to face - interfacing with the glove, and creating the mobile application. To interface with the glove, we began with the Arduino IDE for testing. After we ensured that our design was functional and gained test data, we moved to a python implementation that sends sensed words up to an API, which can later be accessed by the mobile application. Moving to the mobile application, we utilized SwiftUI for our design. 
From there, we used the Stream API to build a FaceTime-style infrastructure. We iterated between our Figma designs and our prototype to best understand where we could increase capabilities and improve the user experience. ## Challenges we ran into This project was ambitious, and as such, was also chock-full of complications. Initially, we faced extensive challenges on the hardware side. Due to the nature of the design, we had many components trying to draw power or ground from the same source. This added complexity to our manufacturing process, as we had to come up with an innovative solution for a sleek design that maintained functionality. Even after we found our first solution, our prototype was inconsistent due to manufacturing flaws. On the last day, 2 hours before the submission deadline, we completely disassembled and rebuilt our prototype using a new methodology. This proved to be successful, minimizing the issues seen previously and resulting in an amazing product. On the software side, we also pursued ambitious goals that didn't distinctly align with our team's expertise. Due to this, we faced great difficulty when troubleshooting the numerous errors we hit during the initial implementation. This set us back quite extensively, but we were able to successfully recover. ## Accomplishments that we're proud of We are proud of the magnitude of success we were able to show in the short frame of this hackathon. We came in knowing that we had ambitious and lofty goals, but were unsure if we would truly be able to achieve them. Thankfully, we completed this hackathon with a functional, viable MVP that clearly represents our goals and desires for this project. ## What we learned Because of the cross-discipline nature of this project, all of our team members got the opportunity to explore new spaces. Through collaboration, we all learned about these fields and technologies from and with each other, and how we can integrate them into our systems in the future. We also learned about best practices for manufacturing in general. Additionally, we were able to become more comfortable with SwiftUI and with creating our own APIs for our video calling component. These valuable skills shaped our experiences at TreeHacks and will stick with us for many years to come. ## What's next for Show and Tell - Capturing Emotion in Sign Language We hope to continue to pursue this idea and bring independence to the hard of hearing population worldwide. In a market that has underserved the deaf population, we see Show and Tell as the optimal solution for accessibility. In the future, we want to flesh out the hardware prototype further by investing in custom PCBs, streamlining the production process and making it much more professional. Additionally, we want to build out functionality within the video calling app, adding in as many helpful features as possible.
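The glove-to-app bridge described in "How we built it" above (the Python process that reads the sensors and pushes recognized glosses to an API) might be sketched like this; the port name, thresholds, template values, and endpoint URL are all assumptions rather than the team's actual code:

```python
# Assumed sketch: read flex-sensor values over serial, match a gloss, post it to an API.
import serial
import requests

PORT = "/dev/ttyUSB0"                        # assumed serial port
API_URL = "https://example.com/api/signs"    # hypothetical endpoint

# Toy templates: expected flex readings (one value per finger) for each gloss.
GLOSS_TEMPLATES = {
    "HELLO": [200, 180, 190, 185, 170],
    "YES":   [600, 610, 620, 590, 580],
}

def closest_gloss(reading, tolerance=60):
    """Return the first template within tolerance of the reading, else None."""
    for gloss, template in GLOSS_TEMPLATES.items():
        if all(abs(r - t) < tolerance for r, t in zip(reading, template)):
            return gloss
    return None

with serial.Serial(PORT, 9600, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        reading = [int(v) for v in line.split(",")[:5]]
        gloss = closest_gloss(reading)
        if gloss:
            # Push the recognized sign up so the SwiftUI call screen can fetch it.
            requests.post(API_URL, json={"gloss": gloss})
```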
## Inspiration In school, we were given the offer to take a dual enrollment class called Sign Language. A whole class on the subject can be quite time-consuming for most children, and for adults too. If people are interested in learning ASL, they either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> requiring $70-100). Our product provides a cost-effective, time-efficient, and fun experience when learning this unique language. ## What it does Of course, you first have to learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", "Bored", etc. The app makes sure you have formed the letter correctly by displaying a circular progress view showing how long you have to hold the gesture. We provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's time for GAME time :). Test your ability to show a gesture and see how long you can go until you give up. The gamified experience leads to more learning and engagement for children. ## How we built it The product was built using the language Swift. The hand-tracking was done using CoreML components. We used hand landmarks and found distances between all points of the hand. Comparing the distances as they SHOULD be with what they are at a specific time frame helps us figure out whether the hand pose is occurring. For the UI we planned it out using Figma and later wrote the code in Swift. We used the SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account. ## Challenges we ran into There are 26 letters. That's a lot of arrays, comparison statements, and repetitive work. Testing would sometimes become difficult because the iPhone would eventually become hot and show temperature notifications. We only had one phone to test with, so on-phone testing was mostly reserved for the hand landmarks. The project was extremely lengthy, and putting so much content into 36 hours is difficult, so we had to sacrifice sleep. A cockroach in the room. ## Accomplishments that we're proud of The hand landmark detection for a letter actually works much better than expected. Moving your hand super fast does not glitch the system. A fully functional vision app with a clean UI makes the experience fun and open to all people. ## What we learned Quantity < Quality. We created more than 6 functioning pages with different levels of UI quality. It's very noticeable which views were created quickly because of the time crunch. Instead of having so many pages, decreasing the number of pages and maybe adding more content into each View would make the app appear flawless. Comparing the goal array with the current time-frame array is TEDIOUS. So much time is wasted on testing. We could not figure out the action classifier in Swift as there was no basic open-source code. Explaining problems to ChatGPT becomes difficult because the LLM never seems to understand basic tasks, but performs perfectly on complex tasks. Stack Overflow will still be around (for now) if we face problems. ## What's next for Hands-On The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works for iPads and iPhones of any size. Once we fix that problem, we could release the app to the App Store. Since we do not use any API, we would have no expenses related to hosting one.
Making the app public could help people of all ages learn a new language in an interactive manner.
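To make the landmark-distance idea above concrete, here is an illustrative sketch in Python (the team's real implementation is in Swift with CoreML, so treat this as a language-agnostic restatement; the tolerance value is assumed):

```python
# Illustrative sketch: compare distances between hand landmarks against a
# stored template for one letter of the ASL alphabet.
import numpy as np

def pairwise_distances(landmarks):
    """landmarks: (21, 2) array of hand keypoints -> (21, 21) distance matrix."""
    pts = np.asarray(landmarks, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

def matches_letter(landmarks, template_distances, tolerance=0.08):
    """True if the current pose is close enough to the letter's template."""
    d = pairwise_distances(landmarks)
    # Normalize by hand size so distance to the camera doesn't matter.
    d = d / d.max()
    t = template_distances / template_distances.max()
    return np.mean(np.abs(d - t)) < tolerance
```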
## Inspiration We wanted to create a web application that had significant social impact, and we decided that enabling a particular group of people (i.e. deaf people/people hard of hearing) to be more self-reliant was a good starting point. ## What it does Sign2Voice is a web application that is able to recognize American Sign Language gestures and translate them into text, and eventually into speech, bridging the communication gap between deaf and non-deaf individuals. ## How we built it We mainly used JavaScript in our web app to receive input from a client's webcam, make API requests to classify the gestures, and convert the resulting text into audio. We trained a custom model on Google's AutoML Vision for classification, and we used Watson's text-to-speech API to eventually translate it into speech. ## Challenges we ran into We faced a few technical challenges: most notably, it was difficult to train a sufficiently accurate classifier for the gestures, given the limited time and compute power that we had available. Additionally, it was a challenge to get the user interface exactly the way we wanted it to look and perform. ## Accomplishments that we're proud of We're proud of the usability and overall aesthetic of the user interface of our web application. Additionally, we're proud that we achieved a minimum viable product in a short span of time. ## What we learned We learned how to create a good user interface for a web application, learned how to create efficient computer-to-computer interfaces, and picked up a few new technologies (e.g. the Google Cloud SDK). We also learned how to work more effectively with other developers. ## What's next for Sign2Voice In the future, we hope to improve the accuracy and minimize the latency of our gesture classifier. We also plan to do more usability testing to improve the visual and interaction design of our user interface.
partial
## Inspiration When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles with many friends and family members, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone in their daily lives comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body. As a result, we decided to create a user-friendly multi-modal model that can identify skin discomfort from a simple picture. Then, through accessible communication with a dermatologist-like chatbot, users can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or with finding the time to go and wait for a doctor, it is an accessible way to immediately understand the blemishes that appear on one's skin. ## What it does The app is a skin-disease detection tool that identifies diseases from pictures. Through a multi-modal neural network, we attempt to identify the disease by training on thousands of data entries from actual patients. Then, we provide users with information on their disease, recommendations on how to treat it (such as using specific SPF sunscreen or over-the-counter medications), and finally, their nearest pharmacies and hospitals. ## How we built it Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset of roughly 2,000 patients with multiple diseases. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from OpenAI to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality. ## Challenges we ran into The first challenge we faced was finding appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set. Thus, we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We finally chose the ideal number of epochs by plotting the loss-vs.-epoch and accuracy-vs.-epoch graphs. Another challenge was utilizing the free Google Colab TPU, which we resolved by switching between different devices. Last but not least, we had problems with our chatbot outputting random text and hallucinating in response to specific prompts. We fixed this by grounding its output in the information that the user gave. ## Accomplishments that we're proud of We are all proud of the model we trained and put together, as this project had many moving parts.
This experience has had its fair share of learning moments and pivots in direction. However, through a great deal of discussion about exactly how we could adequately address our issue, and by supporting each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about thinking quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other over these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying. ## What we learned One of the things we learned from this experience was how to use prompt engineering effectively and how to ground an AI model in user-provided information. We also learned how to incorporate multi-modal data to be fed into a generalized convolutional and feed-forward neural network. In general, we got more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience by building a comprehensive model like SkinSkan, we were also able to tackle a real-world problem. From learning more about the intricate heterogeneity of various skin conditions to making skincare recommendations, we were able to test our app on our own skin and on several of our friends' using a simple smartphone camera to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people. ## What's next for SkinSkan We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect subtler, milder conditions, SkinSkan will be able to help hundreds of people catch conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world use to direct patients to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
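The image-plus-clinical fusion described above can be sketched roughly as follows; this is a simplified illustration with assumed feature and class counts, not SkinSkan's actual architecture:

```python
# Simplified sketch: an image branch (pretrained ResNet) and a clinical branch
# (small feed-forward net) whose features are concatenated before classification.
import torch
import torch.nn as nn
from torchvision import models

class SkinModel(nn.Module):
    def __init__(self, n_clinical_features: int = 8, n_classes: int = 10):
        super().__init__()
        # Image branch: pretrained ResNet without its final classification layer.
        resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.image_branch = nn.Sequential(*list(resnet.children())[:-1])
        # Clinical branch: feed-forward network over tabular features.
        self.clinical_branch = nn.Sequential(
            nn.Linear(n_clinical_features, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 32, n_classes)

    def forward(self, image, clinical):
        img_feat = self.image_branch(image).flatten(1)   # (B, 512)
        clin_feat = self.clinical_branch(clinical)       # (B, 32)
        return self.head(torch.cat([img_feat, clin_feat], dim=1))

# Smoke test with random tensors standing in for a photo and clinical fields.
model = SkinModel()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
```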
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotions. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essential rite of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as finding our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generate on a daily basis for their peers. We're also planning on integrating a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
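At a very high level, the brainwave-to-art pipeline could be sketched like this. This is a heavily simplified, assumed illustration (the band thresholds, sampling rate, and use of the open `diffusers` weights are our guesses, not the team's code):

```python
# Assumed sketch: EEG band power -> mood keywords -> Stable Diffusion prompt.
import numpy as np
from diffusers import StableDiffusionPipeline

def band_power(samples, fs=200, band=(8, 12)):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

def mood_keywords(samples):
    alpha = band_power(samples, band=(8, 12))   # crude proxy for relaxation
    beta = band_power(samples, band=(13, 30))   # crude proxy for alertness/stress
    return "calm, pastel, flowing water" if alpha > beta else "vivid, electric, storm clouds"

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompt = f"an abstract painting of a mood: {mood_keywords(np.random.randn(2000))}"
pipe(prompt).images[0].save("mood.png")
```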
## Inspiration As students around 16 years old, skin conditions such as acne make us even more self-conscious than we already are. Furthermore, one of our friends is currently suffering from eczema, so we decided to make an app relating to skin care. While brainstorming ideas, we realized that the elderly are affected by more skin conditions than younger people. These skin diseases can easily transform into skin cancer if left unchecked. ## What it does Ewmu is an app that can assist people with various skin conditions. It utilizes machine learning to provide an accurate evaluation of an individual's skin condition. After analyzing the skin, Ewmu returns some topical creams or over-the-counter medications that can alleviate the user's symptoms. ## How we built it We built Ewmu by splitting the project into 3 distinct parts. The first part involved developing and creating the machine learning backend model using Swift and the CoreML framework. This model was trained on datasets from Kaggle.com, from which we procured over 16,000 images of various skin conditions ranging from atopic dermatitis to melanoma. 200 iterations were used to train the ML model, and it achieved over 99% training accuracy, 62% validation accuracy, and 54% testing accuracy. The second part involved deploying the ML model on a Flask backend, which provided an API endpoint for the frontend to call and send the image to. The Flask backend fed the image data to the ML model, which returned the classification and label for the image. The result was then taken to the frontend, where it was displayed. The frontend was built with React.js and many libraries that created a dashboard for the user. In addition, we used libraries to take a photo of the user and then encode that image to a base64 string, which was sent to the Flask backend. ## Challenges we ran into One challenge we ran into was deploying the ML model to a Flask backend because of compatibility issues between Apple and other platforms. Another challenge we ran into was managing state within React while trying to get a still image from the webcam, mapping it over to a base64 encoding, and finally sending it over to the backend Flask server, which then returned a classification. ## Accomplishments that we're proud of * Skin condition classifier ML model + 99% training accuracy + 62% validation accuracy + 54% testing accuracy We're really proud of creating that machine learning model since we are all first-time hackers and hadn't used any ML or AI software tools before, which marked a huge learning experience and milestone for all of us. This includes learning how to use Swift on the day of, and also cobbling together multiple platforms and applications: backend, ML model, frontend. ## What we learned We learned that time management is all too crucial!! We're writing this within the last 5 minutes as we speak LMAO. From the technical side, we learned how to use React.js to build a working and nice UI/UX frontend, along with building a Flask backend that could host our custom-built ML model. The biggest thing we took away from this was being open to new ideas and learning all that we could under such a short time period! * TIL uoft kids love: ~~uwu~~ ## What's next for Ewmu We're planning on allowing dermatologists to connect with their patients on the website. Patients will be able to send photos of their skin condition to doctors.
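The Flask side of that image handoff can be sketched as below; a hedged illustration in which `classify_skin` stands in for the converted model and the route name is assumed:

```python
# Assumed sketch: accept a base64-encoded frame from the React frontend,
# decode it, run the classifier, and return the label as JSON.
import base64
import io
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def classify_skin(image: Image.Image) -> str:
    # Placeholder for the real model call.
    return "atopic dermatitis"

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()
    # Strip an optional "data:image/jpeg;base64," prefix before decoding.
    encoded = data["image"].split(",")[-1]
    image = Image.open(io.BytesIO(base64.b64decode(encoded))).convert("RGB")
    return jsonify({"label": classify_skin(image)})

if __name__ == "__main__":
    app.run(port=5000)
```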
winning
## Inspiration As a team, we've all witnessed the devastation of muscular-degenerative diseases, such as Parkinson's, on the family members of the afflicted. Because we didn't have enough money, resources, or time to research and develop a new drug or other treatment for the disease, we wanted to make the medicine already available as effective as possible. So, we decided to focus on detection; the earlier the victim can recognize the disease and report it to his/her physician, the more effective the treatments we have become. ## What it does HandyTrack uses three tests: a Flex Test, which tests the ability of the user to bend their fingers into a fist, a Release Test, which tests the user's speed in releasing the fist, and a Tremor Test, which measures the user's hand stability. All three of these tests are stored and used to, over time, look for trends that may indicate symptoms of Parkinson's: a decrease in muscle strength and endurance (ability to make a fist), an increase in time spent releasing the fist (muscle stiffness), and an increase in hand tremors. ## How we built it For the software, we built the entirety of the application in the Arduino IDE using C++. As for the hardware, we used 4 continuous rotation servo motors, an Arduino Uno, an accelerometer, a microSD card, a flex sensor, and an absolute abundance of wires. We also used a 3D printer to make some rings for the users to put their individual fingers in. The 4 continuous rotation servos were used to provide resistance against the user's hands. The flex sensor, which is attached to the user's palm, is used to control the servos; the more bent the sensor is, the faster the servo rotation. The flex sensor is also used to measure the time it takes for the user to release the fist, a.k.a. the time it takes for the sensor to return to its original position. The accelerometer is used to detect changes in the user's hand position, and changes in that position represent the user's hand tremors. All of this data is sent to the SD card, which in turn allows us to review trends over time. ## Challenges we ran into Calibration was a real pain in the butt. Every time we changed the circuit, the flex sensor values would change. Also, developing accurate algorithms for the functions we wanted to write was kind of difficult. Time was a challenge as well; we had to stay up all night to put out a finished product. Also, because the hack is so hardware-intensive, we only had one person working on the code for most of the time, which really limited our options for front-end development. If we had an extra team member, we probably could have made a much more user-friendly application that looks quite a bit cleaner. ## Accomplishments that we're proud of Honestly, we're happy that we got all of our functions running. It's kind of difficult only having one person code for most of the time. Also, we think our hardware is on point. We mostly used cheap products and Arduino parts, yet we were able to make a device that can help users detect symptoms of muscular-degenerative diseases. ## What we learned We learned that we should always have a person dedicated to front-end development, because no matter how functional a program is, it also needs to be easily navigable. ## What's next for HandyTrack Well, we obviously need to make a much more user-friendly app.
We would also want to create a database to store the values of multiple users, so that we can not only track individual users but also build a dataset of our own, using trends across different users to compare against an individual and create more accurate diagnostics. A rough sketch of what that trend analysis could look like is shown below.
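This is a small, assumed sketch of the trend idea (the column names and CSV log format are placeholders): fit a line to each metric logged from the tests and flag any metric drifting in the symptomatic direction.

```python
# Assumed sketch: flag worsening trends in HandyTrack's logged test results.
import numpy as np
import pandas as pd

def worsening_trends(csv_path="handytrack_log.csv"):
    """Return {metric: True/False} for metrics drifting the 'wrong' way."""
    df = pd.read_csv(csv_path)  # assumed columns: day, grip_strength, release_time, tremor
    flags = {}
    for column, bad_direction in [("grip_strength", -1), ("release_time", +1), ("tremor", +1)]:
        slope = np.polyfit(df["day"], df[column], 1)[0]   # linear trend over time
        flags[column] = (slope * bad_direction) > 0
    return flags

print(worsening_trends())
```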
## Inspiration Sign language is already difficult to learn; adding on the difficulty of learning movements from static online pictures makes it next to impossible to do without help. We came up with an elegant robotic solution to remedy this problem. ## What it does Handy Signbot is a tool that translates voice to sign language, displayed using a set of prosthetic arms. It is a multipurpose sign language device with uses such as: a teaching model for new students, a voice-to-sign translator for live events, or simply a communication device between voice and sign. ## How we built it **Physical**: The hand is built from 3D printed parts and is controlled by several servos and pulleys. Those are in turn controlled by Arduinos, housing all the calculations that allow for finger control and semi-spherical XYZ movement in the arm. The entire setup is enclosed and protected by a wooden frame. **Software**: The bulk of the movement control is written in NodeJS, using the Johnny-Five library for servo control. Voice to text is processed using the Nuance API, and text to sign is created with our own database of sign movements. ## Challenges we ran into The Nuance library was not something we had worked with before, and it took plenty of trial and error before we could eventually implement it. Other difficulties included successfully developing a database, and learning to recycle movements to create more with higher efficiency. ## Accomplishments that we're proud of From calculating inverse trigonometry to processing audio, several areas had to work together for anything to work at all. We are proud that we were able to successfully combine so many different parts together for one big project. ## What we learned We learned about the importance of teamwork and friendship :) ## What's next for Handy Signbot -Creating a smaller-scale model that is more realistic for a home environment, and significantly reducing cost at the same time. -Reimplementing the LeapMotion to train the model for an increased vocabulary and different accents (did you know you can have an accent in sign language too?).
## Inspiration The team has exposure to senior family members who are currently battling mild Carpal Tunnel Syndrome. Current medical prevention methods involve extended use of a splint, forcing stationary positions. We decided to find a less restrictive solution based on the principle of raising awareness while retaining freedom of movement. ## What it does The Smart Brace is a medical aid device that serves as both a preventative and a corrective measure for Carpal Tunnel Syndrome. Preventative for users who have exhibited initial symptoms but have not been diagnosed with the syndrome; corrective for those who have been diagnosed and are working towards a more ergonomic wrist posture. ## How I built it The sensor measurements are collected and computed by an Arduino Nano 33 IoT (gyroscope, accelerometer, vibration motor). The calculated values are streamed to the Firebase Realtime Database, from where the React Native mobile application renders the data in meaningful visualizations for the user. ## Challenges I ran into The two primary challenges our team faced were streaming serial data from the Arduino to Firebase, and working with React Native due to our lack of experience with the framework. ## Accomplishments that I'm proud of Overcoming obstacles, learning new technologies, and finishing a project with a high degree of accuracy. ## What I learned Technologies learned: Firebase, Firebase-Arduino communication, React Native, GitHub. Research: the effect of various wrist angles on Carpal Tunnel Syndrome. ## What's next for Smart Brace We are aiming to make the Smart Brace more user-friendly by converting the proof of concept to operate with smart wearable technology. Furthermore, we hope to increase the accuracy of our research, add more visualizations, and offer a better UX by improving the mobile application.
winning
## Inspiration The inspiration for this project was a group-wide understanding that trying to scroll through a feed while your hands are dirty or in use is near impossible. We wanted to create a computer program to allow us to scroll through windows without coming into contact with the computer, for eating, chores, or any other time when you do not want to touch your computer. This idea evolved into moving the cursor around the screen and interacting with a computer window hands-free, making boring tasks, such as chores, more interesting and fun. ## What it does HandsFree allows users to control their computer without touching it. By tilting their head, moving their nose, or opening their mouth, the user can control scrolling, clicking, and cursor movement. This allows users to use their device while doing other things with their hands, such as doing chores around the house. Because HandsFree gives users complete **touchless** control, they're able to scroll through social media, like posts, and do other tasks on their device, even when their hands are full. ## How we built it We used a DLib face feature tracking model to compare some parts of the face with others when the face moves around. To determine whether the user was staring at the screen, we compared the distance from the edge of the left eye to the left edge of the face with the distance from the edge of the right eye to the right edge of the face. We noticed that one of the distances was noticeably bigger than the other when the user has a tilted head. Once the distance on one side was larger by a certain amount, the scroll feature was disabled, and the user would get a message saying "not looking at camera." To determine which way and when to scroll the page, we compared the left edge of the face with the face's right edge. When the right edge was significantly higher than the left edge, the page would scroll up. When the left edge was significantly higher than the right edge, the page would scroll down. If both edges had around the same Y coordinate, the page wouldn't scroll at all. To determine the cursor movement, we tracked the tip of the nose. We created an adjustable bounding box in the center of the user's face (based on the average values of the edges of the face). Whenever the nose left the box, the cursor would move at a constant speed in the direction of the nose's position relative to the center. To determine a click, we compared the top lip Y coordinate to the bottom lip Y coordinate. Whenever they moved apart by a certain distance, a click was activated. To reset the program, the user can look away from the camera, so the program can't track a face anymore. This will reset the cursor to the middle of the screen. For the GUI, we used the Tkinter module, an interface to the Tk GUI toolkit in Python, to generate the application's front-end interface. The tutorial site was built using simple HTML & CSS. ## Challenges we ran into We ran into several problems while working on this project. For example, we had trouble developing a system for judging whether a face has changed enough to move the cursor or scroll through the screen, calibrating the system and movements for different faces, and users not being able to tell whether their faces were balanced. We spent a lot of time looking into various mathematical relationships between the different points of someone's face. Next, to handle the calibration, we ran large numbers of tests, using different faces, distances from the screen, and angles of the face to the screen.
To counter the last challenge, we added a box feature to the window displaying the user's face to visualize the distance they need to move to move the cursor. We used the calibrating tests to come up with default values for this box, but we made customizable constants so users can set their boxes according to their preferences. Users can also customize the scroll speed and mouse movement speed to their own liking. ## Accomplishments that we're proud of We are proud that we could create a finished product and expand on our idea *more* than what we had originally planned. Additionally, this project worked much better than expected and using it felt like a super power. ## What we learned We learned how to use facial recognition libraries in Python, how they work, and how they’re implemented. For some of us, this was our first experience with OpenCV, so it was interesting to create something new on the spot. Additionally, we learned how to use many new python libraries, and some of us learned about Python class structures. ## What's next for HandsFree The next step is getting this software on mobile. Of course, most users use social media on their phones, so porting this over to Android and iOS is the natural next step. This would reach a much wider audience, and allow for users to use this service across many different devices. Additionally, implementing this technology as a Chrome extension would make HandsFree more widely accessible.
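A condensed sketch of the landmark logic described above looks roughly like this; it assumes dlib's 68-point predictor file and hand-tuned pixel thresholds, so it is an illustration rather than the team's exact code:

```python
# Assumed sketch: head tilt -> scroll, mouth open -> click, using dlib + pyautogui.
import cv2
import dlib
import pyautogui

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

TILT_THRESHOLD = 15    # jaw-edge height difference in pixels (tuned by hand)
MOUTH_THRESHOLD = 18   # lip gap in pixels that counts as a click

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        left_jaw, right_jaw = pts.part(0), pts.part(16)   # left/right face edges
        top_lip, bottom_lip = pts.part(62), pts.part(66)  # inner lip midpoints

        # Head tilt -> scroll (image y grows downward, so a smaller y is "higher").
        if right_jaw.y - left_jaw.y > TILT_THRESHOLD:     # left edge higher
            pyautogui.scroll(-40)                         # scroll down
        elif left_jaw.y - right_jaw.y > TILT_THRESHOLD:   # right edge higher
            pyautogui.scroll(40)                          # scroll up

        # Mouth open -> click.
        if bottom_lip.y - top_lip.y > MOUTH_THRESHOLD:
            pyautogui.click()
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```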
## Inspiration We were looking at the Apple Magic Trackpads last week since they seemed pretty cool. But then we saw the price tag, $130! That's crazy! So we set out to create a college-student-budget-friendly "magic" trackpad. ## What it does Papyr is a trackpad for your computer that is just a single sheet of paper, with no wires, strings, or pressure-detecting devices attached. Papyr allows you to browse the computer just like any other trackpad and supports clicking and scrolling. ## How we built it We use a webcam and a whole lot of computer vision to make the magic happen. The webcam first calibrates itself by detecting the four corners of the paper and maps every point on the sheet to a location on the actual screen. Our program then tracks the finger on the sheet by analyzing the video stream in real time, frame by frame: blurring, thresholding, performing Canny edge detection, then detecting the contours in the final result. The furthest point in the hand's contour corresponds to the user's fingertip and is translated into both movement and actions on the computer screen. Clicking is detailed in the next section, and scrolling is activated by double clicking. ## Challenges we ran into Light sensitivity proved to be very challenging since, depending on the environment, the webcam would sometimes have trouble tracking our fingers. However, finding a way to detect clicking was by far the most difficult part of the project. The problem is the webcam has no sense of depth perception: it sees each frame as a 2D image, and as a result there is no way to detect if your hand is on or off the paper. We turned to the Internet hoping for some previous work that would guide us in the right direction, but everything we found required either glass panels, infrared sensors, or other non-college-student-budget-friendly hardware. We were on our own. We made many attempts, including: having the user press down very hard on the paper so that their skin would turn white and detecting this change of color; and tracking the shadow the user's finger makes on the paper and detecting when the shadow disappears, which occurs when the user places their finger on the paper. None of these methods proved fruitful, so we sat down and for the better part of 5 hours thought about how to solve this issue. Finally, what worked for us was to track the "hand pixel" changes across several frames to detect a valid sequence that can qualify as a "click". Given the 2D image perception of our webcam, it was no easy task, and a lot of experimentation went into this. ## Accomplishments that we're proud of We are extremely proud of getting clicking to work. It was no easy feat. We also developed our own algorithms for fingertip tracking and click detection and wrote the code from scratch. We set out to create a cheap trackpad and we were able to. In the end we transformed a piece of paper, something that is portable and available nearly anywhere, into a makeshift high-tech device with only the help of a standard webcam. Also, one of the team members was able to win a ranked game of Hearthstone using a piece of paper, so that was cool (not the match shown in the video). ## What we learned From normalizing the environment's lighting and getting rid of surrounding noise to coming up with the algorithm to provide depth perception to a 2D camera, this project taught us a great deal about computer vision.
We also learned about efficiency and scalability, since numerous calculations need to be made each second to analyze each frame and everything going on in it. ## What's next for Papyr - A Paper TrackPad We would like to improve the accuracy and stability of Papyr. This would allow Papyr to function as a very cheap replacement for Wacom digital tablets. Papyr already supports various "pointers" such as fingers or pens.
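A toy sketch of the calibration and fingertip steps described above might look like the following; the corner coordinates, screen resolution, and "hand enters from the bottom" assumption are all placeholders rather than Papyr's actual code:

```python
# Assumed sketch: map paper corners to screen pixels with a homography, then
# treat the topmost point of the largest contour as the fingertip.
import cv2
import numpy as np

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

# Four paper corners found during calibration (pixel coords in the webcam frame).
paper_corners = np.float32([[102, 88], [538, 95], [552, 410], [96, 402]])
screen_corners = np.float32([[0, 0], [SCREEN_W, 0], [SCREEN_W, SCREEN_H], [0, SCREEN_H]])
homography = cv2.getPerspectiveTransform(paper_corners, screen_corners)

def fingertip_to_screen(frame):
    """Return the cursor position on screen, or None if no finger is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    _, thresh = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    edges = cv2.Canny(thresh, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    # Assume the arm enters from the bottom, so the topmost contour point is the fingertip.
    tip = min(hand.reshape(-1, 2), key=lambda p: p[1])
    mapped = cv2.perspectiveTransform(np.float32([[tip]]), homography)[0][0]
    return int(mapped[0]), int(mapped[1])
```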
## Inspiration There's a growing need for more intuitive and accessible web interfaces. Traditional methods like keyboards and mice are limiting and cumbersome for individuals with disabilities or in situations where hands-free interaction is desired. ## What it does myEye leverages computer vision and AdHawk's eye-tracking software to create glasses that assist you with your digital needs. From basic scrolling to selecting YouTube videos to playing whole video games, myEye customizations automate and enhance your digital experiences by providing an "invisible" mouse and keyboard. myEye simulates the mouse through your gaze, and keyboard commands through hand gestures. ## How we built it myEye leverages AdHawk's MindLink glasses for tracking eye movements (gaze, IMU, etc.), and the actual software was built in Python. We are using AdHawk's Python SDK to interface with the device and stream data. We used MediaPipe and TensorFlow for the computer vision interface, creating commands from just your hand movements; NumPy for data manipulation and calculations; PyAutoGUI to map gestures to keybinds; and many more packages to bring this platform together. ## Challenges we ran into The biggest challenge we ran into was transforming the vectors created by the eye-tracking software into coordinates on the screen that the viewer is looking at. Since the camera is on the right side of the glasses and the eye tracking is in the middle, we had to shift all of our vectors so that they match the camera, as well as convert the vectors into coordinates. Generating these coordinates was really difficult and involved complex linear algebra calculations to scale and orient the mappings. ## Accomplishments that we're proud of Bypassing AdHawk's secure camera encryption. Connecting all components together into a fast and feasible replacement for your digital accessories. We experimented with various APIs to implement gaze tracking, screen detection, gesture controls, and more. However, we found classic overlays, contouring, and self-serve, on-device ML provided faster processing. In the end, the long math equations and model setups were so worth it. ## What we learned We increased our knowledge of game development by applying concepts like ray tracing in order to provide accurate gaze estimation for the eye tracker. We learned about AdHawk's eye tracking systems, various Python libraries, and optimization techniques. ## What's next for myEye myEye plans to continue to build more features to enter a streamlined, more exciting world powered by just our thoughts!
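One workable way to handle the vector-to-screen problem described above is a small least-squares calibration; this is our own stripped-down sketch (the sample values and affine form are assumptions, not AdHawk's SDK output):

```python
# Assumed sketch: fit an affine map from gaze vectors to screen pixels using a
# few calibration points, then move the cursor with pyautogui.
import numpy as np
import pyautogui

# During calibration the user stares at known screen points while gaze vectors are recorded.
gaze_samples = np.array([[-0.21, 0.14], [0.20, 0.15], [-0.22, -0.12], [0.19, -0.13]])
screen_points = np.array([[100, 100], [1820, 100], [100, 980], [1820, 980]])

# Solve screen = [gaze_x, gaze_y, 1] @ A in the least-squares sense.
design = np.hstack([gaze_samples, np.ones((len(gaze_samples), 1))])
affine, *_ = np.linalg.lstsq(design, screen_points, rcond=None)

def gaze_to_cursor(gaze_x: float, gaze_y: float):
    """Map one gaze vector through the fitted affine transform and move the cursor."""
    x, y = np.array([gaze_x, gaze_y, 1.0]) @ affine
    pyautogui.moveTo(int(x), int(y))

gaze_to_cursor(0.05, 0.02)
```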
winning
## Summary OrganSafe is a revolutionary web application that tackles the growing health & security problem of the black marketing of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation of such a pivotal resource. ## Inspiration The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem, which impacts thousands of people every year who are struggling to find a donor for a significantly needed transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place! ## What it does OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users can start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms will automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive their donation. ## How we built it This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity+Web3.js for the Ethereum Blockchain. ## Challenges we ran into Some of the biggest challenges we ran into involved connecting the different components of our project. We had three major components: the frontend, the backend, and the blockchain, which were developed separately and needed to be integrated together. This turned out to be the biggest hurdle in our project that we needed to figure out. Dealing with the API endpoints and Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints. Without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to facilitate the user experience. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we dealt with (such as using Hooks and state in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology! ## Accomplishments that we're proud of One notable accomplishment is that every member of our group interfaced with new technology that they had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned * Fullstack Web Development (with React.js frontend development and Python Flask backend development) * Web3.0 & Security (with Solidity & Ethereum Blockchain) ## What's next for OrganSafe After TreeHacks, OrganSafe will first look to tackle some of the potential areas that we did not get to finish during the time of the Hackathon. Our first step would be to finish development of the full stack web application that we intended by fleshing out our backend and moving forward from there. Persistent user data in a data base would also allow users and donors to continue to use the site even after an individual session. Furthermore, scaling both the site and the blockchain for the application would allow greater usage by a larger audience, allowing more recipients to be matched with donors.
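As a minimal sketch of the priority-based matching idea described above, here is what a Flask endpoint for it could look like. This is an illustrative assumption, not OrganSafe's actual backend; the route name, fields, and in-memory store are hypothetical.

```python
# Minimal sketch (not OrganSafe's code) of a Flask endpoint that matches a newly
# available organ to the registered recipient with the highest qualifying priority.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Illustrative in-memory store; the real app intended persistent storage.
recipients = [
    {"id": 1, "organ": "kidney", "priority": 8},
    {"id": 2, "organ": "kidney", "priority": 5},
    {"id": 3, "organ": "liver", "priority": 9},
]


@app.route("/match", methods=["POST"])
def match():
    organ = request.get_json()["organ"]
    candidates = [r for r in recipients if r["organ"] == organ]
    if not candidates:
        return jsonify({"match": None}), 404
    best = max(candidates, key=lambda r: r["priority"])  # highest priority wins
    return jsonify({"match": best})


if __name__ == "__main__":
    app.run(debug=True)
```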
Discord:

* spicedGG#1470
* jeremy#1472

![patientport logo](https://i.imgur.com/qWsX4Yw.png)

## 💡 Inspiration

As healthcare becomes more interconnected and advanced, patients and healthcare providers will always have to worry about data breaches and the misuse of private information. While healthcare facilities move their databases to third-party providers (Amazon, Google, Microsoft), patients become further distanced from accessing their own medical record history, and the entire infrastructure of healthcare networks is significantly at risk and threatened by malicious actors. Even a single damaging attack on a centralized storage solution can expose a great deal of sensitive data. To combat this risk, we created Patientport as a decentralized and secure solution for patients to easily view the requests for their medical records and take action on them.

## 💻 What it does

Patientport is a decentralized, secure, and open medical record solution. It is built on the Ethereum blockchain and securely stores all of your medical record requests, responses, and exchanges through smart contracts. Your medical data is encrypted and stored on the blockchain.

By accessing the web application online through <patientport.tech>, the patient can gain access to all these features. First, on the website, the patient authenticates to the blockchain via MetaMask and provides the contract address that was given to them by their primary care provider. Once they complete these two steps, they can view all requests made about their medical record by viewing their "patientport" smart contract stored on the blockchain.

For demo purposes, the instance of the Ethereum blockchain that the application connects to is hosted locally. However, anyone can compile and deploy the smart contracts on the Ethereum mainnet and connect to our web app.

## ⚙️ How we built it

| **Application** | **Purpose** |
| --- | --- |
| React, React Router, Chakra UI | Front-end web application |
| Ethers, Solidity, MetaMask | Blockchain, smart contracts |
| Netlify | Hosting |
| Figma, undraw.co | Design |

## 🧠 Challenges we ran into

* Implementation of the blockchain and smart contracts was very difficult, especially since the web3.js API was incompatible with the latest version of React, so we had to switch to a new, unfamiliar library, ethers.
* We ran into many bugs and unfamiliar behavior when coding the smart contracts in Solidity due to our lack of experience with it.
* Despite our goals and aspirations for the project, we had to settle for building a viable product quickly within the timeframe.

## 🏅 Accomplishments that we're proud of

* Implementing a working and functioning prototype of our idea
* Designing and developing a minimalist and clean user interface through a new UI library and reusable components with an integrated design
* Working closely with Solidity and MetaMask to make an application that interfaces directly with the Ethereum blockchain
* Creating and deploying smart contracts that communicate with each other and store patient data securely

## 📖 What we learned

* How to work with the blockchain and smart contracts to make decentralized transactions that can accurately record and encrypt/decrypt transactions
* How to work together and collaborate with developers in a remote environment via GitHub
* How to use React to develop a fully-featured web application that users can access and interact with

## 🚀 What's next for patientport

* Implementing more features, data, and information into patientport via a more robust smart contract and blockchain connections
* Developing a solution for medical professionals to handle their patients' data with patientport through a simplified interface to the blockchain wallet
Pondir is a social media platform that helps entrepreneurs connect with each other and with potential investors. Through five key tools in the platform aimed at different stages of a startup, aspiring entrepreneurs can use the social platform to expand their network, acquire local mentors and talent, and find interested investors.

There are millions of aspiring entrepreneurs, but very few act on their ambitions. There are several common reasons why entrepreneurs often fail when they first start out: lack of experience, lack of a network, lack of knowledge, or simple blind faith in their product. Through our platform, users get feedback about their startup regardless of what stage it is in, which is especially useful for beginners. More experienced entrepreneurs still gain value from the site by using it as a platform to attract investors they would never have had a chance to meet in person. Investors gain value from being able to access detailed startup profiles and, if interested in any particular startup, receive a chance to contact the founders to discuss further.

Built using Bootstrap, HTML, CSS, and JavaScript with a back end using MongoDB and Node.js, we had issues with integration and learned that hosting isn't a straightforward process. However, we are proud to have a clean frontend with good visuals and a detailed prototype of future features built in Proto.io.
winning
## Inspiration ## What it does ## How we built it ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for Clout-Jar
## Inspiration ## What it does.. ## How we built it ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for .
## Inspiration

Whether we're attending general staff meetings or getting together with classmates to work on a group project, no one *ever* wants to be the note taker. You can't pay as much attention to the meeting since you have to concentrate on taking notes, and when meetings get long, it's often difficult to maintain the attention required to take diligent notes. We wanted a way to stay more focused on the meeting rather than the memo. What if we could utilize the growing smart speaker market in conjunction with NLP algorithms to create comprehensive notes with code?

## What it does

Scribblr takes notes for you! Not only that, but it automatically sends an e-blast to meeting attendees at the end and adds discussed deadlines and upcoming meetings to your calendar. Just tell Alexa to start the meeting. Carry out the meeting as normal and, when you're done, tell Alexa the meeting is over. She will immediately begin to create a summary of your meeting, making note of the most important discussion points:

* Approaching deadlines and due dates
* Newly scheduled meetings
* The most important decisions that were made
* A short paragraph summary of the overall meeting

Additionally, anything discussed during the meeting associated with a date will be summarized into a calendar event:

* Date and time of the event
* Title of the event/task
* Important topics associated with the event/task

When the meeting is over, those who participated in the meeting will receive an email with the automated Alexa notes, and all calendar events will be added to the official company calendar. Why take meeting notes when Alexa can do it for you?

## How it works

The hack begins with an **Alexa skill**. We created a custom Alexa skill that allows the user to start and stop the meeting without skipping a beat. No more asking who is willing to take notes or hoping that the note-taker can keep up with the fast pace -- just tell Alexa to start the meeting and carry on as normal. The meeting is then assigned a unique access code that is transmitted to our server via an **AWS Lambda function**, which initiates the audio recording. Upon completion of a meeting, Alexa makes a request to the server to transcribe the text using the **IBM Watson Speech to Text API**.

But at the core of Scribblr are its **Natural Language Processing (NLP)** algorithms:

* The final transcript is first preprocessed, involving tokenization, stemming, and automated punctuation. Automated punctuation is accomplished using **supervised machine learning**, entailing a **recurrent neural network model** trained on over 40 million words.
* The Transcript Analyzer then integrates with the **IBM Watson Natural Language Understanding API** to detect keywords, topics, and concepts in order to determine the overarching theme of a meeting. We analyze the connections between these three categories to determine the most important topics discussed during the meeting, which are later added to the email summary.
* We also isolate dates and times to be added to the calendar. When a date or time is isolated, the NLP algorithms search the surrounding text to determine an appropriate title as well as key points. Even keywords such as "today", "tomorrow", and "noon" are identified and appropriately extracted.
* Action items are isolated by searching for keywords in the transcript, and these action items are processed by performing **POS tagging**, facilitated by a trained **machine-learning** module, and are ultimately appended to the final meeting summary of the most important points discussed.

## Challenges we had

There were a lot of moving parts to this hack. Many of the programs we used had dependencies that were incompatible or didn't have the functionality we needed. We often struggled to work around these conflicting dependencies and had to completely change our approach to the problem. The hardest part was bringing everything together as a singular product -- making the pieces "talk" to each other, so to speak. Since most of our code had to run on a server and we interfaced with a number of APIs, we had to manage multiple sets of credentials and deal with security measures through Google, Amazon, and IBM Watson. What complicated this even more was that we were all developing on different machines, so what worked on one computer would fail on another, and we had to work together to identify the tree of dependencies for each piece of the project.

## Accomplishments we're proud of

It works! We worked down to the wire, troubleshooting compatibility issues all night, and actually got all four very distinct components to work together seamlessly. This is something we would actually use in our daily lives, at club meetings, study groups, class project meetings, and staff meetings.

## Things we learned

We learned a ton about NLP algorithms (as well as their limitations) and how to connect different pieces of a software system to a central server (which was arguably one of the hardest things we had to do, meaning we learned the most here). We also delved deeper into AWS Alexa: integrating Lambda functions, connecting to third-party applications, and publishing a skill.

## What's next for Scribblr

We would like to add more functionality for commands during the meeting, such as updating emails, publishing to more than just a central calendar, pausing the meeting, and controlling remote devices directly using Alexa instead of having to go through a server to do so.
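To make the date/time isolation step concrete, here is a small sketch using spaCy's named-entity recognizer. spaCy is not Scribblr's stated stack (which builds on IBM Watson and custom algorithms), so this is only an illustration of the idea under that assumption.

```python
# Illustrative sketch (not Scribblr's code): pull dates/times and their surrounding
# context out of a meeting transcript with spaCy's named-entity recognizer.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

transcript = ("Let's ship the landing page by next Friday. "
              "We'll meet again on March 3rd at noon to review the demo.")

doc = nlp(transcript)
for ent in doc.ents:
    if ent.label_ in ("DATE", "TIME"):
        # Use the containing sentence as a rough event title / description.
        print(f"{ent.label_:>5}: {ent.text:<12} -> {ent.sent.text.strip()}")
```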
losing
## Inspiration

Star Wars inspired us.

## What it does

The BB-8 droid navigates its environment based on its user's gaze. The user simply has to change their gaze, and BB-8 will follow their eyes.

## How we built it

We built it with an RC car kit placed inside a styrofoam sphere. The RC car was built around an Arduino and a Raspberry Pi. The Arduino controlled the motor controllers, while the Raspberry Pi acted as a Bluetooth module and sent commands to the Arduino. A separate laptop was used with eye-tracking glasses from AdHawk to send data to the Raspberry Pi.

## Challenges we ran into

We ran into issues with the magnets not being strong enough to keep BB-8's head on. We also found that BB-8 was too top-heavy, and the RC car on the inside would sometimes roll over, causing the motors to spin out and stop moving the droid.

## Accomplishments that we're proud of

We are proud of being able to properly develop the communication stack with Bluetooth. It was complex connecting the Arduino -> Raspberry Pi -> Computer -> Eye-tracking glasses.

## What we learned

We learned about AdHawk's eye-tracking systems and developed a mechanical apparatus for BB-8.

## What's next for BB8 Droid

We would like to find stronger magnets to keep BB-8's head attached.
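As a rough sketch of the relay step in that communication stack, here is what the Raspberry Pi side could look like: translate a direction into a single-character drive command for the Arduino. The serial port, baud rate, and command characters are assumptions for illustration; the team's actual link and protocol may differ.

```python
# Hypothetical sketch of the Raspberry Pi relay: forward a one-character drive
# command to the Arduino motor controller over a serial link (requires pyserial).
import serial

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # assumed port and baud rate

COMMANDS = {"forward": b"F", "back": b"B", "left": b"L", "right": b"R", "stop": b"S"}


def drive(direction: str) -> None:
    """Send one drive command; unknown directions fall back to stop."""
    arduino.write(COMMANDS.get(direction, b"S"))


drive("forward")
```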
## Inspiration

We wanted to make something that linked the virtual and real worlds, but in a quirky way. Our team had people who wanted to build robots and people who wanted to make games, so we decided to combine the two.

## What it does

Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task of guiding Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal.

## How we built it

Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous-rotation servo motors and a USB battery allow Todd to travel in all directions. Todd receives communications from a custom-built Todd-Controller^TM that provides 4-way directional control via a pair of Bluetooth HC-05 modules. Our Todd-Controller^TM (built with another Arduino, four pull-down buttons, and a Bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world.

## Challenges we ran into

The first challenge of the many that we ran into on this "arduinous" journey was having two Arduinos send messages to each other over the Bluetooth wireless network. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rate were the same, and then syncing the two with different code to echo messages back and forth.

The second challenge was building Todd, the clean wiring of which proved rather difficult when trying to prevent loose wires from hindering Todd's motion.

The third challenge was building the Unity app itself. Collision detection was an issue at times because, if movements were imprecise or we collided at a weird corner, our object would fly up in the air and cause very strange behavior. So, we resorted to restraining the movement of the player to certain axes. Additionally, we had to make sure the scene looked nice by having good lighting and a pleasant camera view. We tried out many different combinations until we decided that a top-down view of the scene was the optimal choice. Because time was limited and we wanted the game to look good, we looked into free assets (models and textures only) and used them to our advantage.

The fourth challenge was establishing clear communication between Unity and the Arduino. We used an interface that went through the computer's serial port to connect the controller Arduino with the Unity engine. The challenge was that Unity and the controller had to communicate strings through the same serial port. It was as if two people were using the same phone line for different calls: we had to make sure that when one was talking, the other one was listening, and vice versa.

## Accomplishments that we're proud of

The biggest accomplishment of this project, in our eyes, was the fact that when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops. Additionally, the margin of error between the real and virtual Todd's movements was lower than 3%, which significantly surpassed our original expectations of this project's accuracy and goes to show that our vision of a real-world game with virtual obstacles is achievable.

## What we learned

We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating via Bluetooth between Arduinos and having Unity talk to a microcontroller via serial was a very educational experience.

## What's next for Todd: The Inter-dimensional Bot

Todd? When Todd escapes from this limiting world, he will enter a hackathon and program his own Unity/Arduino-based mastery.
## Inspiration

Both Karthik and I experimented with fancy LED circuitry in high school. He wired up a poster board with a bunch of LEDs arranged in the shapes of letters, and I created a 6x26 LED matrix display that could handle things like scrolling text. After perusing the internet, we came across the persistence of vision display, which seemed to be the logical next step in our hardware development journey.

## What it does

The persistence of vision display creates an optical illusion by blinking LEDs very quickly while rotating them along an axis. In reality it is just a rotating set of modulating lights, but our eyes blend these discrete images into a single image.

## How I built it

We used many electronic components and mechanical parts, including perfboard, LEDs, wire, resistors, a DC motor, wood, magnets, batteries, an MSP430 chip, and a hall effect sensor. We soldered the LEDs and the MSP430 to the perfboard and connected them through the 16 output ports. We had 15 LEDs, so the 16th port was taken up by the hall effect sensor. In order to mount this display to the motor, we drilled two holes in the perfboard at measured points and attached an elaborate shaft collar with a set screw. Since the motor would be rotating at ridiculous speeds, we wanted to safely mount the device to something, which we chose to be a plank of wood that could be clamped down onto a flat surface. Now the device was spinning but displaying lights that were somewhat nonsensical, so we decided to add a hall effect sensor with two magnets positioned on the wooden plank so that the device would know when it had made a complete rotation, an aspect we could use to display exactly one frame. The project was coded in Energia, which is quite close to C.

## Challenges I ran into

The project was actually much more complicated when we dissected it into parts. Every stage of this project had its unique problems, such as the mounting and finding the correct motor. However, one major issue was having a power supply for the LEDs and the MSP430 to run the code. It had to rotate with the rest of the rig, so we quickly realized that a coin cell battery would be a good choice since they are light and easy to attach to perfboard. The hall sensor setup was confusing since we had to mount it on the opposite side of the perfboard. We ended up chopping out a segment of another perfboard and sandwiching it on the LED side to create a surface to solder onto. Adjusting the offset in code to make use of the hall effect sensor's readings, and restarting a frame after each return to a 0 value on the sensor, took some time since we had to fiddle around with different algorithms to reduce delay.

## Accomplishments that I'm proud of

We got a working project after many hours and actually overcame all of our very glaring obstacles. We also managed to learn quite a bit of mechanical engineering, a topic that both of us are not so familiar with.

## What I learned

Mechanical components make these hardware projects more difficult than they may seem at first. I learned about using the hall effect sensor as an encoder to clean up the image on the POV display, and picked up tips and tricks in mechanical engineering about mounting and selecting materials that can withstand the rotation without causing problems.

## What's next for POV Display

We can expand to a 3D display next time, or use a motor with a higher RPM to produce images with better resolution.
winning
## About the Project

NazAR is an educational tool that automatically creates interactive visualizations of math word problems in AR, requiring nothing more than an iPhone.

## Behind the Name

*Nazar* means "vision" in Arabic, which symbolizes the driving goal behind our app – not only do we visualize math problems for students, but we also strive to represent a vision for a more inclusive, accessible and tech-friendly future for education. And, it ends with AR, hence *NazAR* :)

## Inspiration

The inspiration for this project came from each of our own unique experiences with interactive learning. As an example, we want to showcase two of the team members' experiences, Mohamed's and Rayan's.

Mohamed Musa moved to the US when he was 12, coming from a village in Sudan where he grew up and received his primary education. He did not speak English and struggled until an experience with a teacher transformed his entire learning experience through experiential and interactive learning. From then on, applying those principles, Mohamed was able to pick up English fluently within a few months and reached the top of his class in both science and mathematics.

Rayan Ansari had worked with many Syrian refugee students on a catch-up curriculum. One of his students, a 15-year-old named Jamal, had not received schooling since kindergarten and did not understand arithmetic and the abstractions used to represent it. Intuitively, the only means Rayan felt he could effectively teach Jamal and bridge the connection was through physical examples that Jamal could envision or interact with.

From the diverse experiences of the team members, it was glaringly clear that creating accessible and flexible interactive learning software would be invaluable in bringing this sort of transformative experience to any student's work. We were determined to develop a platform that could achieve this goal without having its questions pre-curated or requiring the aid of a teacher, tutor, or parent to provide this sort of time-intensive education experience.

## What it does

Upon opening the app, the student is presented with a camera view, and can press the snapshot button on the screen to scan a homework problem. Our computer vision model then uses neural network-based text detection to process the scanned question, and passes the extracted text to our NLP model. Our NLP text-processing model runs fully integrated into Swift as a Python script, and extracts from the question a set of characters to create in AR, along with objects and their quantities, that represent the initial problem setup. For example, for the question "Sally has twelve apples and John has three. If Sally gives five of her apples to John, how many apples does John have now?", our model identifies that two characters should be drawn, Sally and John, and that the setup should show them with twelve and three apples, respectively.

The app then draws this setup using Apple's RealityKit development space, with the characters and objects described in the problem overlaid. The setup is interactive, and the user is able to move the objects around the screen, reassigning them between characters. When the position of the environment reflects the correct answer, the app verifies it, congratulates the student, and moves on to the next question. Additionally, the characters are dynamic and expressive, displaying idle movement and reactions rather than appearing frozen in the AR environment.
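To make that extraction step concrete, here is a rough sketch of the numeric-pivot idea applied to the example above using spaCy's dependency parse. NazAR's real model adds a prioritized search over grammatical rules and handles implicit objects more carefully; the code below is only an illustrative approximation of the idea.

```python
# Rough sketch (our assumption, not NazAR's model): match each number in a word
# problem to an object and its owner via spaCy's dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Sally has twelve apples and John has three.")

for tok in doc:
    if tok.like_num:                                     # numeric pivot, e.g. "twelve"
        head = tok.head
        obj = head if head.pos_ == "NOUN" else None      # explicit object ("apples")
        verb = head if head.pos_ == "VERB" else head.head
        owner = next((c for c in verb.children if c.dep_ == "nsubj"), None)
        print(tok.text,
              owner.text if owner is not None else "?",
              obj.text if obj is not None else "(implicit)")
```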
## How we built it

Our app relies on three main components, each of which we built from the ground up to best tackle the task at hand: a computer vision (CV) component that processes the camera feed into text; an NLP model that extracts and organizes information about the initial problem setup; and an augmented-reality (AR) component that creates an interactive, immersive environment for the student to solve the problem.

We implemented the computer vision component to perform image-to-text conversion using Apple's Vision framework, trained on a convolutional neural network with hundreds of thousands of data points. We customize the user experience with a snapshot button that allows the student to position their camera in front of a question and press it to capture an image, which is then converted to a string and passed off to the NLP model.

Our NLP model, which we developed completely from scratch for this app, runs as a Python script and is integrated into Swift using a version of PythonKit we custom-modified to work on iOS. It works by first tokenizing and lemmatizing the text using spaCy, and then using numeric terms as pivot points for a prioritized search relying on English grammatical rules to match each numeric term to a character, an object, and a verb (action). The model is able to successfully match objects to characters even when they aren't explicitly specified (e.g. for Sally in "Ralph has four melons, and Sally has six") and, by using the proximate preceding verb of each numeric term as the basis for inclusion-exclusion criteria, is also able to successfully account for extraneous information, such as statements about characters receiving or giving objects, which shouldn't be included in the initial setup. Our model also accounts for characters that do not possess any objects to begin with, but who should be drawn in the display environment as they may receive objects as part of the solution to the question. It directly returns the asset filenames that the AR code should load.

Our AR model functions from the moment a homework problem is read. Using Apple's RealityKit environment, the software determines the plane of the paper on which we anchor our interactive learning space. The NLP model passes objects of interest, which correspond to particular USDZ assets in our library, as well as a vibrant background terrain. In our testing, we used multiple models for hand tracking and gesture classification, including a CoreML model, a custom SDK for gesture classification, a TensorFlow model, and our own gesture-processing class paired with Apple's hand pose detection library. For the purposes of TreeHacks, we figured it would be most reasonable to stick with touchscreen manipulation, especially for our demo, which uses the iPhone itself rather than a separate wearable accessory. We found this to provide better ease of use when interacting with the environment and to be the most accessible given hardware constraints (we did not have a HoloKit Apple accessory nor the upcoming Apple AR glasses).

## Challenges we ran into

We ran into several challenges while implementing our project, which was somewhat expected given the considerable number of components we had, as well as the novelty of our implementation. One of the first challenges we had was a lack of access to wearable hardware, such as HoloKits or HoloLenses.
Based on this, as well as a desire to make our app as accessible and scalable as possible without requiring users to purchase expensive equipment, we decided to rely on the iPhone alone so we could reach as many people who need it as possible.

Another issue we ran into was with hand gesture classification. Very little work has been done on this in Swift environments, and there was little to no documentation on hand tracking available to us. As a result, we wrote and experimented with several different models, including training our own deep learning model to identify gestures, but it took a toll on our laptops' resources. In the end we got it working, but we are not using it for our demo as it currently experiences some lag. In the future, we aim to run our own gesture-tracking model in the cloud, trained on over 24,000 images, in order to provide lag-free hand tracking.

The final major issue we encountered was the lack of interoperability between Apple's iOS development environment and other systems, for example when running our NLP code, which requires input from the computer vision model and has to pass the extracted data on to the AR algorithm. We have been continually working to overcome this challenge, including by modifying the PythonKit package to bundle a Python interpreter alongside the other application assets, so that Python scripts can run successfully on the end machine. We also used input and output text files to allow our Python NLP script to interact more easily with the Swift code.

## Accomplishments we're proud of

We built our computer vision and NLP models completely from the ground up during the hackathon, and also developed multiple hand-tracking models on our own, overcoming the lack of documentation for hand detection in Swift.

Additionally, we're proud of the novelty of our design. Existing tools that provide interactive problem visualization all rely on custom QR codes embedded with the questions that load pre-written environments, or on a set of pre-curated models; and Photomath, the only major app that takes a real-time image-to-text approach, lacks support for word problems. In contrast, our app integrates directly with existing math problems and doesn't require any additional work on the part of students, teachers, or textbook writers in order to function.

Additionally, by relying only on an iPhone and an optional HoloKit accessory for hand tracking, which is not vital to the application (and which, at a retail price of $129, is far more scalable than VR sets that typically cost thousands of dollars), we maximize accessibility to our platform not only in the US but around the world, where it has the potential to complement instructional efforts in developing countries whose educational systems lack sufficient resources to provide enough one-on-one support to students. We're eager to have NazAR make a global impact on improving students' comfort and experience with math in the coming years.

## What we learned

* We learned a lot from building the tracking models, which haven't really been done for iOS and for which there's practically no Swift documentation available.
* We are truly operating on a new frontier, as there is little to no prior work in the field we are looking at.
* We will have to manually build a lot of different architectures, as many technologies related to our project are not open source yet. We've already been making progress on this front, and plan to do far more in the coming weeks as we work towards a stable release of our app.
## What's next for NazAR

* Having the app animate the correct answer (e.g. Bob handing apples one at a time to Sally)
* Animating algorithmic approaches and code solutions for data structures and algorithms classes
* Being able to automatically produce additional practice problems similar to those provided by the user
* Using cosine similarity to automatically make terrains mirror the problem description (e.g. show an orchard if the question is about apple picking, or a savannah if giraffes are involved)
* And more!
## Inspiration

Helping people who are visually and/or hearing impaired to have better and safer interactions.

## What it does

The sensor beeps when the user comes too close to an object or too close to a hot beverage or food. The sign language recognition system translates sign language from a hearing-impaired individual into English for a caregiver. The glasses capture pictures of the surroundings and convert them into speech for a visually impaired user.

## How we built it

We used Microsoft Azure's Vision API, OpenCV, scikit-learn, NumPy, and Django + REST Framework to build the technology.

## Challenges we ran into

Making sure the computer recognizes the different signs.

## Accomplishments that we're proud of

Making a glove with a sensor that helps the user navigate their path, recognizing sign language, and converting images of the surroundings to speech.

## What we learned

Different technologies such as Azure and OpenCV.

## What's next for Spectrum Vision

Hoping to gain more funding to increase the scale of the project.
## Inspiration

In the modern day, **technology is everywhere**. Parents equip their children with devices at a young age, so why not take advantage of this and improve their learning! We want to help children recognize objects, as well as pronounce them (via text to speech).

## What it does

*NVision* teaches children the names of the everyday objects they encounter (by taking a picture of the object). The app then says the object's name out loud.

## How we built it

We developed *NVision* in Android Studio. Java and XML were used to program the back end, including various JSON libraries. For our image recognition, we used Google Vision, communicating with our app via POST requests. We also used Domain.com to create a website to advertise our application on the web, which can be viewed here: [NVision](http://nvisionedu.com/)

## Challenges we ran into

At first, we encountered issues in implementing the camera feature in our app. Android Studio is not exactly the most friendly programming interface! After a lot of debugging and some mentorship, we were able to get it working.

Another challenge we faced was using the APIs within our app. We used the Google Vision image recognition API to return a JSON file corresponding to the image details. This required our app to communicate with the Google server, and none of us had experience with adding network capabilities to our software.

Lastly, integrating our code was a challenge, because we each worked separately and used different libraries, code, and software. This app was relatively complex, so each of our parts was vastly different. We needed to first communicate with the camera, send the image to the server, retrieve the JSON file, and parse it into a string array to be read aloud via text to speech. Near the end of the 36 hours, we spent a lot of time simply putting the pieces together and making sure the app would run properly.

## Accomplishments that we are proud of

With most of the team being new to Android development, this was definitely a difficult and daunting task at first. We are proud that we finished with a functional app that has most of the features we wanted to include. In addition, no one on our team had experience with APIs, so we are happy with what we have created as a team.

## What we learned

Apart from the technical skills we gained, we learned that communication and teamwork are a crucial part of success when working on projects. Dividing the workload and having teammates to rely on really helped us be more efficient overall. Also, the level of programming we had to do was far beyond everything we did in school, so we had to brush up on both our comprehension and our coding skills. We also realized that our own knowledge could be severely lacking at times, and that we should ask for help when needed.

## What's next for *NVision*

In the future, we would like to expand *NVision* to target other cultures. There are many children around the world who do not speak English as their first language, so we'd like our application to relay the detected object in other languages. This will help not only kids who do not speak English, but also those who are trying to learn a new language. We already have a way to change the language of the text to speech, available in German as well as English, but we would have to use the Google Translate API to fully realize its potential.

Furthermore, we realize that young children spending too much time in front of their screens could be detrimental to their health. We wish to implement parental controls as well as a timer to limit the use of our app. Parents will also be able to track the progress of their children's learning, perhaps through an integrated feedback system, such as a microphone, to tell if their children are advancing their vocabulary.
winning
## Inspiration

Are you stressed about COVID-19? You're not alone. Over 50% of adults in the US reported that their mental health was [negatively affected due to worry and stress over the new coronavirus](https://www.healthline.com/health-news/covid-19-lockdowns-appear-to-be-causing-more-cases-of-high-blood-pressure). City lockdowns and the implementation of COVID-19 measures affected many industries, and many people lost their jobs during this period. We have long known that campfires and fires in a hearth help us relax, [as fire played a key role in human evolution and was closely linked to human psychology](https://www.scotsman.com/news/uk-news/relaxing-fire-has-good-your-health-1520224). Here, we present a solution that helps people relax while generating an extra source of income.

## What it does

CryptoChill fixes the main problem that people tend to have with the classic Yule log (virtual fireplace): the fact that it does not actually generate heat. CryptoChill provides users with a way to relax and get warm while engaging in a warm (no pun intended!) community via live chat where users can talk to each other.

CryptoChill generates heat by using the user's computer to mine cryptocurrency, thus warming the user; in exchange, CryptoChill subsidizes the user's electricity costs with a revenue split between the user and the platform. This enables the user to stay warm in an environmentally friendly way, as on average electric heating is many times cleaner than gas or oil.

### Features

1. Users can view the description of the site on the landing page
2. Users give their consent for cryptocurrency mining by signing up / logging in
3. After logging in, users can search & browse popular Yule log/fireplace videos, or enter the URL of their favorite video. They can save videos as favorites.
4. They can browse their saved videos on the favorites page
5. Each page allows users to listen to music from Spotify as they watch the Yule log videos.
6. When a user plays a video from the carousel of videos, cryptocurrency is generated in the background.
7. Users can also create a chat room where other users can join and talk among themselves, listen to music, and watch the Yule log/fireplace videos together.

## How we built it

* UI design using Figma
* Front-end development using HTML, CSS, JavaScript
* Back-end development using Node.js, Express, and EJS
* Real-time communication using Socket.io
* Client-side CPU cryptocurrency mining via Webminepool
* User creation and authentication using Firebase
* Application hosting through Google Cloud Platform's App Engine

## Challenges we ran into

We are an international team spread across 3 time zones. Not all of us are awake at the same time, and it was a challenge to find a meeting time that suits everyone. We overcame the challenge with effective communication and planning via Discord, Zoom, and Trello.

## Accomplishments that we're proud of

We built a full application and completed what we envisioned at the beginning!

## What we learned

We learned a lot about implementing live chat using Socket.io, authentication using Firebase, and integrating the front end and back end. We also learned how to incorporate cryptocurrency mining into our application!

## What's next for CryptoChill

* Link to a cryptocurrency wallet
* Choice of algorithm & cryptocurrency
* GPU mining to increase payouts and heat output via GPU.js
## Inspiration

Cryptocurrency and blockchain mining is very ecologically harmful.

## What it does

Ecoin, short for Eco-coin, is an eco-friendly cryptocurrency that can mitigate the effects of traditional blockchain mining. It achieves this by placing certain restrictions on miners based on the fuel mix of the power grid at their physical location.

## How we built it

How it works: Ecoin uses geolocation for miner verification. We use the physical location of the miner to place them within an area in which at least 50% of the electric grid is powered by renewable resources. We use the EPA eGRID dataset to determine the fuel mix of the electricity generated in a particular zip code, and the Google Geolocation API to determine the zip code of the miner. If this is the case (>50% renewable energy in the fuel mix), the miner's input is validated in the mining pool; otherwise it is rejected. This restriction makes the mining process much less damaging and more eco-friendly than the traditional blockchain and Bitcoin mining process.

Built with: a React Native app, the EPA eGRID dataset, and the Google Geolocation API.

## References

* <https://www.ctvnews.ca/sci-tech/bitcoin-uses-more-electricity-than-argentina-norway-study-finds-1.5327345>
* <https://www.financialexpress.com/industry/bitcoin-shocker-cryptos-rise-may-soon-leave-carbon-footprint-equivalent-to-size-of-londons-emissions/2215513/>
* <https://www.bbc.com/news/science-environment-56215787>
* <https://www.bloomberg.com/opinion/articles/2021-01-26/is-bitcoin-mining-worth-the-environmental-cost>
* <https://www.sciencedirect.com/science/article/pii/S2542435119302557>

## Challenges we ran into

Devpost blocked our project because the placeholder video was a rickroll.

## Accomplishments that we're proud of

A working blockchain.

## What we learned

How to fix Geolocation API code issues.

## What's next for Ecoin

Beta launch.
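A sketch of the validation check described above, assuming a pre-processed eGRID lookup table keyed by zip code with one column per fuel type. The real eGRID data is organized by plant and subregion, so the file layout and column names here are assumptions, not Ecoin's actual pipeline.

```python
# Illustrative sketch (assumed CSV layout): compute a zip code's renewable share
# and only validate miners where renewables exceed 50% of the fuel mix.
import csv

RENEWABLE = {"solar", "wind", "hydro", "geothermal", "biomass"}


def renewable_share(zip_code: str, egrid_csv: str = "egrid_by_zip.csv") -> float:
    """Return the fraction of generation from renewable fuels for a zip code."""
    with open(egrid_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["zip"] == zip_code:
                total = sum(float(row[fuel]) for fuel in row if fuel != "zip")
                green = sum(float(row[fuel]) for fuel in RENEWABLE if fuel in row)
                return green / total if total else 0.0
    return 0.0


def miner_allowed(zip_code: str) -> bool:
    # Validate the miner's input only when the local grid is >50% renewable.
    return renewable_share(zip_code) > 0.5
```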
## Inspiration

I really like making games with programming and want to become a game developer in the future.

## What it does

This is a fun game made using Python. In this game, we have to help the ninja avoid colliding with obstacles so that it can pass through easily.

## How I built it

I built it using various Python modules, mainly the pygame module.

## Challenges I ran into

Many challenges came up while building this game, such as screen blitting issues, pipe width, and background changes, but in the end I overcame all of these problems and finished the game.

## Accomplishments that I'm proud of

I'm proud that this is my second game in Python, after a snake game.

## What I learned

Python is a great language for building projects, and I learned to use many of its modules for this project.

## What's next for Flappy Ninja (Game) Made using python

Next, I will mainly make projects using Python. Currently I am working on an automatic alarm project in Python.
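For illustration, here is a minimal pygame loop showing the blit-and-collide pattern a game like this relies on. This is not the actual game's code; the sizes, colors, and movement are arbitrary placeholders.

```python
# Minimal illustrative pygame sketch: draw the ninja and a pipe, and end the run
# when their rectangles overlap (the core obstacle check in a Flappy-style game).
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 300))
ninja = pygame.Rect(50, 140, 30, 30)
pipe = pygame.Rect(200, 100, 40, 200)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    ninja.x += 2                       # scroll the ninja toward the pipe
    if ninja.colliderect(pipe):        # obstacle hit -> game over
        running = False
    screen.fill((30, 30, 30))
    pygame.draw.rect(screen, (0, 200, 0), pipe)
    pygame.draw.rect(screen, (200, 200, 0), ninja)
    pygame.display.flip()
    pygame.time.delay(16)              # roughly 60 frames per second

pygame.quit()
```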
losing