## Inspiration and what it does

As smart as our homes (or offices) become, they do not fully account for the larger patterns in electricity grids and weather systems. They still waste energy cooling empty buildings, or waste money by purchasing electricity during peak periods. Our project, The COOLest hACk, solves these problems. We use sensors to detect both the ambient temperature in the room and on-body temperature. We also increase the amount of cooling when electricity prices are cheaper, which in effect uses your building as an energy storage device. These features save you money and reduce environmental impact at the same time.

## How we built it

We built it using Particle Photons with infrared and ambient temperature sensors. The Photons also control a fan motor and LEDs, representing the air conditioning. A machine learning stack forecasts electricity prices, and an iPhone app shows what's happening behind the scenes.

## Challenges we ran into

Our differential equation models for room temperature were not solvable analytically, so we used a stepwise approach. In addition, we needed to find a reliable source of time-of-day peak electricity prices.

## Accomplishments that we're proud of

We're proud that we created an impactful system to reduce the energy used by the #1 energy-hungry appliance: air conditioning. Our solution has minimal costs and works through automated means.

## What we learned

We learned how to work with hardware, Photons, and Azure.

## What's next for The COOLest hACk

For the developers: sleep, at the right temperature.
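A minimal sketch of the stepwise (forward-Euler) approach to the room-temperature model described above, with price-aware pre-cooling. All constants, the toy tariff, and the control rule are illustrative assumptions, not values from the project.

```python
# Forward-Euler sketch of price-aware pre-cooling: the room is modeled as a
# single thermal mass, and we cool harder while electricity is cheap.
# All constants (tau, cooling rate, price schedule) are illustrative guesses.

TAU = 3600.0       # room thermal time constant, seconds (assumed)
COOL_RATE = 0.002  # deg C removed per second at full cooling (assumed)
DT = 60.0          # Euler step, seconds

def price(hour):
    """Toy time-of-day tariff: cheap at night, expensive in the afternoon."""
    return 0.30 if 13 <= hour < 19 else 0.10

def step(temp, outdoor, cooling_on):
    """One Euler step of dT/dt = (outdoor - temp)/tau - cooling."""
    dTdt = (outdoor - temp) / TAU - (COOL_RATE if cooling_on else 0.0)
    return temp + DT * dTdt

temp = 26.0
for minute in range(24 * 60):
    hour = (minute // 60) % 24
    # Pre-cool below the comfort setpoint while power is cheap.
    setpoint = 22.0 if price(hour) < 0.2 else 24.0
    temp = step(temp, outdoor=30.0, cooling_on=temp > setpoint)
```

This is the sense in which the building acts as an energy store: cheap-hour cooling is banked in the thermal mass and drawn down during peak pricing.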
## Inspiration

The inspiration behind our project stems from the personal challenges we've faced in the past with public speaking and presentations. Like many others, we often found ourselves feeling overwhelmed by nerves, shaky voices, and the pressure of presenting in front of peers. This anxiety would sometimes make it difficult to communicate ideas effectively, and the fear of judgment made the experience even more daunting. Recognizing that we weren't alone in these feelings, we wanted to create a solution that could help others overcome similar hurdles. That's where Vocis comes in. Its aim is to give people the freedom and the ability to practice their presentation skills at their own pace, in a safe, supportive environment. Whether it's practicing for a school project, a work presentation, or simply building the confidence to speak in front of others, the platform allows users to refine their delivery.

## What It Does

Our project simulates real-life challenges that presenters might face: handling difficult situations like Q&A sessions, dealing with hecklers, or responding to aggressive viewers. By creating these simulated scenarios, our software prepares users for the unpredictability of live presentations. We hope that by giving people the tools and the settings to practice on their own terms, they can gradually build the skills and self-assurance needed to present with ease in any setting.

## Tech Stack - How Vocis Is Built

- ReactJS
- shadcn
- NextJS
- TailwindCSS
- Hume.AI
- OpenAI

## Challenges We Faced

One key challenge during the hackathon was diving into extensive documentation while implementing the API, as we had never worked with Hume before. On top of that, since none of us had much backend experience, it was taxing to learn and implement at the same time. This already time-consuming task became harder due to unstable internet connectivity, which caused unexpected delays in accessing resources and troubleshooting problems in real time, putting additional pressure on our timeline. Despite these setbacks, our team worked hard to adapt and maintain momentum.

## Accomplishments

Despite the challenges we faced, we built a functional prototype that demonstrates the core of our program: simulating difficult real-life scenarios for presenters and public speakers. It is bare bones, but we're very proud of ourselves for getting there and creating a project we care about.

## What We Learned

- We learned to build a viable project in limited time, overcoming our shortcomings along the way.
- Through multiple workshops and insightful help from mentors, we learned more about APIs: implementing them, making them cooperate with each other, and streamlining the process.
- We discovered many new and amazing technologies, built by many amazing people, that allowed us to achieve the aim of our project.

## What's Next For Vocis

- Allow multiple users to present at the same time, with the AI creating situations for multiple "panelists"
- Support many more situations that panelists and presenters may face, like different types of aggressive audience members and journalists who are a little too overbearing
- Add reactions from the listening audience, creating a more realistic experience for the user ("presenter")
- More security measures
- Authentication
## Inspiration

We were inspired to create this because, as computer science students, we are always looking for opportunities to leverage our passion for technology to help others.

## What it does

Think In Sync is a platform that uses groundbreaking AI to make learning more accessible. It can generate images and audio from a selected text description. It also works with given audio, generating the equivalent text or image. The goal is for children to have an easier time learning, according to their primary learning style.

## How we built it

We built an interface and medium-fidelity prototype using Figma. We used Python as our backend to integrate OpenAI's API.

## Challenges we ran into

None of us had worked with API keys and authentication previously, so that was new for all of us.

## Accomplishments that we're proud of

We are proud of what we accomplished given the short amount of time.

## What we learned

We extended our computer science knowledge beyond the syllabus, and we learned more about collaboration and teamwork.

## What's next for Think In Sync

Creating a high-fidelity prototype and integrating the front end with the back end.
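A minimal sketch of what the text-to-image and text-to-audio calls might look like with OpenAI's Python client. The model names, voice, and save helper are assumptions based on recent SDK versions, not necessarily what the team used.

```python
# Sketch: turn one text description into an image and an audio narration.
# Model names ("dall-e-3", "tts-1") and the voice are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

description = "A friendly cartoon sun explaining the water cycle"

# Text -> image
image = client.images.generate(model="dall-e-3", prompt=description, n=1)
print(image.data[0].url)

# Text -> audio
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=description)
speech.write_to_file("description.mp3")
```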
## Inspiration

We were inspired by hard-working teachers and students. Although everyone was working hard, there was still a disconnect, with many students unable to retain what they learned. So, we decided to create both a web application and a companion phone application to target this problem.

## What it does

The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and can also give students a few warm-up exercises with a built-in clicker functionality. The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture that includes key points. The backend also generates further reading material based on keywords from the lecture, which further solidifies the students' understanding of the material.

## How we built it

We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.

## Challenges we ran into

One major challenge was capturing and processing live audio and delivering a real-time transcription of it to all students enrolled in the class. We solved this with a Python script that bridges the gap between opening an audio stream and operating on it, while still serving the student a live version of the rest of the site.

## Accomplishments that we're proud of

Being able to process text data to the point that we could extract a summary and information on tone/emotions from it. We are also extremely proud of the

## What we learned

We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, since everyone is on the same page about what is going on and everything that needs to be done is made very evident. We used APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around their constraints to create a working product. We also learned more about the other technologies we used: Firebase, Adobe XD, React Native, and Python.

## What's next for Gradian

The next goal for Gradian is to implement a grading system for teachers that automatically integrates with their native grading platform, so that clicker data and other quiz material can be instantly graded and imported without any issues. Beyond that, we see potential for Gradian in office scenarios as well, so that people never miss a beat thanks to live transcription.
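A minimal sketch of what the live-transcription bridge might look like with Google Cloud's streaming Speech-to-Text API. The audio source (`mic_chunks`) and the class fan-out (`broadcast`) are hypothetical stand-ins for the team's Python script.

```python
# Sketch: stream microphone chunks to Google Cloud Speech-to-Text and push
# interim transcripts to enrolled students. `mic_chunks` and `broadcast`
# are hypothetical placeholders for the audio source and the class fan-out.
from google.cloud import speech

def transcribe_live(mic_chunks, broadcast):
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(
        config=config, interim_results=True
    )
    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in mic_chunks
    )
    for response in client.streaming_recognize(config=streaming_config,
                                               requests=requests):
        for result in response.results:
            # Send partial text immediately; flag finalized segments so the
            # summary step only consumes stable transcript pieces.
            broadcast(result.alternatives[0].transcript, final=result.is_final)
```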
## Inspiration

The inspiration for **InsightBot** came from the growing demand for personalized learning and productivity tools that help users manage their tasks and goals more effectively. As students, we often found ourselves struggling to balance multiple responsibilities while searching for tailored information or assistance at critical moments. We realized there was a need for a tool that not only helps users stay organized with to-do lists and goal tracking but also provides on-demand learning support based on their personal data, like notes or resources. This inspired us to combine the power of **Retrieval-Augmented Generation (RAG)** with a productivity suite to create a seamless, personalized learning experience, accessible anytime.

## What it does

InsightBot is an intelligent study tool powered by **RAG** that delivers a personalized learning experience, helping users achieve their goals. Our platform offers multiple tools that help users stay organized, achieve their goals, and receive tailored support to overcome learning challenges, ensuring continuous progress at any time.

## How we built it

Using **NextJS**, we started our project with UI/UX development. Next, we integrated an AI chatbot with an **OpenAI API key** and augmented the chatbot with **RAG** using **OpenAI embeddings**, a **Pinecone** database, and the **Pinecone API**. Then, with **Python**, we developed the backend and handled the file upload, to-do list, and goal tracker features.

## Challenges we ran into

We encountered an issue integrating **RAG** into the app due to recent updates in OpenAI's embedding documentation. To resolve this, we needed to update the code to align with the latest **OpenAI API** for vector embeddings.

## Accomplishments that we're proud of

We're proud of successfully integrating **RAG** within the chatbot and the goal tracker. The chatbot provides personalized learning assistance based on user-uploaded documents, while the goal tracker reports whether the added objectives are SMART (Specific, Measurable, Achievable, Relevant, and Time-bound). Overcoming the challenge of adapting to the updated **OpenAI API** for embeddings was a significant accomplishment. Additionally, we developed a seamless UI/UX offering users an intuitive and productive experience.

## What we learned

Throughout this project, we deepened our understanding of **RAG** and how to effectively integrate it with APIs like **OpenAI** and **Pinecone**. We also learned to adapt quickly to changes in documentation and APIs, improving our problem-solving abilities. On the front end, we gained valuable experience creating user-friendly interfaces with **NextJS**, while on the back end, we sharpened our skills in database management and **API** integration. Additionally, collaboration taught us how to manage our time effectively and prioritize tasks under tight deadlines.

## What's next for InsightBot

We plan to enhance **InsightBot** by allowing users to upload more than one file and by supporting more data types, like videos and links. We're also exploring how to create to-do list tasks automatically from a selected SMART goal using OpenAI's API.
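A minimal sketch of the RAG loop described above: embed the question with OpenAI, retrieve the nearest note chunks from Pinecone, and answer grounded in them. The index name, model names, and metadata layout are illustrative assumptions.

```python
# Sketch: answer a question from the user's own notes via RAG.
# Index name, model names, and metadata layout are illustrative assumptions.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="PINECONE_API_KEY").Index("user-notes")

def answer(question: str) -> str:
    # 1. Embed the question with the same model used for the stored notes.
    emb = openai_client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Retrieve the closest note chunks from Pinecone.
    hits = index.query(vector=emb, top_k=3, include_metadata=True)
    context = "\n".join(m.metadata["text"] for m in hits.matches)

    # 3. Let the chat model answer grounded in the retrieved context.
    reply = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using these notes:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```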
## Inspiration

In 2012, infants and newborns made up 73% of hospital stays and 57.9% of hospital costs in the U.S., adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry using machine learning, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.

## What it does

Our software uses a website with user authentication to collect data about an infant. This data covers factors such as temperature, time of last meal, fluid intake, etc. The data is then pushed to a MySQL server and fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model, which outputs the probability of the infant requiring medical attention. The analysis results from the ML model are passed back to the website, where they are displayed through graphs and other data visualizations. This dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model's result; this iterative process trains the model over time. The goal is to ease the stress on parents and ensure that those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user: using a hyper-secure combination of the user's data, we gave each patient a way to retrieve the status of their infant's evaluation from our AI and the doctor's verification.

## Challenges we ran into

At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done, so we had to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and with the help of mentors and peers we were able to make a fully functional product. As a team, we picked up ML concepts and databases at an accelerated pace. We were challenged as students, as upcoming engineers, and as people, and our ability to push through and deliver results showed over the course of this hackathon.

## Accomplishments that we're proud of

We're proud of our functional database that can be accessed from a remote device. The ML algorithm, the Python script, and the website were all commendable achievements for us, but these components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely gave each user a unique, practically irreversible hash that lets them check the status of their evaluation.

## What we learned

We learned about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learned how to set up a server and configure it for remote access, and how crucial a role cyber-security plays in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, helping us create a more reliable and user-friendly interface.

## What's next for InfantXpert

We're looking to develop mobile applications for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
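A minimal sketch of the SHA-256 lookup token described above, using Python's standard hashlib. Exactly which fields are combined into the hash, and the salt, are assumptions for illustration.

```python
# Sketch: derive a stable, hard-to-reverse lookup token from user data.
# Which fields go into the hash (and the salt) are illustrative assumptions.
import hashlib

def evaluation_token(user_id: str, infant_name: str, salt: str) -> str:
    material = f"{salt}:{user_id}:{infant_name}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()

# Parents present this token to fetch the AI + doctor-verified status.
token = evaluation_token("parent-42", "baby-a", salt="server-secret")
print(token)
```

A production system would likely prefer a random token or an HMAC with a server-side secret over hashing predictable fields; the salt in the sketch gestures in that direction.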
### 💡 Inspiration 💡

We call them heroes, **but the support we give them is closer to that of a slave.** Because of the COVID-19 pandemic, many medics have to keep track of their patients' history, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking patients: **their software is either clunky and bad for productivity, or too expensive to use on a bigger scale.** Most of the time, a lot of unnecessary management is needed just to get a patient on the record. Moreover, the software can leave clinicians so tired that they **risk burnout, which makes their disease predictions even worse the more they work**; with the average computer-assisted interview lasting more than 20 minutes and a medic seeing more than 30 patients a day, the risk is even greater. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** The platform is easy to use and focused on giving medics the **ultimate productivity tool for patient tracking.**

### ❓ What it does ❓

My MedicAid gets rid of all the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help with their disease predictions **using an AI-assisted engine that guides them toward the urgency of the symptoms and the probable dangers the patient is exposed to.** With all of these enhancements and an easy-to-use platform, we give the user (the medic) a 50-75% productivity boost compared to older, expensive, and clunky patient-tracking software.

### 🏗️ How we built it 🏗️

The patient's symptoms are tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return their danger level and any probable diseases, helping the medic make a decision quickly without having to ask about the symptoms themselves. This completely removes the process of asking the patient how they feel and speeds up the medic's disease prediction, since possible diseases are already returned by the API. We used Tailwind CSS and Next JS for the frontend, MongoDB for the patient-tracking database, and Express JS for the backend.

### 🚧 Challenges we ran into 🚧

We had never used APIMedic before, so going through their documentation and implementing it was one of the biggest challenges. However, we're happy to now have experience with more 3rd-party APIs, and this API is of great use, especially for this project. Integrating the backend and frontend was another challenge.

### ✅ Accomplishments that we're proud of ✅

The accomplishment we're proudest of is getting the management system and the 3rd-party API working correctly. This opens the door to working further on this project in the future and fully deploying it to tackle its main objective, which is especially important in the pandemic, when a lot of patient management needs to be done.

### 🙋‍♂️ What we learned 🙋‍♂️

We learned a lot about CRUD APIs and the use of 3rd-party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have far more experience than us. We hope this tool helps their productivity and reduces their burnout, which is critical, especially in this pandemic.

### 💭 What's next for My MedicAid 💭

We plan on implementing an NLP-based service that lets medics simply type what the patient is feeling as a text prompt and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic, to resolve any complaints the patient might have and for the medic to use when they need more info from the patient.
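A rough sketch of what a symptom-checker request might look like. The endpoint, parameter names, and auth flow are assumptions drawn from memory of APIMedic's documentation and should be verified against the official reference before use.

```python
# Sketch: ask APIMedic for probable diseases given selected symptom IDs.
# Endpoint, parameter names, and the token flow are assumptions from memory
# of APIMedic's docs -- verify against the official reference.
import requests

BASE = "https://healthservice.priaid.ch"  # assumed health-service host

def diagnosis(symptom_ids, gender, year_of_birth, token):
    params = {
        "symptoms": str(symptom_ids),   # e.g. "[234, 21]"
        "gender": gender,               # "male" / "female"
        "year_of_birth": year_of_birth,
        "token": token,
        "format": "json",
        "language": "en-gb",
    }
    resp = requests.get(f"{BASE}/diagnosis", params=params, timeout=10)
    resp.raise_for_status()
    # Each entry pairs a possible issue with specialisations and an accuracy.
    return resp.json()
```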
## Inspiration

Have you ever lost a valuable item that's really important to you, only for it to never be seen again?

* Over **60% of people** have lost something in their lifetime.
* In the **US alone**, over **400 million items** are lost and found every year.
* The average person loses up to **nine items** every day.

The most commonly lost items include **wallets, keys, and phones**. While some are lucky enough to find their lost items at home or in their car, those who lose things in public often never see them again. The good news is that most places have a **"lost and found"** system, but the problem? It's **manual**, requiring you to reach out to someone to find out if your item has been turned in.

## What it does

**LossEndFound** solves this problem by **automating the lost and found process**. It connects users who report lost items with those who find them.

* Whether you're looking for something or reporting something found, the system uses **AI-powered vector similarity search** to match items based on descriptions provided by users.

## How we built it

We built **LossEndFound** to make reconnecting lost items with their owners **seamless**:

* **FastAPI** powers our backend for its speed and reliability.
* **Cohere embeddings** capture the key features of each item.
* **ChromaDB** stores and performs vector similarity searches, matching lost and found items based on cosine similarity.
* On the frontend, we used **React.js** to create a user-friendly experience that makes the process quick and easy.

## Challenges we ran into

As first-time hackers, we faced a few challenges:

* **Backend development** was tough, especially when handling **numpy array dimensions**, which slowed us down during key calculations.
* **Frontend-backend integration** was a challenge since it was our first time bridging these systems, making the process more complex than expected.

## Accomplishments that we're proud of

We're proud of how we pushed ourselves to learn and integrate new technologies:

* **ChromaDB**, **Cohere**, and **CORS** were all new tools that we successfully implemented.
* Overcoming these challenges showed us what's possible when we **step outside our comfort zone** and **collaborate effectively**.

## What we learned

We learned several key lessons during this project:

* The importance of **clear requirements** to guide development.
* How to navigate new technologies under pressure.
* How to **grow, adapt, and collaborate** as a team to tackle complex problems.

## What's next for LossEndFound

Moving forward, we plan to:

* Add **better filters** for more precise searches (by date, location, and category).
* Introduce **user profiles** to track lost/found items.
* Streamline the process for reporting or updating item statuses.

These improvements will make the app even more **efficient** and **user-friendly**, keeping the focus on **simplicity and effectiveness**.
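A minimal sketch of the matching step with Cohere embeddings and a Chroma collection, as described above. The embedding model name, collection layout, and example items are illustrative assumptions.

```python
# Sketch: match a lost-item report against found-item descriptions using
# Cohere embeddings and a Chroma collection. Model name and collection
# layout are illustrative assumptions.
import chromadb
import cohere

co = cohere.Client("COHERE_API_KEY")
collection = chromadb.Client().create_collection("found-items")

def embed(texts, input_type):
    return co.embed(
        texts=texts, model="embed-english-v3.0", input_type=input_type
    ).embeddings

# Someone reports a found item.
found = ["black leather wallet with a metro card inside, found near the gym"]
collection.add(ids=["found-001"], documents=found,
               embeddings=embed(found, "search_document"))

# Someone searches for their lost item.
query = "lost my dark wallet, had a transit card in it"
hits = collection.query(query_embeddings=embed([query], "search_query"),
                        n_results=3)
print(hits["documents"][0])  # candidate matches, nearest first
```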
## Inspiration

Bill - "Blindness is a major problem today and we hope to have a solution that takes a step in solving this"

George - "I like engineering"

We hope our tool gives a nonzero contribution to society.

## What it does

SenseSight generates a description of a scene and reads the description aloud for visually impaired people. It leverages CLIP, recent research advancements, and our own contributions to take a stab at the unsolved **generalized object detection** problem, i.e. object detection without training labels.

## How we built it

SenseSight consists of three modules: recorder, CLIP engine, and text2speech.

### Pipeline Overview

Once the user presses the button, the recorder beams the recording to the compute cluster server. The server runs a temporally representative video frame through the CLIP engine. The CLIP engine is our novel pipeline that emulates human sight to generate a scene description. Finally, the generated description is sent back to the user side, where the text is converted to audio to be read. [Figures](https://docs.google.com/presentation/d/1bDhOHPD1013WLyUOAYK3WWlwhIR8Fm29_X44S9OTjrA/edit?usp=sharing)

### CLIP

CLIP is a model proposed by OpenAI that maps images to embeddings via an image encoder and text to embeddings via a text encoder. Similar (image, text) pairs have a higher dot product.

### Image captioning with CLIP

We can map the image embeddings to text embeddings via a simple MLP (since image -> text can be thought of as lossy compression). The mapped embedding is fed into a transformer decoder (GPT-2) that is fine-tuned to produce text. We call this process the CLIP text decoder.

### Recognition of Key Image Areas

The issue with captioning the raw input is that an image is composed of smaller images. The CLIP text decoder is trained only on images containing a single subject (e.g. ImageNet/MS COCO images). We need to extract crops of the objects in the image and then apply the CLIP text decoder. This process is **generalized object detection**.

**Generalized object detection** is unsolved; most object detection involves training with labels. We propose a viable approach: we sample crops in the scene, just like human eyes darting around their view. We evaluate the fidelity of each crop, i.e. how much information it contains, by embedding the crop with CLIP and searching a database of text embeddings. The database is composed of noun phrases we extracted. The database can be huge, so we rely on ScaNN (Google Research), a pipeline for machine-learning-based vector similarity search. We then filter out all subpar crops. The remaining crops are selected with an algorithm that tries to maximize the spatial coverage of k crops: we sample many sets of k crops and select the set with the highest all-pairs distance.

## Challenges we ran into

The hackathon went smoothly, except for the minor inconvenience of getting the server + user side to run in sync.

## Accomplishments that we're proud of

- The platform replicates the human visual process with decent results.
- For the generalized object detection subproblem, we proposed an approach involving CLIP embeddings and fast vector similarity search.
- Got hardware + local + server (machine learning models on the MIT cluster) + remote APIs to work in sync.

## What's next for SenseSight

A better CLIP text decoder: crops tend to generate redundant sentences, so additional pruning is needed, and GPT-3 could remove the redundancy and make the speech flow. Real-time operation can be accomplished by using real networking protocols instead of scp + time.sleep hacks. To accelerate inference on crops, we can use multiple GPUs.

## Fun Fact

The logo is generated by DALL-E :p
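A minimal sketch of the crop-selection step described above: among many random sets of k candidate crops, keep the set whose centers have the largest total pairwise distance. Representing crops by their center points and the trial count are our own framing.

```python
# Sketch of the crop-selection step: among many random sets of k crops,
# keep the set whose centers have the largest total pairwise distance.
# Crop representation (center points) and trial count are illustrative.
import math
import random

def all_pairs_distance(points):
    return sum(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

def pick_crops(candidate_centers, k=5, trials=200):
    best_set, best_score = None, -1.0
    for _ in range(trials):
        sample = random.sample(candidate_centers, k)
        score = all_pairs_distance(sample)
        if score > best_score:
            best_set, best_score = sample, score
    return best_set

centers = [(random.random(), random.random()) for _ in range(50)]
print(pick_crops(centers, k=5))
```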
## What it does "ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points. ## How We built it Our project is comprised of many interconnected components, which we detail below: #### Formatting Engine To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required. #### Voice-to-speech We use Google’s Text To Speech API to process audio on the microphone of the laptop. Mobile Phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. The Text To Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed. #### Topic Analysis Fundamentally we needed a way to determine whether a given sentence included a request to an image or not. So we gathered a repository of sample sentences from BBC news articles for “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s Deep Learning text classificiation library, FastText, to train a custom NN that could perform text classification. #### Image Scraping Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen. #### Graph Generation Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time. #### Sentence Segmentation When we receive text back from the google text to speech api, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis), some trouble because the text is grammatically incorrect. We use a sequence to sequence transformer architecture, *seq2seq*, and transfer learned a new head that was capable of classifying the borders between sentences. This was then able to add punctuation back into the text before the rest of the processing pipeline. #### Text Title-ification Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title. 
#### Text Summarization When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous. #### Mobile Clicker Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets. #### Internal Socket Communication In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides. ## Challenges We ran into * Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis. * The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop. ## Accomplishments that we're proud of * Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques. * Working on an unsolved machine learning problem (sentence simplification) * Connecting a mobile device to the laptop browser’s mic using WebSockets * Real-time text analysis to determine new elements ## What's next for ImpromPPTX * Predict what the user intends to say next * Scraping Primary sources to automatically add citations and definitions. * Improving text summarization with word reordering and synonym analysis.
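A minimal sketch of the dependency-tree title idea, using spaCy as a stand-in for the team's part-of-speech tooling (the original implementation is not shown in the writeup): walk out from the root verb and keep a short, high-content sub-phrase.

```python
# Sketch: pick a short (<= 5 word) title sub-phrase from a sentence by
# keeping the root verb plus its subject/object subtrees.
# Library choice (spaCy) and the scoring rule are illustrative, not the
# project's original code.
import spacy

nlp = spacy.load("en_core_web_sm")

def titleify(sentence: str, max_words: int = 5) -> str:
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    keep = {root}
    # Pull in the most informative direct children first.
    for child in sorted(root.children, key=lambda t: -len(list(t.subtree))):
        if child.dep_ in ("nsubj", "dobj", "pobj", "attr"):
            keep.update(child.subtree)
        if len(keep) >= max_words:
            break
    words = [tok.text for tok in sorted(keep, key=lambda t: t.i)][:max_words]
    return " ".join(words).title()

print(titleify("Our new model dramatically reduces energy usage in data centers"))
```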
## Inspiration

Whenever people are in the car, especially on long road trips, they get bored. Many people get motion sick, and constantly staring at the text in a book can trigger it. Our app enables anyone who prefers having a book read to them, instead of reading it themselves, to do just that. There is a catch: the voices used in the narration are generated based on sentiment and content analysis of the phrases. Characters therefore act out engaging conversations, making the result a lot more interesting than previously existing automated text-to-audio models. Also, images are projected for each scene, so it also suits younger audiences who like visual effects.

## What it does

Given some text (in the form of screenshots or real-time photos taken with your phone), our web app interprets the text, creates various voices (with different pace, volume, and pitch) based on sentiment and content analysis of each character's phrases, and plays out the scene. At the same time, a picture is generated for each scene and displayed on both the phone screen and the Arduino LCD. This gives users another dimension of information, which is especially useful when kids are using the web app to read books.

## How we built it

We started off with a block diagram linking all components of the app, and we independently tested each component. We used several Google Cloud APIs (Vision, Text-to-Speech, Natural Language) in the process of developing our app; more specifically, we used OCR, sentiment analysis, and content analysis, just to name a few. As we got each component working, we built feature by feature incrementally. The first feature was audio from an image, then varying the pitch/speed based on the sentiment calculation. After that, we worked on content analysis, and using its results we made our own Google Custom Search engine to perform image searches. Then we fed the image search results back to the phone as well as the Arduino. The Arduino receives the BMP version of the image search results and displays each image using a bitmap generated for it. Finally, we made an app that integrates the picture taking, audio output, and visual/image output.

## Challenges we ran into

We initially wanted to use the Raspberry Pi camera as the main camera to take pictures; however, the Raspberry Pis we received at the hackathon couldn't boot up, so we had to resort to using the Photon to receive the BMP file, which caused a lot of additional overhead.

## Accomplishments that we're proud of

The quality of the output is surprisingly good. The conversations are engaging and very entertaining to the listener. We also flip the gender of the characters randomly to make it more interesting.

## What we learned

1. Clear up questions regarding APIs early on during a hackathon to avoid wasting time on something easy to solve.
2. Start the brainstorming for ideas more systematically. For example, set a deadline for deciding on the project idea, so that time goes into the actual coding/design of the project rather than extended brainstorming.
3. Talk to other teams about how to use certain tools! Do not limit yourself to asking for advice from the mentors. Other teams and other hackers are usually happy to help and very insightful!

## What's next for Visual Audio

1. Add AR effects to the images, so that instead of being displayed on the phone screen/Arduino LCD, the images are projected using AR technologies, making the experience even more engaging, especially for children.
2. Add more robust context analysis for gender inference.
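A minimal sketch of the sentiment-driven narration step with Google Cloud Text-to-Speech: a sentiment score in [-1, 1] (as the Natural Language API returns) nudges pitch and speaking rate. The mapping from sentiment to voice parameters is our own illustrative choice, not the project's tuned values.

```python
# Sketch: synthesize a character's line with pitch and speaking rate driven
# by a sentiment score in [-1, 1]. The sentiment-to-voice mapping is an
# illustrative assumption.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

def speak(line: str, sentiment: float) -> bytes:
    voice = texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.NEUTRAL,
    )
    audio_config = texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3,
        pitch=4.0 * sentiment,                 # semitones: happier -> higher
        speaking_rate=1.0 + 0.25 * sentiment,  # happier -> slightly faster
    )
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=line),
        voice=voice,
        audio_config=audio_config,
    )
    return response.audio_content  # MP3 bytes ready for playback
```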
## Inspiration

Cargo ships are the primary way of transporting produce across oceans. However, produce is sensitive by nature and prone to degrading in transit, resulting in unnecessary waste. About 33% of global fresh produce is thrown away because its quality degrades during shipment. Additionally, every year at the US-Mexico border, 35-45 million pounds of fruits and vegetables are thrown away for not meeting standards. This hurts consumers and suppliers alike.

## What it does

It uses sensors, computer vision, and ML to improve the efficiency of current supply chain management. Using IoT, we build smart containers that can detect whether produce is fresh, and we create a bidding system based on how fresh the produce is, using it to distribute shipments.

[Input]: Suppliers create a product page for their shipment and sync the device to it.

[Bidding]: Prospective buyers bid on the product shipment by inputting two parameters: their bid amount and their minimum acceptable freshness threshold. After winning the bid, the shipment is locked to them.

[Monitoring/Rebidding]: The order is shipped and monitored by the hardware, which provides interested parties with details such as location, humidity, temperature, CO2, and the like. If the shipment falls below the buyer's freshness threshold, the customer can back out and re-open bidding. Otherwise, it works like a typical B2B ordering site and the shipment remains locked to the customer.

The freshness score is calculated using an ensemble machine learning approach that combines multivariate ordinary least squares and computer vision to predict how fresh the produce is. The image is updated in the database every hour.
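A minimal sketch of the ensemble freshness score: an OLS model over sensor readings blended with a vision-based score. The feature set, training data, and blend weight are illustrative assumptions, not the project's values.

```python
# Sketch: blend a sensor-based OLS prediction with a vision-based score into
# one freshness number. Feature set, weights, and data are all illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy training data: [temperature C, humidity %, CO2 ppm] -> freshness 0..1
X = np.array([[4, 90, 400], [10, 80, 800], [18, 60, 1500], [25, 50, 2500]])
y = np.array([0.95, 0.80, 0.45, 0.15])
ols = LinearRegression().fit(X, y)

def freshness(sensor_reading, vision_score, w_sensor=0.6):
    """Ensemble: weighted mix of the OLS output and a CV model's 0..1 score."""
    sensor_score = float(ols.predict([sensor_reading])[0])
    blended = w_sensor * sensor_score + (1 - w_sensor) * vision_score
    return float(np.clip(blended, 0.0, 1.0))

print(freshness([8, 85, 700], vision_score=0.7))
```

Buyers' freshness thresholds would then be compared against this blended score on each hourly update.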
## Inspiration

Almost 2.2 million tonnes of edible food is discarded each year in Canada alone, resulting in over 17 billion dollars in waste. A significant portion of this is due to the simple fact that keeping track of expiry dates for the wide range of groceries we buy is, put simply, a huge task. While brainstorming ideas to automate the management of these expiry dates, discussion came to the increasingly outdated usage of barcodes for Universal Product Codes (UPCs): when the largest QR codes can store [thousands of characters](https://stackoverflow.com/questions/12764334/qr-code-max-char-length), why use so much space for a 12-digit number? By building upon existing standards and the everyday technology in our pockets, we're proud to present **poBop**: our answer to food waste in homes.

## What it does

Users are able to scan the barcodes on their products and enter the expiration date written on the packaging. This information is securely stored under their account, which keeps track of what the user has in their pantry. When products have expired, the app triggers a notification. As a proof of concept, we have also made several QR codes which show how the same UPC codes can be encoded alongside expiration dates in a similar amount of space, simplifying the scanning process.

In addition to expiration-date tracking, the app can recommend recipes based on what is currently in the user's pantry. If no recipes are possible with the provided list, it instead recommends the recipes with the fewest missing ingredients (see the sketch below).

## How we built it

The UI was made with native Android, with the exception of the scanning view, which made use of the [code scanner library](https://github.com/yuriy-budiyev/code-scanner). Storage and hashing/authentication were taken care of by [MongoDB](https://www.mongodb.com/) and [Bcrypt](https://github.com/pyca/bcrypt/) respectively. Finally, in regards to food, we used [Buycott](https://www.buycott.com/)'s API for UPC lookup and [Spoonacular](https://spoonacular.com/) to look for valid recipes.

## Challenges we ran into

As there is no official API or publicly accessible database for UPCs, we had to test and compare multiple APIs before determining that Buycott had the best support for Canadian markets. This caused some issues, as a lot of processing was needed to connect product information between the two food APIs. Additionally, our decision to completely segregate the Flask server and the apps occasionally resulted in delays when prioritizing which endpoints should be written first, how they should be structured, etc.

## Accomplishments that we're proud of

We're very proud that we were able to use technology to do something about an issue that bothers everyone on our team on a frequent basis (something something cooking hard) and also has large-scale impacts at the macro scale. Some of our members were also very new to working with REST APIs, but coded well nonetheless.

## What we learned

Flask is a very easy-to-use Python library for creating endpoints and working with REST APIs. We learned to work with endpoints and web requests using Java/Android. We also learned how to use local databases in Android applications.

## What's next for poBop

We thought of a number of features that unfortunately didn't make the final cut, such as tracking the nutrition of the items you have stored and consumed; by tracking nutrition information, the app could also act as a nutrient planner for the day. The application may also include a shopping list feature, where you can quickly add items you are missing from a recipe, or use it to help track nutrition for your upcoming weeks. We also hope to let users add more details to the items they are storing, such as notes on what they plan to use them for, or item quantities. Some smaller features that didn't quite make it include a notification when the expiry date is getting close, and data sharing for people sharing a household. We are also thinking about creating a web application, so that poBop is more widely available.

Finally, we strongly encourage you, if you are able and willing, to consider donating to food waste reduction initiatives and hunger elimination charities. One of many ways to get started can be found here:

<https://rescuefood.ca/>

<https://secondharvest.ca/>

<https://www.cityharvest.org/>

# Love,

# FSq x ANMOL
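A minimal sketch of the "fewest missing ingredients" ranking described in What it does (the real app pulls candidates from Spoonacular; the data here is made up).

```python
# Sketch: rank candidate recipes by how many ingredients are missing from
# the pantry; recipes with zero missing come first. Data is illustrative.
def rank_recipes(pantry, recipes):
    """pantry: set of ingredient names; recipes: dict name -> ingredient set."""
    scored = [
        (len(ingredients - pantry), name)  # count of missing ingredients
        for name, ingredients in recipes.items()
    ]
    return [name for missing, name in sorted(scored)]

pantry = {"eggs", "flour", "milk", "butter"}
recipes = {
    "pancakes": {"eggs", "flour", "milk"},
    "omelette": {"eggs", "butter", "cheese"},
    "cookies": {"flour", "butter", "sugar", "chocolate"},
}
print(rank_recipes(pantry, recipes))  # ['pancakes', 'omelette', 'cookies']
```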
## Inspiration

We wanted to make a computer vision app that detects whether a fruit is good to eat based on its discolouration/irregularities, after picking up a few discoloured/bruised oranges at lunch on the first day of McHacks.

## What it does

It uses scikit-image to detect edges with the Canny algorithm, filtered with a Gaussian to subtract noise. It uses the edges to create a mask that filters out the background, which it feeds into a blob detection (difference of Gaussians) method with specific parameters to extract the moldy blobs/irregularities. The final result plots the original image, the edges detected on it, the mask applied to the edge detection, and the blobs found using the DoG method. The backend is done in Python, and the frontend is a basic UI for uploading jpegs/jpgs/pngs.

## Challenges we ran into

Edge inconsistencies are harder to detect than we thought. We originally wanted to determine how far a bruised orange deviates from a 'perfect' orange shape.

## What's next for produce sort

Making a better user interface (an actual landing page) and maybe using TensorFlow to get a better idea of which food is safe to consume based on its appearance.
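A minimal sketch of the described pipeline with scikit-image: Canny edges, a filled mask to drop the background, then difference-of-Gaussians blob detection. The parameter values and file name are illustrative, not the project's tuned settings.

```python
# Sketch of the pipeline: Canny edges, a filled foreground mask, then
# difference-of-Gaussians blob detection on the masked fruit.
# Parameter values are illustrative, not the tuned ones from the project.
from scipy import ndimage
from skimage import color, feature, io

image = io.imread("orange.jpg")
gray = color.rgb2gray(image)

# 1. Canny edge detection (sigma controls the Gaussian pre-smoothing).
edges = feature.canny(gray, sigma=2.0)

# 2. Fill the edge outline to build a foreground mask.
mask = ndimage.binary_fill_holes(edges)

# 3. Difference-of-Gaussians blob detection on the masked fruit.
blobs = feature.blob_dog(gray * mask, min_sigma=2, max_sigma=15, threshold=0.1)
print(f"{len(blobs)} candidate bruise/mold blobs")  # rows: (y, x, sigma)
```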
The Book Reading Bot (brb) programmatically flips through physical books and reads the pages aloud using TTS. There are also options to download the PDF or audiobook.

I read an article in [The Spectator](http://columbiaspectator.com/) about how some low-income students cannot afford textbooks and actually spend time at the library manually scanning books on their phones. I realized this was a perfect opportunity for technology to help people and eliminate a repetitive task: all you do is click start on the web app, and the software and hardware do the rest!

Another use case is young children who cannot read yet. Using brb, they can read Dr. Seuss alone! As kids nowadays spend too much time on television, I hope this might lure them back to children's books.

At a high level, the web app (Bootstrap) sends an image to a Flask server, which performs OCR and TTS.
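A minimal sketch of the page-to-speech path on the Flask server. The library choices (pytesseract for OCR, gTTS for TTS) are plausible stand-ins, not necessarily what brb used.

```python
# Sketch of the page -> speech path: a Flask endpoint that OCRs an uploaded
# page image and returns an MP3. Library choices (pytesseract, gTTS) are
# plausible stand-ins, not necessarily what brb used.
import io

import pytesseract
from flask import Flask, request, send_file
from gtts import gTTS
from PIL import Image

app = Flask(__name__)

@app.route("/read-page", methods=["POST"])
def read_page():
    page = Image.open(request.files["page"].stream)
    text = pytesseract.image_to_string(page)       # OCR the photographed page
    audio = io.BytesIO()
    gTTS(text=text, lang="en").write_to_fp(audio)  # TTS to an in-memory MP3
    audio.seek(0)
    return send_file(audio, mimetype="audio/mpeg", download_name="page.mp3")
```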
## What it does "ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points. ## How We built it Our project is comprised of many interconnected components, which we detail below: #### Formatting Engine To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required. #### Voice-to-speech We use Google’s Text To Speech API to process audio on the microphone of the laptop. Mobile Phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. The Text To Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed. #### Topic Analysis Fundamentally we needed a way to determine whether a given sentence included a request to an image or not. So we gathered a repository of sample sentences from BBC news articles for “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s Deep Learning text classificiation library, FastText, to train a custom NN that could perform text classification. #### Image Scraping Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen. #### Graph Generation Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time. #### Sentence Segmentation When we receive text back from the google text to speech api, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis), some trouble because the text is grammatically incorrect. We use a sequence to sequence transformer architecture, *seq2seq*, and transfer learned a new head that was capable of classifying the borders between sentences. This was then able to add punctuation back into the text before the rest of the processing pipeline. #### Text Title-ification Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title. 
#### Text Summarization When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous. #### Mobile Clicker Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets. #### Internal Socket Communication In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides. ## Challenges We ran into * Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis. * The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop. ## Accomplishments that we're proud of * Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques. * Working on an unsolved machine learning problem (sentence simplification) * Connecting a mobile device to the laptop browser’s mic using WebSockets * Real-time text analysis to determine new elements ## What's next for ImpromPPTX * Predict what the user intends to say next * Scraping Primary sources to automatically add citations and definitions. * Improving text summarization with word reordering and synonym analysis.
## Inspiration

In recent years, especially post-COVID, online shopping has become extremely common. One big issue when shopping online is that users are unable to try on clothes before ordering them. This results in people getting clothes that end up not fitting or not looking great, which is something nobody wants. This gave us the inspiration to create Style AI as a way to let people try on clothes virtually before ordering them online.

## What it does

Style AI takes a photo of you, analyzes the clothes you are currently wearing, and gives detailed clothing recommendations of specific brands, shirt types, and colors. Then, the user has the option to try on each of the recommendations virtually.

## How we built it

We used OpenCV to capture a photo of the user. The image is then passed to the Gemini API to generate a list of clothing recommendations. These recommendations are fed into the Google Shopping API, which uses Google Search to find where the user can buy the recommended clothes. We then filter the results for clothes in the correct image format. The image of the shirt is superimposed onto a live OpenCV video stream of the user. To overlay the shirt on the user, we segment the shirt image into three sections: left sleeve, center, and right sleeve. We also perform segmentation on the user using MediaPipe. Then we warp each segment of the shirt onto the user's body in the video stream. We made the website using Reflex.

## Challenges we ran into

The shirt overlay was much more challenging than expected. At first, we planned to use a semantic segmentation model for the user's shirt, because then we could warp and transform the shape of the real shirt onto the shirt mask on the user. The issue was that semantic segmentation was very slow, so the shirt couldn't be overlaid on the user in real time. We solved this with a combination of OpenCV functions so the shirt could be overlaid in real time.

## Accomplishments that we're proud of

We are proud of every part of our project, since each required lots of research, and we are all proud of our individual contributions. We are also proud that we overcame many challenges and adapted when things went wrong. Specifically, we were proud to use a completely new framework, Reflex, which allowed us to work natively in Python across both the frontend and the backend.

## What we learned

We learned how to use Reflex to create websites, and how to work with several APIs. We also learned about more functionality in MediaPipe and OpenCV while writing the shirt overlay code.

## What's next for Style AI

- Expand Style AI to all types of clothing, such as pants and shoes.
- Implement a "bulk order" functionality allowing users to order across online retailers.
- Add more personalized recommendations.
- Enable real-time, voice-assisted chatbot conversations to simulate talking to a fashion expert in person.
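A minimal sketch of the real-time overlay idea: segment the person with MediaPipe and alpha-blend a shirt image into the frame inside the person mask. The sleeve/center warping described above is omitted, and the blend weights, file names, and mask threshold are illustrative assumptions.

```python
# Sketch: segment the person with MediaPipe and alpha-blend a shirt image
# over the frame inside the person mask. The per-segment warping from the
# writeup is omitted; sizes, paths, and blend weights are illustrative.
import cv2
import mediapipe as mp
import numpy as np

segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)
shirt = cv2.imread("shirt.png")  # pre-cropped shirt image (assumed path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    person = result.segmentation_mask > 0.5  # boolean person mask
    shirt_resized = cv2.resize(shirt, (frame.shape[1], frame.shape[0]))
    # Blend the shirt into the frame only where a person was detected.
    frame[person] = (0.4 * frame[person]
                     + 0.6 * shirt_resized[person]).astype(np.uint8)
    cv2.imshow("Style AI (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```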
## Inspiration As enrollment season approached, one of our team members found themselves endlessly scrolling through Reddit, desperately seeking insights into specific Math 104 professors. Their exhaustive search left them empty-handed, prompting us to consider how valuable the wealth of knowledge and firsthand experiences shared by students on Reddit could be for the wider academic community. With the recent surge in Reddit's popularity, it only seemed fitting to harness this collective wisdom for the greater good. ## What it does Recap is a platform designed to assist students in finding relevant information about courses and professors. Users input the name of a course and professor, and Recap provides a curated selection of comments and posts related to that class. It also offers insights such as sentiment analysis for each comment and an assessment of the overall difficulty of the class. ## How we built it Our journey began with the integration of the Reddit API, known as PRAW. We set up a developer account on Reddit to gain access to their database and enable us to perform queries effectively. PRAW's innate ability to sort posts and comments by relevance was a substantial time-saver, allowing us to focus on extracting insights. We devised a preliminary algorithm for measuring course difficulty, which can undoubtedly be refined in the future. To determine sentiment, we employed Vader Sentiment for sentiment analysis, and for the sake of clarity and accessibility, we transformed sentiment scores into emojis. While some of the metrics were simplified due to time constraints, they served our purpose well. ## Challenges we ran into Initially, we explored the possibility of using Convex for our project, but we soon realized that we were faced with a learning curve that exceeded our technical expertise, particularly concerning Backend TypeScript. Reflex appeared to be a promising alternative, but server connection issues hindered our progress. Two-thirds of our team underestimated the complexity of building a web app, especially the Frontend development aspect. ## Accomplishments that we're proud of We take pride in bridging the Reddit community with students seeking invaluable information that was previously shared only through word of mouth. Our sentiment analysis and course difficulty assessments equip students with realistic expectations, enabling them to make informed academic choices confidently. ## What we learned This journey revealed that full-stack development is significantly more challenging than it might appear at first. Despite our limited knowledge of Frontend technologies like CSS and HTML, we managed to create a functional web app by using Flask and adding features like emojis in our final table view. We also learned about the rapid pace of real-world technology. In terms of data analysis, we made intriguing discoveries about the Berkeley subreddit (and about how much harder our classes are to Stanfords') and the vocabulary used to describe course experiences. We also encountered controversial insights about the popularity of certain professors compared to others. ## What's next for Recap Recap is poised to become an essential tool for every student. We envision its integration with websites like Berkeleytime to expand its reach. As we continue to refine our product, we aim to enhance its mathematical underpinnings, comprehend the limitations and contexts of our results, and improve accuracy through rigorous testing. 
There are glaring issues with data cleaning and accessibility that need to be fixed. While assessing course difficulty remains subjective, we are committed to refining our approach and striving for greater precision.
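For the curious, here is a rough sketch of the retrieval-plus-sentiment pipeline described above, assuming PRAW credentials and using hypothetical course/professor inputs; our production scoring and emoji mapping differ in detail.

```python
import praw
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Credentials and the professor name are placeholders.
reddit = praw.Reddit(client_id="YOUR_ID", client_secret="YOUR_SECRET",
                     user_agent="recap-demo")
analyzer = SentimentIntensityAnalyzer()

def recap(course: str, professor: str, limit: int = 10):
    for post in reddit.subreddit("berkeley").search(
        f"{course} {professor}", sort="relevance", limit=limit
    ):
        score = analyzer.polarity_scores(post.title + " " + post.selftext)
        # compound runs from -1 (very negative) to +1 (very positive);
        # we mapped these bands to emojis in the table view.
        c = score["compound"]
        emoji = "😊" if c > 0.05 else ("😠" if c < -0.05 else "😐")
        print(emoji, round(c, 2), post.title)

recap("Math 104", "Professor X")  # hypothetical inputs
```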
## Inspiration Loneliness affects countless people and, over time, it can have significant consequences on a person's mental health. One quarter of Canada's 65+ population lives completely alone, which has been scientifically connected to very serious health risks. With the growing population of seniors, this problem only seems to be growing worse, and so we wanted to find a way to help both elderly citizens take care of themselves and their loved ones to take care of them. ## What it does Claire is an AI chatbot with a UX designed specifically for the less tech-savvy elderly population. It helps seniors to journal and self-reflect, both proven to have mental health benefits, through a simulated social experience. At the same time, it allows caregivers to stay up-to-date on the emotional wellbeing of the elderly. This is all done with natural language processing, used to identify the emotions associated with each conversation session. ## How we built it We used a React front-end served by a Node.js back-end. Messages were sent to Google Cloud's natural language processing API, where we could identify emotions for recording and entities for enhancing the simulated conversation experience. Information on user activity and profiles is maintained in a Firebase database. ## Challenges we ran into We wanted to use speech-to-text so as to reach an even broader seniors' market, but we ran into technical difficulties with streaming audio from the browser in a consistent way. As a result, we chose simply to have a text-based conversation. ## Accomplishments that we're proud of Designing a convincing AI chatbot was the biggest challenge. We found that the bot would often miss contextual cues and interpret responses incorrectly. Over the course of the project, we had to tweak how our bot responded and prompted conversation so that these lapses were minimized. Also, as developers, it was very difficult to design for the needs of a less tech-savvy target audience. We had to make sure our application was intuitive enough for all users. ## What we learned We learned how to work with natural language processing to follow a conversation and respond appropriately to human input. As well, we got to further practise our technical skills by applying React, Node.js, and Firebase to build a full-stack application. ## What's next for claire We want to implement accurate speech-to-text and text-to-speech functionality. We think this is the natural next step to making our product more widely accessible.
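As an illustration of the sentiment step, here is a minimal sketch using the Google Cloud Natural Language client, assuming credentials are configured via the standard environment variable; the per-session recording logic is omitted.

```python
from google.cloud import language_v1

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set in the environment.
client = language_v1.LanguageServiceClient()

def message_sentiment(text: str) -> float:
    doc = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": doc})
    # score is in [-1, 1]; we log one value per conversation session
    # so caregivers can follow emotional wellbeing over time.
    return response.document_sentiment.score
```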
## **Inspiration:** Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn. ## **What it does:** Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations. ## **How we built it**: We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders. ## **Challenges we ran into**: Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning. ## **Accomplishments that we're proud of**: Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One aspect of our implementation that separates us from other approaches is that we allow the user to perform ICL (in-context learning), a feature that many large language models don't expose to the user seamlessly. ## **What we learned**: We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
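A hedged sketch of what the feedback endpoint might look like; the route name, payload fields, and prompt wording are illustrative assumptions, and the calls use the pre-1.0 `openai` SDK style.

```python
import openai
from flask import Flask, jsonify, request

openai.api_key = "YOUR_API_KEY"  # placeholder
app = Flask(__name__)

@app.route("/feedback", methods=["POST"])
def feedback():
    data = request.get_json()
    screen_text = data["screen_text"]  # e.g. extracted via Mathpix
    question = data["transcript"]      # e.g. from Speech-to-Text
    reply = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a patient tutor."},
            {"role": "user", "content": (
                f"Material on screen:\n{screen_text}\n\n"
                f"Student's spoken question: {question}"
            )},
        ],
    )
    return jsonify({"answer": reply.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```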
losing
## Inspiration Our team focuses primarily on implementing innovative technologies to spark positive change. In line with these ideals, we decided to explore the potential impact of text-reading software that provides an additional layer of accessibility for blind and deafblind individuals. Through these efforts, we discovered that despite previous attempts to solve similar problems, solutions were often extremely expensive and incongruous. According to many visually challenged advocates, these technologies often lacked real-world application and were specific to online texts or readings, which limited their opportunities in everyday tasks like reading books or scanning over menus. Upon further research into afflicted groups, we discovered that there was additionally a large population of people who were both deaf and blind, which prevented them from utilizing any form of auditory input as an alternative, significantly obstructing their means of communication. Employing a very human-centered design rooted in various personal accounts and professional testimony, we were able to develop a universal design that allows the visually and dual-sensory impaired to experience the world from a new lens. By creating a handheld text-to-braille and speech generator, we are able to revolutionize the prospects of interpersonal communication for these individuals. ## What it does This solution utilizes a single piece with two modules: a video camera to decipher text, and a set of solenoids that imitates a standard Grade 2 Braille grid. This portable accessory is intended to be utilized by a visually impaired or deafblind individual when they're attempting to analyze a physical text. This three-finger supplement, equipped with a live-action camera and sensitive solenoid components, is capable of utilizing a live camera feed to discern the diction of a physical text. The scanned text is then passed to an AI application to clean up the text for either auditory or sensory output in the form of TTS and braille. The text is then adapted into an audio format through a web application or to the classic 6 cells present in the Braille dictionary. Users are given a brief moment to make sense of each braille letter before the system automatically iterates through the remainder of the text. This technology effectively provides users with an alternative method to receive information that isn't ordinarily accessible to them, granting a more authentic and amplified understanding of the world around them. In this unique application of these technologies, those who are hard of seeing and/or dual-sensory impaired receive a more genuine appreciation for texts. ## How we built it As our project required two extremely different pieces, we decided to split up our resources in hopes of tackling both problems at the same time. Regardless, we needed to set firm goals and plan out our required resources and timeline, which helped us stay on schedule and formulate a final product that fully utilized our expertise. In terms of hardware, we were somewhat limited for the first half of the hackathon, as we had to purchase many of our materials and were unable to complete much of this work until later. We started by identifying a potential circuit design and creating a rigid structure to house our components. From there, we spent a large amount of time actually implementing our theoretical circuit and applying it to our housing model, in addition to cleaning the whole product up.
For software, we mostly had problems with connecting each of the pieces after building them out. We first created an algorithm that could take a camera feed and produce a coherent string of text. This would then be run through an AI model to clean up any gibberish. Finally, these texts would be sent through to either be read out loud via TTS or be compared against a dictionary to create a binary code that would dictate the on/off states of our solenoids. Finally, we prototyped our product and tested it to see what we could improve in our final implementation to both increase efficiency and decrease latency. ## Challenges we ran into This project was extremely technical and ambitious, which meant that it was plagued with difficulties. As a large portion of the project relied on its hardware and on implementing complementary materials to formulate a cohesive product, there were countless problems throughout the building phase. We often had incompatible parts, whether cables, voltage outputs/inputs, or even sizing and scaling issues, and we were constantly scrambling to alter, scavenge, and adapt materials for uncommon use cases. Even our main board didn't produce enough power, leading to an unusual usage of a dehumidifier charger and balled-up aluminum foil as a makeshift power bank. All of these mechanical complexities, followed by a difficult software end of the project, led to an innovative and reworked solution that maintained applicative efficiency. These modifications even continued just hours before the submission deadline, when we revamped the entire physical end of our project to make use of newly acquired materials using a more efficient modeling technique. These last-second improvements gave our product a more polished and adept edge, making for a more impactful and satisfying design. Software-wise, we also strove to uncover the underappreciated features of our various APIs and tools, which often didn't coincide with our team's strengths. As we had to simultaneously build out an effective product while troubleshooting our software side, we often ran into roadblocks and struggles. Regardless, we were able to overcome these adversities and produce an impressive result. ## Accomplishments that we're proud of We are proud that we were able to overcome the various difficulties that arose throughout our process and to still showcase the level of success that we did, even given such a short timeframe. Our team came in with some members having never done a hackathon before, and we made extremely ambitious goals that we were unsure we could uphold. However, we were able to effectively work as a team to develop a final product that clearly represents our initial intentions for the project. ## What we learned As a result of the many cutting-edge sponsors and new technological constraints, our whole team was able to draw from new, more effective tools to increase the efficiency and quality of our product. Through our careful planning and consistent collaboration, we experienced the future of software and, because of the cross-disciplinary nature of this project, progressed in our intricate technical knowledge both within our fields and across specializations. Additionally, we became more flexible with what materials we needed to build out our hardware applications and especially utilized new TTS technologies to amplify the impact of our projects. In the future, we intend to continue to develop these crucial skills that we obtained at Cal Hacks 11.0, working towards a more accessible future.
## What's next for Text to Dot We would like to work on integrating a more refined design for the hardware component of our project. Unforeseen circumstances with the solenoids led to our final design needing to be adjusted beyond the original model, which could be rectified in future iterations.
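To make the text-to-cell step concrete, here is an illustrative sketch rather than our actual firmware: each character maps to a six-dot pattern, and a callback energizes the corresponding solenoids. The partial dot table and the hold time are assumptions.

```python
import time

# Dots are numbered 1-6 within a cell; a listed dot means "solenoid raised".
# Only a few letters are shown here; the real table covers the alphabet,
# numbers, and Grade 2 contractions.
BRAILLE = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def show_text(text, raise_dots, hold_seconds=1.5):
    """Step through the text, holding each cell briefly for the reader."""
    for ch in text.lower():
        raise_dots(BRAILLE.get(ch, ()))  # callback energizes the solenoids
        time.sleep(hold_seconds)         # give the user a moment per letter
    raise_dots(())                       # drop all pins at the end

# Example with a stand-in for the hardware driver:
show_text("abcde", lambda dots: print("raised dots:", dots))
```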
## Inspiration We wanted to create a device that eases the lives of people who have disabilities, and with AR becoming mainstream it seemed only proper to build it. ## What it does Our AR headset converts speech to text and then displays it in real time on the monitor, allowing the user to read what the other person is telling them. This makes communication easier, as the user no longer has to read lips. ## How we built it We used the IBM Watson API to convert speech to text. ## Challenges we ran into We first attempted to set up our system using Microsoft's Cortana and its available API, but after struggling to get the libraries to work we had to resort to an alternative method. ## Accomplishments that we're proud of Being able to use IBM Watson and Unity to create a working prototype, using the Kinect as the web camera and the Oculus Rift as the headset, thus creating an AR headset. ## What we learned ## What's next for Hear Again We want to make the UI better, improve the speed of the speech-to-text recognition, and port our project to the Microsoft HoloLens for the most nonintrusive experience.
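Our prototype was wired up in Unity, but the shape of the Watson call is easy to show; here is a minimal Python sketch using the official `ibm-watson` SDK, with placeholder credentials and audio.

```python
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Credentials, service URL, and the audio clip are placeholders.
stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
stt.set_service_url("YOUR_SERVICE_URL")

with open("clip.wav", "rb") as audio:
    result = stt.recognize(audio=audio, content_type="audio/wav").get_result()

# Each result chunk carries ranked alternatives; show the top transcript,
# which the headset would render as on-screen text.
for chunk in result["results"]:
    print(chunk["alternatives"][0]["transcript"])
```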
## Inspiration We created this app to address a problem that our creators were facing: waking up in the morning. As students, the stakes of oversleeping can be very high. Missing a lecture or an exam can set you back days or greatly detriment your grade. It's too easy to sleep past your alarm. Even if you set multiple, you can simply turn them all off, knowing that there is no human intention behind each alarm. It's almost as if we've forgotten that we're supposed to get up after our alarm goes off! In our experience, what really jars you awake in the morning is another person telling you to get up. Now, suddenly, there is consequence and direct intention behind each call to wake up. Wakey simulates this in an interactive alarm experience. ## What it does Users sync their alarms up with their trusted peers to form a pact each morning to make sure that each member of the group wakes up at their designated time. One user sets an alarm code with an associated common wakeup time. The user's peers can use this alarm code to join their alarm group. Everybody in the alarm group will experience the same alarm in the morning. After each user hits the button when they wake up, they are sent to a soundboard interface, where they can hit buttons to send real-time sound effects to try to wake those who are still sleeping. Each time one user in the server hits a sound effect button, that sound registers on every device, including their own, providing auditory feedback that they have indeed successfully sent a sound effect. Ultimately, users exit the soundboard to leave the live alarm server and go back to the home screen of the app. They can finally start their day! ## How we built it We built this app using React Native as a frontend, Node.js as the server, and Supabase as the database. We created files for the different screens that users will interact with in the front end, namely the home screen, goodnight screen, wakeup screen, and the soundboard. The home screen is where they set an alarm code or join using someone else's alarm code. The "goodnight screen" is what screen the app will be on while the user sleeps. When the app is on, it displays the current time, when the alarm is set to go off, who else is in the alarm server, and a warm message: "Goodnight, sleep tight!". Each one of these screens went through its own UX design process. We also used Socket.io to establish connections between those in the same alarm group. When a user sends a sound effect, it goes to the server, which broadcasts it to all the users in the group (sketched below). As for the backend, we used Supabase as a database to store the users, alarm codes, current time, and the wake-up times. We connected the front and back end and the app came together. All of this was tested on our own phones using Expo. ## Challenges we ran into We ran into many difficult challenges during the development process. It was all of our first times using React Native, so there was a little bit of a learning curve in the beginning. Furthermore, incorporating sockets with the project proved to be very difficult because it required a lot of planning and experimenting with the server/client relationships. The alarm ringing also proved to be surprisingly difficult to implement. If the alarm was left to ring, the "goodnight screen" would continue ringing and would not terminate. Many of React Native's tools, like setInterval, didn't seem to solve the problem. This was a problematic and recurring issue.
Secondly, the database in Supabase was also quite difficult and time-consuming to connect, but in the end, once we set it up, using it simply entailed brief SQL queries. Thirdly, setting up the front end proved quite confusing and problematic, especially when it came to adding alarm codes to the database. ## Accomplishments that we're proud of We are super proud of the work that we've done developing this mobile application. The interface is minimalist yet attention-grabbing when it needs to be, namely when the alarm goes off. Then, the hours of debugging, although frustrating, were very satisfying once we finally got the app running. Additionally, we greatly improved our understanding of mobile app development. Finally, the app is also just amusing and fun to use! It's a cool concept! ## What we learned As mentioned before, we greatly improved our understanding of React Native, as for most of our group, this was the first time using it for a major project. We learned how to use Supabase and Socket.io. Additionally, we improved our general JavaScript and user experience design skills as well. ## What's next for Wakey We would like to put this app on the iOS App Store and the Android Play Store, which would take more extensive and detailed testing, especially as to how the app will run in the background. Additionally, we would like to add some other features, like a leaderboard for who gets up most immediately after their alarm goes off, who sends the most sound effects, and perhaps other ways to rank the members of each alarm server. We would also like to add customizable sound effects, where users can record themselves or upload recordings that they can add to their soundboards.
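Our server is Node.js + Socket.io, but the broadcast pattern is simple enough to sketch; this `python-socketio` version is purely illustrative, and the event and field names are assumptions.

```python
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server, e.g. eventlet

@sio.event
def join_alarm(sid, data):
    # Group clients by their alarm code so effects stay within one group.
    sio.enter_room(sid, data["alarm_code"])

@sio.event
def sound_effect(sid, data):
    # Echo to the whole room, including the sender, so everyone (and the
    # sender's own device) plays the effect for auditory feedback.
    sio.emit("sound_effect", {"effect": data["effect"]}, room=data["alarm_code"])
```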
partial
## Inspiration & Instructions We wanted to somehow guilt people into realizing the state of their bank accounts by showing them progressive picture reminders as their wallpaper. Hopefully, the people who use our app will want to save more and also maybe increase their earnings by investing in stocks, SPROUTING personal monetary growth. To use our app, you can simply install it on your phone. The APK link is below, and it is fully functional. When you first open Sprout, we ask for your bank account information. We then take you to the next screen, which shows your current balance and lets you set your baseline and goal amounts for your balance. Below that is the current status of your representative plant's health based on these amounts. Be sure to check the toggle to change the wallpaper of your phone to the plant so that you're always aware! You can also navigate to a "How To Invest" page from the menu where you can get up-to-date analytical estimations of how you could earn more money through investing. For a detailed demo, please see our video. ## What it does Sprout is an Android app to help students and the general populace know how their bank account is doing. It reads your current balance, and takes the minimum threshold you don't want your balance to go under and the amount you'd love to see your balance stay above. Then, the app shows you a cute plant representing the state of your bank account: either living comfortably, living luxuriously, or dying miserably (see the sketch below for the thresholding logic). It will update your phone background accordingly so that you're aware at all times. You can also get to a "How To Invest" page, which can briefly educate you on how you could earn more money through investing. ## How we built it Two of us had experience with Android development, so we decided we wanted to make an Android app. We used Android Studio as our IDE and Java as our language of choice. (For our plant designs, we used Adobe Illustrator.) To simulate information about a possible user's account balance, we used the Mint API to fetch financial data. In order to incentivize our users to maybe invest their savings, we used the NASDAQ API to get stock information and used that to project earnings from the user's balance had they invested some of it in the past. We offer some brief advice on how to start investing for beginners as well. ## Challenges we ran into Random small bugs, but we squashed the majority of them. Our biggest problem was thinking of a good idea we would be able to implement well in the time that we had! ## Accomplishments that we're proud of Our app has many features and a great design! ## What we learned We can get a lot done in a short amount of time :^D ## What's next for Sprout? Background app refresh to automatically check as transactions come in so that the most accurate plant can be shown. ## Built With * Java * Android Studio * NASDAQ API * Mint API * Adobe Illustrator (for Designs) ## Try it out Link to APK: <https://github.com/zoedt/yhack-2016/blob/master/app-debug.apk>
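The app itself is written in Java, but the plant-state logic is easy to sketch; here is the thresholding idea in Python, with function and state names that are illustrative only.

```python
def plant_state(balance: float, baseline: float, goal: float) -> str:
    if balance < baseline:
        return "dying miserably"     # below the minimum threshold
    if balance >= goal:
        return "living luxuriously"  # at or above the goal amount
    return "living comfortably"      # somewhere in between

print(plant_state(450.0, 500.0, 2000.0))  # -> "dying miserably"
```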
## Inspiration Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product that has several use cases of social impact. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the conglomerate use cases associated with these technologies. This includes group insurance, financial literacy, and personal investment. ## What it does Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with a decreased, shared risk. ## How we built it * Smart contract for the transfer of funds on the blockchain, made using Solidity * A robust backend and authentication system made using Node.js, Express.js, and MongoDB * An elegant front end made with React Native and Celo's SDK ## Challenges we ran into We were unfamiliar with the tech stack used to create this project and with blockchain technology. ## What we learned We learned many new languages and frameworks. This includes building cross-platform mobile apps on React Native, and the underlying principles of blockchain technology such as smart contracts and decentralized apps. ## What's next for *PoolNVest* Expanding our API to select low-risk stocks and allowing the community to vote upon where to invest the funds. Refining and improving the proof of concept into a marketable MVP and tailoring the UI towards the specific use cases mentioned above.
## Inspiration We wanted to build a technical app that is actually useful. Scott Forstall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also actually be useful. Going to the doctor is inconvenient and not usually immediate, and a lot of times it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and make healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases. This made it very easy for us to choose to build a model using this data that would allow users to self-diagnose skin problems. ## What it does Our ML model has been trained on hundreds of samples of diseased skin to be able to distinguish among a wide variety of malignant and benign skin diseases. We have a mobile app that lets you take a picture of a patch of skin that concerns you, runs it through our model, and tells you what our model classified your picture as. Finally, the picture also gets sent to a doctor with our model's results, and the doctor is allowed to override that decision. This new classification is then rerun through our model to reinforce correct outputs and penalize wrong outputs, i.e., adding a reinforcement learning component to our model as well. ## How we built it We built the ML model in IBM Watson from public skin disease data from ISIC (the International Skin Imaging Collaboration). We have a platform-independent mobile app built in React Native using Expo that interacts with our ML model through IBM Watson's API (a sketch of the classification call follows below). Additionally, we store all of our data in Google Firebase's cloud, where doctors will have access to it to correct the model's output if needed. ## Challenges we ran into Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, and it prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app. ## Accomplishments that we're proud of Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson. ## What we learned Web frameworks are extremely complex, with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform-independent, they can be much harder to use than platform-specific SDKs. ## What's next for AEye Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework. See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679>
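For illustration, here is roughly what the classification call looks like with the (since-retired) Watson Visual Recognition SDK; the classifier ID, version string, and credentials are placeholders.

```python
import json
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

vr = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)

with open("skin_patch.jpg", "rb") as img:
    result = vr.classify(
        images_file=img,
        classifier_ids=["skin_disease_model"],  # placeholder custom model
        threshold=0.0,  # return every class with its confidence
    ).get_result()

print(json.dumps(result, indent=2))
```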
winning
## Inspiration Current study resources for classrooms are not built to support long-term content retention through active learning and effective memory retrieval techniques. This is a detriment to both students and teachers. ### How does this affect students? With the combination of the rise of social media and the after-effects of COVID-19, many students have shortened attention spans and shorter windows of retention, often unable to retain key fundamentals past the unit they learn them in. This can lead to “dangerous knowledge gaps” as they progress to upper grade levels, where mastery of these fundamentals is essential. ### How does this affect teachers? Currently, there is a significant time investment teachers must make to create study materials for their students. Creating and/or finding engaging learning material while also closely tracking students' progress is challenging. Thus, teachers lose time every year trying to tackle these issues. Furthermore, if students lack a fundamental understanding of subjects due to the aforementioned “dangerous knowledge gaps,” teachers become burdened with more responsibility and have to shift their curriculum to meet the needs of their students. This is an important problem because, when students don’t interact with the content in an impactful manner, they become disengaged and don’t commit what they learn to long-term memory. Thus, the burden on teachers to play “catch-up” continues to snowball, especially at higher grade levels. ## What it does Through our user research with various K-12 teachers in the state of Illinois, we were able to see firsthand how COVID-19 and social media impact **student focus** and, consequently, **content retention rates** and **literacy levels**. To address this, we introduce an application that **increases content retention and student engagement in classrooms** by combining the power of **active learning** with **flashcards through AI**. We **re-define the notion of what a flashcard can be** and present it as a **multipurpose, multimedia tool** that drives active learning. ## How it works **Teacher Experience** Teachers create flashcard sets for their classrooms, developing accurate, impactful, and re-usable study materials for students to interface with. We allow teachers to build their curriculum for the future through our platform and create sets with ease. **(a) How can they create effective sets efficiently?** 1. Multimedia support → Can prompt our AI by uploading existing worksheets, quizzes, and exams. Our tool will return a set of flashcards based on the given material. These can be used for unit reviews, literacy practice, or exam wrappers. 2. Generative content → Can prompt our AI through text to generate higher-level flashcard questions based on the levels of Bloom’s Taxonomy the educator wants to focus on * Remember vs Understand vs Apply vs Design 3. Iterative creation: Once sets are created by our AI, through our focus on seamless, user-friendly design, teachers will be able to quickly review and modify the cards and their configurations as they deem fit before publishing. **(b) How can they incorporate active learning into flashcards?** 1. Students are more motivated when they can take more ownership of their learning. We allow students to engage with flashcards through various active-learning formats, including (but not limited to):
1. Written response: Encourage definitions and breakdowns “in your own words” * As the response will be checked using AI, students have the flexibility to explain concepts by putting them into their own words rather than having to practice “word-for-word memorization.” 2. Audio response: Speaking answers to flashcards rather than typing * For literacy acceleration, students can use flashcards to record themselves practicing reading with terms and can be assessed by AI. * For foreign languages, teachers can also choose different focus points, such as pronunciation, grammar, etc., to be tracked by AI 3. Interleaving: Mixing multiple subjects or topics while studying * Teachers can choose to combine x% of older unit sets with newer units so students continuously gain exposure and practice recall. 4. Dual encoding * Can generate images next to the flashcards automatically to help students associate content better through “dual-encoding” **Student Experience** **(a) How do students learn?** 1. Students actively practice recall by interacting with the flashcard in their own words and on their own terms. 2. In the future, students will also be able to access a personalized assistant chatbot that will act as a “teaching assistant or guide,” helping students by providing intermediate guiding questions while they solve problems. **(b) How do we maintain student focus and engagement?** 1. We will be utilizing the Pomodoro technique, which suggests focused sessions of learning with gamified “brain breaks” in between. We encourage healthy gamification where we don’t sacrifice the quality of learning for student engagement. 2. For every X cards or X minutes that students study, they will be able to play a mini drawing game for a small interval of time before beginning their next session. * Their drawings will be turned into an animated badge through generative AI. They will be able to store a collection of badges and contribute to a class gallery, incentivizing their participation. **Classroom Experience** **(a) How can teachers better understand student progress?** 1. Group metrics: Analyzes what concepts the classroom is struggling with and presents the teacher analytics on areas to focus on reviewing 2. Student metrics: Analyzes each student’s performance and highlights to teachers students who may need extra support, providing granularity into each student (i.e., stopped trying, started trying but gave up, literacy issue, calculation errors, etc.) **(b) How do we support various student groups?**
1. Using internationalization and language models for content translation, we make the platform accessible to students and teachers of all languages by providing translation of all content generated on the site ## How we built it We leveraged the OpenAI API to perform a variety of tasks involving multimedia * The **GPT-3.5-turbo LLM** performs (1) generating study sets from given text information, (2) checking student answers against given definitions, and (3) generating explanations and feedback for students + Throughout the process, we also experimented with prompt engineering * The **Whisper-1** model performs speech-to-text recognition to convert students' audio answers to text for the AI to check * The **DALL·E 2** model performs image generation based on student drawings We leveraged **AWS** to interact with multimedia content, notably PNG and PDF files * S3: for temporary storage of user-uploaded files * Textract: for extracting text information from various file types such as PDF and PNG Our web application tech stack includes * Next.js * tRPC * NextAuth: for authentication with Google on GCP * Prisma: ORM for connecting to the database * Chakra UI and Tailwind CSS: for UI components and custom styling * MongoDB: for storing user, classroom, card, and set information ## Challenges we ran into Before working with ChatGPT to generate images based on stickers students drew, we tested what it could do using the chatbot. We would provide an image and a prompt, like “make a kid-friendly cartoon version of the provided image,” which produced promising transformations in the chat interface. We soon realized the OpenAI image generation API was not as robust as the end-user chatbot, and pivoted to temporarily using image masking and manipulation to create abstract art inspired by the students’ drawings. Another challenge was handling file transfer between the web application and the OpenAI API. In our application, we had to handle multimedia including image, audio, and PDF files. To tackle this challenge, we leveraged different technologies: AWS S3 for file storage and access, Textract for file conversion, and the MediaStream API with data-URL manipulation to process media content. ## Accomplishments that we're proud of We’re really happy to see how our work came together on a project with a really large scope. It is intuitive to use and has a clean UI, which we wanted to emphasize since our target users are teachers and young students. Tackling the challenges in handling AI communication with multimedia content was especially exciting. Seeing the drawing minigame work was satisfying: from the image generation to the Pomodoro technique that makes the application engaging and good for retention. ## What we learned We learned a lot about AI, prompt engineering, our target market, handling multimedia content, and developing a good UI/UX. ## What's next for ActiveCard Our next steps include polishing up the application and user testing. As we have developed an MVP of our web app, we will get in contact with our teacher contacts and test ActiveCard out in their classrooms. We hope to get some early adopters and flesh out our metrics dashboard to convince districts to pick up the tool, at which point we can start monetizing the platform. Some larger features we plan to develop for a V1 of the application after user testing are interleaving sets and spaced repetition. These techniques will further utilize active learning in our application.
We will also switch to using more specialized AI models to target each of our specific tasks (e.g., image generation, speech recognition, etc.). In addition, we'll have our tool generate more flashcards from given content (e.g., worksheets). Finally, we plan to create a more cohesive classroom experience by allowing students to view other students' generated custom badges in a gallery view, and we will add additional group-study features for peer-to-peer learning. We have designs for how the application will look after MVP user testing. We're super excited to see where ActiveCard goes, as we're very passionate about the space, especially after talking to teachers and really understanding how deep-rooted the problem is!
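As a concrete illustration of the card-generation step, here is a hedged sketch using the pre-1.0 `openai` SDK; the prompt wording and JSON shape are assumptions, and our production prompts are more elaborate.

```python
import json
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_cards(material: str, n: int = 8):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write study flashcards."},
            {"role": "user", "content": (
                f"Create {n} flashcards from this material as a JSON list "
                f'of objects with "question" and "answer" keys:\n{material}'
            )},
        ],
    )
    # In practice the reply needs validation; teachers then review and
    # edit the cards before publishing.
    return json.loads(reply.choices[0].message.content)
```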
## Inspiration Wanting to create an application that would benefit as many of our peers as possible. ## What it does Tracks and manages various information about jobs the user has applied to, with various sorting algorithms and easy-to-use forms for tracking new applications. ## How I built it Created an Express server connected to MongoDB to store applications. Deployed the server using DigitalOcean and utilized React to create a single-page application that makes HTTP requests to the server to access the database. ## Challenges I ran into Deciding which features to aim to include in the final submission. For a specific challenge, utilizing asynchronous thunks in Redux proved to be challenging. ## Accomplishments that I'm proud of Utilizing Tailwind CSS on the front end and MongoDB and Mongoose for the back end. No one on the team had prior experience with these technologies, so learning them so seamlessly on the fly is something we are very proud of. ## What I learned Several new frameworks, libraries, and design patterns. Additionally, learning how to utilize Agile development principles as a team. ## What's next for Seekr Finish missing core functionality that unfortunately did not make it into the submission, such as updating entries, application filtering, and implementing user registration and login in the front end to allow multiple users to track job applications. * Submission by Team 32 Jacob Gladman (JGladman#9874) Noah Shap (JohnJohnson#0585) Danielle Dizon (DanielleD#4938) Sunwoo Jeong (Sunwoo#8606)
## My Samy helps: **Young marginalized students** * Anonymous process: Ability to ask any questions anonymously without feeling judged * Get relevant resources: A more efficient process for them to ask for help and immediately receive relevant information and resources * Great design and user interface: Easy-to-use platform with a kid-friendly interface * Tailored experience: The computer model is trained to understand their vocabulary * Accessible anytime: Replaces the need to schedule an appointment and meet someone in person, which can be intimidating. The app is readily available at any time, any place. * Free-to-use platform **Schools** * Allows them to support every student simultaneously * Provides a convenient process as the recommendation system is automatized * Allows them to receive a general report that highlights the most common issues students experience **Local businesses** * Gives them an opportunity to support their community in impactful ways * Allows them to advertise their services Business Plan: <https://drive.google.com/file/d/1JII4UGR2qWOKVjF3txIEqfLUVgaWAY_h/view?usp=sharing>
losing
## Inspiration When ideating on Friday, we were inspired by the topics around providing more accessibility using bleeding-edge technologies. We knew that we wanted to make something genuinely cool and technically challenging, but also something that provides real value to underserved users. We decided to target impaired individuals, as 1 in 9 Americans are physically impaired to some degree, yet remain underserved. We saw a huge gap in the current offerings in the accessibility automation space, and found a problem that was technically challenging but rewarding to solve. ## What it does SpeakEasy is a fully featured AI-powered browser automation tool. It allows you to browse the web and get information without needing to touch or see your browser at all. ## How we built it This project revolves around several different AI agent 'actors' equipped with different tools. The user interfaces with a conversational assistant built on language and voice models, which provides a voice interface for 'talking to' sites and navigating the browser; this assistant sends commands to the browser agent. The browser agent creates a comprehensive knowledge base from each and every site using different segmentation and vision models, providing a deep understanding of which elements can and should be interacted with. This allows us to compile the site down to the core needs of the user and give the user information about the next steps to take while navigating. A simplified sketch of this element-harvesting step follows below. ## Challenges we ran into Traditional large language and multi-modal models simply didn't give us anywhere near the results we wanted; they were much too generalized and inaccurate for our use. Our biggest challenges lay with both sourcing and fine-tuning different models, some of which worked, some of which did not. This was an incredibly time-consuming process, and for quite a while we were unsure that this idea would even be able to be executed with the time and resources we had. We had to take quite aggressive approaches, blending different techniques to get the results we wanted. ## Accomplishments that we're proud of Making it work was definitely the best part of our weekend! The first automated browser session we had was truly a breath of fresh air, showing us that the idea was, at the very least, somewhat valid and possible by the end of the hackathon. ## What we learned This was definitely a great experience to try out a ton of different ML models and blend these with traditional scraping and crawling techniques to not only quickly, but even more accurately, get the results we wanted. ## What's next for SpeakEasy The fact that this can be done should inspire a lot of people! We live in a world where we can make truly revolutionary and applicable projects that could genuinely benefit people, in just 36 hours! We'd love for you to star and try out the repo for yourself; there are detailed instructions for running the project in the README.
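A highly simplified sketch of the element-harvesting step mentioned above, using Playwright; the real system layers segmentation and vision models on top, so treat this as illustrative only.

```python
from playwright.sync_api import sync_playwright

def interactive_elements(url: str):
    """Collect labeled clickable/input elements for the agent to reason over."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        items = []
        for el in page.query_selector_all("a, button, input, select"):
            label = (el.inner_text() or el.get_attribute("aria-label") or "").strip()
            if label:
                items.append({"tag": el.evaluate("e => e.tagName"), "label": label})
        browser.close()
        return items

print(interactive_elements("https://example.com"))
```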
## Inspiration As our lives continue, it is apparent that communication and presentation skills are required and valued, starting from a young age at school and continuing past graduation, hopping from interview to interview. This reality of needing certain abilities to succeed inspired us to develop a program that would support and aid people, ranging from students to adults, in the development and refinement of this necessary skill. ## What it does Speakrly is an online app that improves the user's presentation skills. Through several frameworks and technologies, it is able to analyze the speaker's presentation with respect to its context, as well as the user's hand and eye movement. Through this examination, it can adequately assess the user on these three areas and produce feedback where needed. ## How we built it On VS Code, with the help of GitHub, Router, and React, we were able to develop this beneficial program. We used Cohere Rerank (to identify that all main points of a speech have been addressed) and OpenAI Whisper with Python for the speech recognition aspects, and Material UI for the front-end visuals. A sketch of the coverage check appears below. ## Challenges we ran into Throughout this year's Hack the North, we encountered many challenges. Like many others, we were looking forward to using the AdHawk glasses; however, due to the low quantity available, we ended up empty-handed. Furthermore, the time span of the hackathon added pressure to accomplish something great for the judges to witness. ## Accomplishments that we're proud of Although we faced many obstacles, we were able to mitigate them. Some of the problems we met left us with fewer resources than expected, but we were able to work around these complications, such as the situation with the AdHawk glasses: we were still able to implement our eye-tracking aspect using a different application. ## What we learned After this experience, we will leave the hackathon with more knowledge than we had when we first entered. Throughout the hackathon, we were met with many challenges, but with resilience and perseverance, we were able to adapt and meet the needs of each aspect of the program. With each complication came a possibility for new knowledge. Thus, through the 36 hours at Hack the North, we were able to learn a lot. ## What's next for Speakrly We at Speakrly hope to continue to help the community through any programming aspects. In the future, we hope to expand more into hardware to further help future generations improve their skills and abilities when they enter the real world.
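Here is a sketch of the coverage-check idea with Cohere Rerank; the model name and relevance threshold are assumptions, and the transcript sentences would come from the Whisper transcription.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder

def missed_points(transcript_sentences, talking_points, threshold=0.3):
    missed = []
    for point in talking_points:
        ranked = co.rerank(
            query=point,
            documents=transcript_sentences,
            model="rerank-english-v2.0",  # assumed model name
            top_n=1,
        )
        if ranked.results[0].relevance_score < threshold:
            missed.append(point)  # nothing in the speech matched this point
    return missed
```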
## Inspiration We college students can all relate to having a teacher who was not engaging enough during lectures, or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback on how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings. ## What it does Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between the various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection/analysis software. We then store everything in a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language. ## How we built it ### Visual Studio Code/Front End Development: Sovannratana Khek Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with prebuilt components, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single-lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality. ### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g., salting and hashing passwords). I deployed a phpMyAdmin instance to easily manage the database in a user-friendly way. In order to make the software easily portable across different platforms, I containerized the whole tech stack using Docker and docker-compose to handle the interaction among several containers at once. ### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model.
I augmented the dataset in order to increase the accuracy of my results and normalized the data so it could be visualized as a pie chart, providing an easy integration with the database that connects to our website. ### Solidworks/Product Design Engineering: Riki Osako Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume while not disturbing the flow of class. Currently, the main sensors being utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow itself to change from a stationary position to a mobile one. It's able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging. ### Figma/UI Design of the Product: Riki Osako Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see if the device is tracking their movements and volume throughout the lecture and see the results of their lecture at the end. ## Challenges we ran into Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks, and using Figma for the first time. I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and looked back at my UI/UX notes from the Google Coursera certification course that I'm taking. Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn't be solved with this method, as the logic was specific to our software. Fortunately, these problems just needed time and a lot of debugging, with some help from peers and existing resources; since React is JavaScript-based, I was also able to draw on past experiences with JS and Django despite using an unfamiliar framework. Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in dependency hell, and had to rethink the architecture of the whole project so as not to over-engineer it without losing speed or consistency. Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking.
Also, the dataset was in German. ## Accomplishments that we're proud of We achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis, all while learning new technologies (such as React and Docker), even though our time was short. ## What we learned As a team coming from different backgrounds, we learned how to utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to leverage his strengths in that area to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman who had his first hackathon experience and was able to use it to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well. ## What's next for Untitled We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days. From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, we want to add motion-tracking data feedback so the professor can get a general idea of how they should be changing their gestures. We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand what the students need the most help with. From a business standpoint, we would like to see if we could team up with our university, Illinois Institute of Technology, and test the functionality in actual classrooms.
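The emotion model itself was built in MATLAB, but the core idea of turning a clip into fixed-size features for a classifier can be sketched in Python with librosa; the feature choice here is an illustrative assumption, not our exact MATLAB pipeline.

```python
import librosa
import numpy as np

def emotion_features(path: str) -> np.ndarray:
    """One fixed-length feature vector per clip for a downstream classifier."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time summarize the whole clip.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```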
losing
## Inspiration The inspiration came from a scene in a TV show where the hospital placed tracked sensors on people; if a sensor detected an emergency, the person would be called, and after confirming that there was indeed an emergency, 911 would be contacted. ## What it does Our idea is to build a medical information 'sharing' system. First, the Fitbit or a similar wearable sensor collects the health data [including heart rate, etc.] and sends it to the cloud. Inside the cloud, other medical information such as prescriptions and family medical history would also be stored. Next, we retrieve the information from the cloud using the IoT starter kit from Telus (this would be done on the doctor's end). Before allowing data retrieval, we would need the patient's fingerprint, and the information linked to that fingerprint would be visible to the doctor through a computer application. ## How I built it We started by dividing tasks. One of us worked on the fingerprint sensor and trying to get it to sense the correct fingerprint. Another person worked on setting up the Fitbit, and retrieving and sending data to the cloud. Two of us worked on setting up the Telus IoT starter board to receive data (pressure and temperature). We ran into many challenges with the Fitbit and fingerprint sensor [please see the **Challenges I ran into** section]. After finding a way around those challenges, one of us worked on the keypad to retrieve the data, one of us worked on the Python code to retrieve the data from the cloud, and two of us continued to work on trying to connect the Arduino to the Telus IoT board. ## Challenges I ran into While building this idea, we ran into a couple of problems and had to move on to plan B or C. In order for the Fitbit to send its data, it has to connect to Wi-Fi. We, like many other people on the internet, were unable to connect it to Wi-Fi, and were thus unable to access the data. As plan B, we requested access to our personal data directly from Fitbit; however, the automated response was that we would not get our data for another 3 days. As plan C, we created simulated data from a Fitbit to validate the design, since data retrieval from the Fitbit is part of the product's design and requires no change from us (we are only accessing it as an input). Having solved that side of the problem, we encountered another: the fingerprint sensor was not working. Initially, we thought it was because our code was not working properly or because of a poor connection; however, upon further testing (i.e., taking off the electrical tape to check the connections, etc.), we found that the D- wire was broken and there was no way of soldering it back on. After borrowing and replacing it with a Qwiic wire, we tested the code again; with more testing we were still unable to recognize the sensor, so we reached the conclusion that the sensor itself was broken. Thus, as plan B, we decided to use a keypad (made from buttons) instead. Now, our final product contains a list of data, which we store in the cloud. Then, using the IoT starter kit, we retrieve that data according to the number inputted on the keypad. ## Accomplishments that I'm proud of One of our major accomplishments was setting up the starter kit, as it was something that we initially had no idea how to do; we now have it set up and are able to retrieve data. ## What I learned We have learned to use the IoT starter kit to connect to the cloud and retrieve its data.
We have also learned to find ways around hardware problems, including those with the Fitbit and the fingerprint sensor.
## What's next for Medical Info Cloud
We think the Medical Info Cloud is a great idea; if implemented, it could make transferring information at the doctor's office more efficient and give doctors more access to the patient's everyday data. Next time, we would like to use a camera with facial recognition as the verification step for retrieving the data, and to find a way around using a Fitbit as the medical sensor. A potential alternative is to use the sensors in our phones and transfer that data to the cloud.
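The write-up doesn't include code, but a minimal sketch of the keypad-driven lookup might look like the following; the endpoint URL and the response shape are assumptions, standing in for whatever sits in front of the cloud store.

```python
import requests

API_BASE = "https://example-cloud-endpoint/patients"  # hypothetical endpoint

def fetch_patient_record(keypad_code: str) -> dict:
    """Look up the record linked to the code entered on the keypad."""
    resp = requests.get(f"{API_BASE}/{keypad_code}", timeout=10)
    resp.raise_for_status()
    # e.g. {"heart_rate": [...], "prescriptions": [...], "family_history": [...]}
    return resp.json()
```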
We're fortunate enough to have access to medication, but in developing countries, not everyone has the same privilege that we do. Two of the four members of our group have physical medical conditions for which they must take medication. One of our members has the most common congenital heart disease, Tetralogy of Fallot, and the other has Type 1 Diabetes. We want to make medication more accessible for those dealing with these common conditions and for people with the countless other needs that require medication. Our project uses facial recognition to dispense medication. We began by programming our GUI for the dispenser, setting up Azure, and setting up the webcam. One member bought parts for the prototype and spent the next day building it. We then worked on the facial recognition, merging the webcam, GUI, and facial recognition into one program, and spent the last couple of hours setting up the motor controllers to finalize the project. We ran into many challenges along the way. Our Telus LTE-M IoT Starter Kit was incompatible with Azure, so after many hours of attempting to make it work, we had to give up and find another way to store the facial recognition data. We started making the GUI in Python with Tkinter, but after a couple of hours decided to use another module, as it would be easier and look more visually appealing. Our team is very proud that we were able to complete our facial recognition program and a prototype of the dispenser. We learned how to use the Raspberry Pi, Arduino, stepper controllers, and breadboards. We set up a virtual machine, an IoT hub, and image processing. In the future, we hope to polish our prototype and actually use Azure in the program.
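The write-up doesn't name the recognition library; one common choice on a Raspberry Pi is the `face_recognition` package, so a hedged sketch of the verification gate might look like this (file names and the dispense step are hypothetical):

```python
import face_recognition

# Hypothetical files: the patient's reference photo and a grabbed webcam frame.
known = face_recognition.load_image_file("patient.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

frame = face_recognition.load_image_file("webcam_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        # Verified: this is where the stepper controller would be pulsed
        # to rotate the dispenser by one dose.
        print("Face verified - dispensing one dose")
```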
## 💫 Inspiration
Love online shopping, but tired of deliveries arriving when you're not home? Or perhaps you're continuously checking when your package will arrive, only to delay your plans out of worry? Lovers of online shopping and e-commerce, we present to you: **pacs**.
## 😮 What **pacs** does
**pacs** is a unique storage solution with plans to open multiple facilities that have lockers of all shapes and sizes, kind of like a community mailbox. As a shopper, you won't have to put in your address. With our app, you'll be able to see all your purchases using **pacs** in one convenient place, where you can find your locker number and key code.
## 🔨 How we built it
We're always trying to learn something new! For this app, we implemented an MVC (Model, View, Controller) application structure. For the view, we decided to use React, which communicates with a controller built on Node.js and Express.js, which in turn interacts with a model layer based on Protobuf. With this model, everything has been modularized, and we found it very easy to create endpoints and implement functionality.
## 😰 Challenges we ran into
* A major issue was properly setting up Protobuf, particularly on an M1 Mac
* Some minor design challenges
* Furthermore, as the prototype scaled, developing the front-end React view became increasingly heavy.
## 😤 Accomplishments that we're proud of
This was somewhat different for each team member! Rithik and Saqif researched and learned Protobuf and its syntax, so it was a wonderful learning experience. As for Shaiza, she learned how to properly use ES6 and successfully parse data from JSON files! Despite the insane time crunch, we thoroughly enjoyed this experience.
## 🧠 What we learned
We learned…
* How to scale our idea for the prototype
* New tools and techniques
* How full-stack applications work
* An approach to modularizing solutions
## 💙 What's next for **pacs**
Because there are so many moving parts in our application, there are several microservices that require intercommunication. We would work on the different microservices and ensure that Protobuf works efficiently between them. For example, we may have an internal server that needs to fetch and store the utilization status of the many locker facilities: a location of 30 lockers may have 24 occupied, and that information has to be relayed to the user. There may be other server applications dedicated to businesses, like API testing and communication, which will also require microservices to generate unique IDs, access certain user information, and more. We would construct these microservices and rigorously test their interactions, and we would also migrate the services to the cloud. In addition, we could add important features such as SMS notifications through Twilio, which we got to learn during the event.
losing
## Inspiration
In response to the recent tragic events in Turkey, where rescue efforts after the earthquake have been very difficult, we decided to use Qualcomm's hardware development kit to create an application that lets survivors of natural disasters like earthquakes send distress signals to local authorities.
## What it does
Our app aids disaster survivors by sending a distress signal with location and photo, providing chatbot updates on rescue efforts, and triggering actuators on an Arduino, which helps rescuers find survivors.
## How we built it
We built it using Qualcomm's hardware development kit and an Arduino Due, along with many APIs that support our project goals.
## Challenges we ran into
We faced many challenges while programming the Android application. Kotlin was a new language to us, so we had to spend a lot of time reading documentation and understanding the implementations. Debugging was also challenging, as we faced errors we were not familiar with. Ultimately, we used online forums like Stack Overflow to guide us through the project.
## Accomplishments that we're proud of
Developing a Kotlin app without any previous Kotlin experience, and using APIs such as OpenAI's GPT-3 to provide a useful, working chatbot.
## What we learned
How to work as a team, splitting into subteams to integrate software and hardware together, and how to adopt an iterative workflow.
## What's next for ShakeSafe
Continuing to add more sensors and developing better search-and-rescue algorithms (e.g., approaches to the travelling salesman problem, maybe using Dijkstra's algorithm).
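The app itself is Kotlin, but the GPT-3 chatbot call is easiest to sketch in Python with the GPT-3-era OpenAI SDK (pre-1.0); the model name and prompt wording here are assumptions:

```python
import openai  # pre-1.0 SDK, matching the GPT-3-era Completion API

openai.api_key = "YOUR_KEY"

def rescue_update(question: str) -> str:
    """Answer a survivor's question about ongoing rescue efforts."""
    response = openai.Completion.create(
        model="text-davinci-003",  # model choice is an assumption
        prompt=f"You are a disaster-relief assistant.\nSurvivor: {question}\nAssistant:",
        max_tokens=150,
    )
    return response.choices[0].text.strip()
```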
## Inspiration
This project was a response to the events that occurred during Hurricane Harvey in Houston last year, the wildfires in California, and the events during the monsoon in India this past year. 911 call centers are extremely inefficient in providing actual aid to people due to the unreliability of tracking cell phones. We also want to inform people of the risk factors in certain areas so that they can be more knowledgeable when making decisions about travel, their futures, and preventative measures.
## What it does
Supermaritan provides a platform for people who are in danger or affected by disasters to send out "distress signals" specifying how severe their situation is and the specific type of issue they have. We store their location in a database and present it live on a map built with react-native-maps. This allows local authorities to easily locate people, evaluate how badly they need help, and decide what type of help they need, so dispatchers can aid victims quickly and efficiently. More importantly, the live map lets local users see live incidents and gives them the ability to help out if possible, allowing for greater interaction within a community. Once a victim has been successfully aided, they have the option to resolve their issue, and we store that in our database to aid our analytics. Using information from previous disaster incidents, we can also provide information about the safety of certain areas. Taking the previous incidents within a certain range of latitudinal and longitudinal coordinates, we can calculate which type of incident (floods, earthquakes, fire, injuries, etc.) is most common in the area. Additionally, by taking a weighted average based on the severity of previously resolved incidents of all types, we can generate a risk factor that gauges how safe a user's area is relative to the most dangerous range in our database.
## How we built it
We used react-native, MongoDB, Javascript, NodeJS, the Google Cloud Platform, and various open source libraries to help build our hack.
## Challenges we ran into
Ejecting react-native from Expo took a very long time and blocked the member working on the client side of our app, leaving us a lot more work to divide amongst ourselves once it finally ejected. Getting acquainted with react-native in general was difficult. It was fairly new to all of us, and some of the libraries we used had no documentation, which required us to learn from their source code.
## Accomplishments that we're proud of
Implementing the heat map analytics feature was something we are happy we were able to do, because it is a nice way of presenting disaster-incident information and alerting samaritans and authorities. We were also proud that we were able to navigate and interpret new APIs to fit the purposes of our app. Writing scripts to test our app and debug issues also helped us get past many challenges.
## What we learned
We learned that while some frameworks have their advantages (for example, React can spin up projects quickly using built-in components), they often have glaring drawbacks and limitations which may make another, more 'complicated' framework a better choice in the long run.
## What's next for Supermaritan
In the future, we hope to provide more metrics and analytics regarding safety and disaster issues for specific areas. Showing disaster trends over time and displaying risk factors for each individual incident type are things we definitely plan to do.
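The risk-factor calculation described above is easy to sketch; field names, the range width, and the normalization constant are all illustrative assumptions:

```python
from collections import Counter

def area_stats(incidents, lat, lon, radius=0.05, max_range_score=10.0):
    """Return the most common incident type and a normalized risk factor
    for the lat/long range around (lat, lon)."""
    nearby = [i for i in incidents
              if abs(i["lat"] - lat) <= radius and abs(i["lon"] - lon) <= radius]
    if not nearby:
        return None, 0.0
    # Most common incident type in the range.
    common = Counter(i["type"] for i in nearby).most_common(1)[0][0]
    # Weighted-average severity, normalized against the most dangerous range.
    weighted = sum(i["severity"] for i in nearby) / len(nearby)
    return common, weighted / max_range_score
```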
## Inspiration
How many times have you been in a lecture, in person or online, and wished you could discuss the current topic with someone because you're having a hard time understanding in real time? How many times have you taken a picture of the whiteboard to look at later but never ended up studying from it? What about the volunteer note-takers needed in every course for accessibility? We wanted to create an app that would fix all of these problems and more by enabling more collaboration and connection during lectures and more help with note-taking.
## What it does
It is a social study app where you can choose courses; each lecture has a channel where students can talk to each other. Each lecture also has a notes section where students can send pictures of notes, and the app transcribes them, which reduces the need for volunteer note-takers and helps people who find it tough to follow the lecture in real time while taking notes.
## How we built it
We decided to use a MERN stack, as it's simple and reliable for storing, processing, retrieving, and displaying information. First, we created a basic entity relation of the data that we would be working with: [ER](https://res.cloudinary.com/dgmplm2mm/image/upload/v1709472777/wdkyeqwvpru0nuhb9emu.png). We delegated tasks among ourselves to focus on design, routes, and the front end: [figma](https://www.figma.com/file/nx3HUKc4jtir3QrKJ4fSTN/uottahack?type=design&node-id=0%3A1&mode=design&t=hkRPnNNAdy5mNkti-1). We then designed the REST requests necessary for the front end: [Ideas](https://res.cloudinary.com/dgmplm2mm/image/upload/v1709472981/jlxo0iwfzqarwyue5azm.jpg). We used a Flask server as an endpoint for the application's GPT prompts, and Cloudinary to store pictures, as MongoDB is not great at storing them. We deployed the Flask and Express servers on an EC2 instance so that the website, once hosted on Vercel, could use the APIs.
## Challenges we ran into
1. Storing user images in a way that wasn't slow. At first we wanted to store the images as base64 URIs representing the binary data of the image itself. This proved to be quite slow when sending and receiving requests, and it quickly bloated the MongoDB cluster. To avoid this, we decided to store the images on a cloud platform and keep only the URLs to those images in the MongoDB cluster.
2. Setting up the domain. There was not much documentation for the site we used to create the domain name. As it was our first time working with a domain name, we had a lot of difficulty, but in the end it worked out fine.
3. Setting up the EC2 instance. Although it was easy to set up initially, for some reason requests to the EC2 instance worked well in Postman, while our app, using the exact same requests, was not able to get the responses.
## Accomplishments that we're proud of
We are proud to have completed the project, to have set up our app on the domain name, and to have designed and set things up from the start in ways that helped us succeed!
## What we learned
We learned the importance of designing entities and thinking about REST connections from the start, and of designing the front-end look beforehand. We learned how to use Cloudinary to hold images in MERN applications, and how domains work.
## What's next for Acadameet
In the future, we want to add many more of the features we envisioned, such as group-creation helpers, automated scheduling of group assignment meetings, and various other features that would make studying or doing group work much easier. We believe this is important for the future of studying.
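The Cloudinary fix from challenge 1 above is small in code terms; a hedged sketch of the upload step (credentials and function name are placeholders) might look like this, with only the returned URL going into MongoDB:

```python
import cloudinary
import cloudinary.uploader

# Placeholder credentials.
cloudinary.config(cloud_name="demo", api_key="KEY", api_secret="SECRET")

def store_note_image(file) -> str:
    """Upload a note photo to Cloudinary and return the URL to store in MongoDB,
    instead of bloating the cluster with base64 image data."""
    result = cloudinary.uploader.upload(file)
    return result["secure_url"]
```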
winning
## Inspiration
Because of COVID-19 and the holiday season, we are feeling increasingly guilty over the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants. We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, a raw mass of carbon is too abstract and means little to average consumers. After calculating our footprints, we feel guilty about the carbon cost of our lifestyles and maybe, maybe donate once to offset the guilt. The problem is, climate change cannot be eliminated by a single contribution because it's a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term.
## What it does
We built a Google Chrome extension to track the user's Amazon purchases and determine the carbon footprint of each product in real time, using all available variables scraped from the page, including product type, weight, distance, and shipping options. We set up Google Firebase to store users' account information and purchase history, and created a gaming system in the backend to track user progression, achievements, and pet status.
## How we built it
We created the front end using React.js, developed our web scraper in JavaScript to extract Amazon information, and used Netlify to deploy the website. We developed the back end in Python using Flask, storing our data on Firestore, calculating shipping distance using Google's Distance Matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes with salts to store passwords securely in the cloud.
## Challenges we ran into
For most of us, this was our first time developing a web application, since our backgrounds are in Mechatronics Engineering and Computer Engineering.
## Accomplishments that we're proud of
We are very proud that we were able to accomplish an app of this magnitude, as well as of its potential impact on social good by reducing carbon emissions.
## What we learned
We learned about utilizing Google Cloud Platform and integrating the front end and back end to make a complete web app.
## What's next for Purrtector
Our mission is to build tools that gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit, and we would welcome collaboration with third parties to offer additional perks and discounts to users who reduce their carbon emissions by unlocking designated achievements with their pet. This would add incentives toward a carbon-neutral lifestyle on top of the emotional attachment to the pet.
## Domain.com Link
<https://purrtector.space>
Note: We weren't able to register this via domain.com due to site errors, but Sean said we could have this domain considered.
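A minimal sketch of the salted SHA-256 password scheme mentioned above, using only the standard library (the salt length is an assumption):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); both are stored in the cloud, the password never is."""
    salt = os.urandom(16)  # 16-byte random salt; length is an assumption
    return salt, hashlib.sha256(salt + password.encode()).digest()

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```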
## Inspiration 🌎
On average, each person worldwide produces 4.7 tonnes of carbon dioxide each year, roughly the mass of an adult African elephant in CO₂ gas alone. Carbon dioxide is the main greenhouse gas which causes global warming, and since 1961 humanity's carbon footprint has increased more than elevenfold. This Hack The Valley, our team wanted to innovate on saving the environment using AI.
## What it does 🤔
Sustainify is a web app where users can track their carbon footprint and gain meaningful, AI-generated feedback on their emission habits. The user is prompted to complete a short survey with questions about their household, transportation, and consumption practices. Once completed, the results are processed by our language model, which gives intelligent recommendations on how the user can reduce emissions. Sustainify also provides insights into how your habits compare to those of other users, showcasing the distinctions in your data. If your habits change, you can retake the survey to get a new rating. But wait, that's not all: Sustainify includes our very own chatbot, Eco! Eco is our cute and friendly mascot who will answer any of your questions related to the environment. For example, we can ask Eco, "How can I reduce my carbon emissions if I am a frequent automobile driver?", and Eco will provide a variety of solutions. Be aware, though, that Eco will only answer questions related to sustainability: if we ask Eco, "Who is the president of the United States?", Eco will not answer.
## How we built it 🛠️
Sustainify was built with Flask and the OpenAI API for the back end; React, Tailwind CSS, Firebase, and JavaScript/CSS/HTML for the front end; Firestore for the database; and we hosted our web app at sustainifytheworld.tech.
## Challenges we ran into ⚠️
One of the biggest challenges was getting our AI's responses to reach the front end so that users could see them. After a few hours, we were able to resolve the issue.
## Accomplishments that we're proud of 🏆
We're really proud of integrating AI into our project. No one on our team had worked with AI before, but in the short span of 36 hours we were able to create our own chatbot and use AI to give smart recommendations to users.
## What we learned 🧠
• How to use the OpenAI API
• How to host a website using .tech domains
• How to use Flask
## What's next for Sustainify ➡️❓
Our next step for Sustainify is to turn it into a game and make saving the environment fun: the less you emit, the more points you get, and you can compete against your family and friends.
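A hedged sketch of the Flask-to-AI wiring described above, using the GPT-3.5-era OpenAI SDK (pre-1.0); the route name, model choice, and system prompt are assumptions, but the system message also illustrates how Eco can be restricted to sustainability topics:

```python
from flask import Flask, jsonify, request
import openai  # pre-1.0 SDK

app = Flask(__name__)
openai.api_key = "YOUR_KEY"

@app.route("/recommendations", methods=["POST"])
def recommendations():
    survey = request.get_json()  # household / transportation / consumption answers
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Only answer questions about sustainability and emissions."},
            {"role": "user",
             "content": f"Suggest ways to cut emissions given these habits: {survey}"},
        ],
    )
    # JSON response the React front end can render directly.
    return jsonify({"advice": completion.choices[0].message.content})
```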
## Inspiration
We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for environmental activism. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space.
## What it does
Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste, simply by holding an item up to a camera. This application will be especially useful once permanent fixtures are erected in malls, markets, and large public locations.
## How we built it
Using a Flask-based backend connected to the Google Vision API, we captured images and categorized which waste category each item belongs to. The results were visualized using Reactstrap.
## Challenges I ran into
* Deployment
* Categorizing food items using the Google API
* Setting up a dev environment on a brand new laptop
* Selecting an appropriate backend framework
* Parsing image files using React
* UI design using Reactstrap
## Accomplishments that I'm proud of
* WE MADE IT! We are thrilled to create such an incredible app that makes people's lives easier while helping improve the global environment.
## What I learned
* UI is difficult
* Picking a good tech stack is important
* Good version control practices are crucial
## What's next for Recycle.space
Deploying a scalable, finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
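A minimal sketch of the Vision-to-category step, using the google-cloud-vision client; the label-to-bin keyword mapping is an illustrative assumption, not the team's actual rules:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical mapping from Vision labels to waste categories.
CATEGORIES = {"plastic": "recycling", "paper": "paper", "food": "compost"}

def categorize(image_bytes: bytes) -> str:
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    for label in labels:  # labels come back ordered by confidence
        for keyword, bin_name in CATEGORIES.items():
            if keyword in label.description.lower():
                return bin_name
    return "garbage"  # fallback when no keyword matches
```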
partial
## Inspiration
The summer before college started, all of us took trips to different parts of the world to live our last few days before moving out to the fullest. During these trips, we all realized there were so many logistics and little things we needed to worry about to have a successful trip, and it created much unneeded stress for all of us. We then came up with the idea of an app that streamlines the process of planning trips and creating itineraries through simple and convenient features.
## What it does
It is a travel companion app with functionality including itinerary generation, nearby search, translation, and currency exchange. The itinerary generator weighs preferences given by the user and generates a list of points of interest accordingly. The translator allows the user to type any phrase in their preferred starting language and outputs it in the language of the country they are in; the user can also save phrases whenever they like for quick access. Finally, the currency exchange shows the exchange rate between the user's currency and that of whichever country they are in, and converts between the two.
## How we built it
We built the front end using Android Studio and the back end using StdLib, which also made use of other APIs including Google Places, Google Places Photos, the Countries API, Fixer.io, and Google Translate. The front end uses the HERE.com Android SDK to get the device's location from GPS coordinates.
## Challenges we ran into
We were all relatively inexperienced with Android Studio, so we spent a lot of time figuring out how to use it, but we eventually managed to learn its ins and outs. There was also an issue with Standard Library and getting one of our dependencies to compile.
## Accomplishments that we're proud of
We are proud of creating a functional app that is on the verge of being a super powerful traveling tool for people to use when seeing the world. We're also proud of aggregating all the APIs needed to make this hack possible and synthesizing them within Android Studio.
## What we learned
We definitely learned a lot more about Android Studio, since our hack mostly revolved around its use. Increased experience in Java, including managing asynchronous calls and interactions with the internet, was among the most valuable lessons.
## What's next for Wanderful
Better design is certainly a priority; functionally, the app can be improved in every aspect, such as allowing users to create their own itinerary entries, tailoring nearby search to user specifics, increasing translation capability, and adding personalization for each user.
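The preference-weighted itinerary idea can be sketched in a few lines; the tag names and 0-5 weights are illustrative assumptions (the app itself is Java on Android):

```python
def rank_pois(pois, preferences):
    """Score each point of interest by how well its tags match user preferences.
    `preferences` maps a category (e.g. "museums") to a 0-5 interest weight."""
    def score(poi):
        return sum(preferences.get(tag, 0) for tag in poi["tags"])
    return sorted(pois, key=score, reverse=True)

# Example: museums-heavy traveler gets the museum first.
itinerary = rank_pois(
    [{"name": "City Museum", "tags": ["museums", "history"]},
     {"name": "Harbour Walk", "tags": ["outdoors"]}],
    {"museums": 5, "outdoors": 2},
)[:5]
```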
## Inspiration
Since the beginning of the hackathon, all of us were interested in building something to help the community. We initially began with the idea of a trash bot but quickly realized the scope of the project would make it unrealistic. We eventually decided to work on a project that would ease the burden on both teachers and students through technology that makes learning new things easier and more approachable while giving teachers more opportunities to interact with and learn about their students.
## What it does
We built a Google Action that gives Google Assistant the ability to help the user learn a new language by quizzing the user on words in several languages, including Spanish and Mandarin. In addition to the Google Action, we also built a very PRETTY user interface that allows a user to add new words to the teacher's dictionary.
## How we built it
The Google Action was built using the Google DialogFlow Console. We designed a number of intents for the Action and implemented robust server code in Node.js with a Firebase database to control the behavior of Google Assistant. The PRETTY user interface for inserting new words into the dictionary was built using React.js along with the same Firebase database.
## Challenges we ran into
We initially wanted to implement this project using both Android Things and a Google Home: the Google Home would control verbal interaction and the Android Things screen would display visual information, helping with the user's experience. However, we had difficulty with both components, and we eventually decided to focus on improving the user's experience through the Google Assistant itself rather than through external hardware. We also wanted the Android Things display to show words on screen, to strengthen the ability to read and write. An interface is easy to code, but a PRETTY interface is not.
## Accomplishments that we're proud of
None of the members of our group were at all familiar with natural language parsing or interactive projects like this. Yet, despite all the early and late bumps in the road, we were still able to create a robust, interactive, and useful piece of software. We all second-guessed our ability to accomplish this project several times along the way, but we persevered and built something we're all proud of. And did we mention again that our interface is PRETTY and approachable? Yes, we are THAT proud of our interface.
## What we learned
None of the members of our group were familiar with any aspect of this project. As a result, we all learned a substantial amount about natural language processing, serverless code, non-relational databases, JavaScript, Android Studio, and much more. This experience gave us exposure to a number of technologies we would never have seen otherwise, and we are all more capable because of it.
## What's next for Language Teacher
We have a number of ideas for improving and extending Language Teacher. We would like to make the conversational aspect more natural, and to adjust the Action's behavior based on the student's level. Additionally, we would like to implement the visual interface that we were unable to build with Android Things. Most importantly, we want analytics of students' performance and responses, to help teachers learn about their students' levels and how best to help them.
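The team's fulfillment server is Node.js, but the shape of a DialogFlow webhook is the same in any language; here is a hedged Python rendering of a quiz-checking intent, where the `word`/`answer` parameter names and the in-memory dictionary are hypothetical stand-ins for the real intent schema and Firebase store:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
WORDS = {"hola": "hello", "gato": "cat"}  # stand-in for the Firebase dictionary

@app.route("/webhook", methods=["POST"])
def webhook():
    # DialogFlow v2 sends intent parameters under queryResult.parameters.
    params = request.get_json()["queryResult"]["parameters"]
    word = params.get("word", "").lower()      # hypothetical parameter names
    guess = params.get("answer", "").lower()
    if WORDS.get(word) == guess:
        reply = "Correct!"
    else:
        reply = f"Not quite. '{word}' means '{WORDS.get(word)}'."
    return jsonify({"fulfillmentText": reply})
```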
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero setup or management necessary, as the program completely ignores background noise and conversation; even then, it still takes your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The menu design was done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app (frontend, backend, and AI) to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned, and of deploying the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface where restaurant staff can view and fulfill customer orders, plus individual kiosk devices to handle order input. These features would move *Harvard Burger* from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast-food chains.
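A hedged sketch of the transcribe-then-extract pipeline on the Flask side, using the current OpenAI Python SDK; the exact prompt, model names, and JSON schema are assumptions rather than the team's actual configuration:

```python
from openai import OpenAI

client = OpenAI()

def transcribe(audio_path: str) -> str:
    """Speech to text via Whisper."""
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text

def extract_order(transcript: str) -> str:
    """Pull structured order items out of the transcript, ignoring chatter."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": "Extract menu items, sizes, and "
             "modifications from the customer's speech as JSON; ignore any "
             "background conversation that is not an order."},
            {"role": "user", "content": transcript},
        ],
        response_format={"type": "json_object"},
    )
    return resp.choices[0].message.content
```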
partial
## Inspiration
During the second wave of COVID in India, I witnessed an unthinkable amount of suffering unleashed on the people. During the peak, around 400,000 positive cases per day were reported, and according to various media reports a significant number of cases went unreported. Images of piles of dead bodies being cremated almost everywhere around the country left me in shock. Since then I have wanted to help as much as I could, and I took this problem as inspiration to make an effort. Silent hypoxia is when a patient does not feel shortness of breath, yet their oxygen level drops drastically; this is a very dangerous situation that has claimed many lives in this pandemic. Detecting silent hypoxia requires continuous monitoring of a patient's oxygen saturation; unfortunately, ordinary oximeters available on the market are manual and must be used at frequent intervals. This is a big problem: due to the extreme shortage of healthcare workers, particularly in India, giving each patient individual attention to measure SPO2 every few minutes is impossible, which increases the chances of silent hypoxia going undetected. The solution is continuous monitoring of oxygen saturation, a feature that common affordable oximeters unfortunately do not offer. Taking it as a challenge, I came up with a prototype solution. Separately, as people reach an advanced age they tend to experience physical decline; one common weakness in the elderly is in the legs, which makes them more susceptible to falls. A fall is an event in which a conscious subject unintentionally ends up on the ground. Factors that cause falls include illnesses like stroke, slippery or wet floors, and handholds that are not sturdy or easy to grip. Falls have become a major health problem, particularly among the elderly: according to WHO statistics, 646,000 fatal falls are recorded, along with 37.3 million falls that are not fatal but require medical treatment. Existing solutions use computer vision to detect whether a person has fallen, but that approach is highly susceptible to lighting conditions and very restricted when it comes to covering a wide area; for example, a camera cannot detect a fall in the bathroom, because there is usually no camera there.
## What it does
It also solves another problem, that of network and communications. Imagine a patient wearing the device, which uses wifi to connect to the internet and send data to DynamoDB. If the patient goes to the bathroom, for example, the wifi connection might be attenuated by walls and physical obstructions. Moreover, in developing and undeveloped countries wifi is still a luxury and very uncommon. Given these real-world conditions, depending only on wifi and Bluetooth, as most smartwatches and fitness wearables do, is a bad idea and not reliable. For this reason, oxy also has a GSM module that connects to the internet via GPRS; the GPRS network is available almost everywhere on earth, vastly improving reliability.
## How I built it
The device continuously monitors data from the SPO2 sensor and the inertial measurement unit and sends it to DynamoDB through an API Gateway and a Lambda function. It can use either wifi or GPRS to reach the API; the only difference is that over GPRS it uses AT commands to talk to an intermediate gateway, because the module I had at hand does not support SSL. Once the device detects oxygen levels dropping below a certain point, or a physical fall, the smartphone app sends a notification. So if a patient needs 24/7 monitoring of SPO2 levels, you don't have to take out an oximeter and measure manually every five minutes, which can be exhausting for patient and caretaker alike. Also, in India and other similar countries, there was an extreme shortage of healthcare workers who could be physically present near patients all the time to measure oxygen levels. Through the web app, which is hosted on a Graviton EC2 instance, they can add as many devices as they want to monitor remotely, with each patient's medical history one click away for emergencies; this lets them keep monitoring patients' SPO2 while they tend to other important tasks. The notification parameters in the app are customizable: you can adjust the time intervals and the threshold values that trigger notifications. The device can be powered through a battery or USB, with an ESP8266 microcontroller as the brain. It can use the inbuilt wifi to connect to the internet or go through GPRS with a SIM800L module, and it also features onboard battery charging and discharging with overcharge and overcurrent protection. Measurement is taken through an SPO2 sensor (the MAX30100). The cost of making the device prototype was around 9 USD; if mass-produced, the price could come down significantly.
## Challenges I ran into
The biggest challenge was getting data from the MAX30100 SPO2 sensor. Although there are libraries available for it, the bad schematic design of the breakout board made it impossible to get any data. I had to physically tinker with the tiny SMD resistors on the sensor to make sure its I2C lines worked at a 3.3V logic level.
## Accomplishments that I'm proud of
For me, the proudest accomplishment is having a working prototype of not only the hardware but the software too.
## What I learned
The most important skill I learned was connecting the microcontroller to AWS DynamoDB through Lambda and API Gateway, and also how not to burn your fingers while desoldering teeny-tiny SMD components. Ouch! That hurt 😂.
## What's next for oxy
The hardware enclosure that houses the device must be made clamp-like or strap-on, to make it a proper wearable device. I wanted to do that now, but I lost time trying to implement the device and app.
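The Lambda side of the API Gateway path described above fits in a few lines of Python with boto3; the table name and payload fields are assumptions:

```python
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("oxy-readings")  # hypothetical table name

def lambda_handler(event, context):
    """API Gateway -> Lambda -> DynamoDB path for device readings."""
    # DynamoDB rejects raw floats, so parse them as Decimal.
    reading = json.loads(event["body"], parse_float=Decimal)
    # e.g. {"device_id": "...", "ts": 1234, "spo2": 97, "fall": false}
    table.put_item(Item=reading)
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```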
## Inspiration
Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family and which inspired our group to work on a modular, affordable solution. Current technologies that provide similar proximity awareness often cost many thousands of dollars and require a niche replacement in the user's environment (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision.
## What it does
(Higher-quality demo on Google Drive: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link>)
The module constantly pings its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an inertial measurement unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4V LiPo pack, with a charging port on the front of the module. The device has a three-hour battery life, but with more compact PCB-based electronics it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices. The internal logic uses IMU data to determine when the shoe is at the bottom of a step 'cycle' and touching the ground. The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it compares the ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75 cm front, 10 cm side). If the shoe detects an obstacle, it activates a pager motor which vibrates the wearer's shoe (or other device). The pager motor continues vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop. An RGB LED is included for debugging the prototype:
RED - shoe is moving, in the middle of a step
GREEN - shoe is at the bottom of a step and sees an obstacle
BLUE - shoe is at the bottom of a step and sees no obstacles
While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration.
## How we built it
Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, kapton tape, and zip ties. (yay zip ties!) I2C code for basic communication and calibration was taken from a user's guide for the IMU sensor. The code for logic, sensor polling, and all other functions of the shoe was custom, and all electronics were custom.
We tested the circuits by first assembling the Arduino microcontroller unit (MCU) and sensors on a breadboard powered by a laptop. We used this setup to test our code and fine-tune our sensors so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately before integrating them as a system. Next, we assembled a second breadboard with LiPo cells and a 5V regulator: two 3.7V cells wired in series produce a single 7.4V 2S battery, which is then regulated back down to 5V by an LM7805 regulator chip. One by one, we switched the MCU and sensor components off laptop power and onto our power supply unit. Unfortunately, this took a few tries and resulted in a lot of debugging. After the circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user.
## Challenges we ran into
The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds the amperage limits of the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape.
## Accomplishments that we're proud of
We're proud to have found a good balance for the sensitivity of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as small as possible, all in one day.
## What we learned
Power was the largest challenge, both in terms of the electrical engineering and the product design: ensuring that enough power can be supplied for long enough while not compromising the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a three-hour battery life and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as whether there was really a need for such a solution and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of the people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness.
## What's next for Smart Shoe Module
Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve integrating these higher-quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package which could better fit into the sleek, usable clip-on design our group envisions.
Additional features might include different vibration modes to signal directional obstacles and paths, and expanding our group's concept of modular assistive devices to other solution types. We would also look forward to making a more professional demo video. Current example clip of the prototype module taking measurements: <https://youtube.com/shorts/ECUF5daD5pU?feature=share>
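The firmware is Arduino C, but the decision logic described above (step-bottom detection, then the 75 cm / 10 cm comparison) can be rendered compactly in Python; the near-rest thresholds are assumptions, while the distance limits come from the write-up:

```python
FRONT_LIMIT_CM, SIDE_LIMIT_CM = 75, 10  # thresholds from the write-up

def obstacle_alert(gyro_dps: float, accel_g: float,
                   front_cm: float, side_cm: float) -> bool:
    """True when the shoe is planted and a sensor sees something close.
    The rest thresholds (5 deg/s, +/-0.1 g around gravity) are assumptions."""
    planted = abs(gyro_dps) < 5 and abs(accel_g - 1.0) < 0.1
    if not planted:
        return False  # mid-step: the RED state, no comparison yet
    # GREEN vs BLUE: compare proximity readings against the configured ranges.
    return front_cm < FRONT_LIMIT_CM or side_cm < SIDE_LIMIT_CM
```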
## Inspiration
About one third of the elderly population over the age of 65 falls each year, and the risk of falls increases proportionately with age. But these statistics fall short of the actual numbers, since many incidents are unreported by seniors and unrecognized by their caregivers. Falls are the leading cause of death due to injury among the elderly, and 87% of all fractures in the elderly are due to falls. Falls account for 25% of all hospital admissions and 40% of all nursing home admissions; 40% of those admitted do not return to independent living, and 25% die within a year. FallTrack helps to solve this problem and take better care of your elderly loved ones.
## What it does
FallTrack accurately detects when a person falls using an accelerometer. Its algorithm helps distinguish falls from other physically similar activities.
## How we built it
We built it using an Arduino and an iOS app.
## Challenges we ran into
We had problems with the microcontroller and had to fix false positives. Due to a lack of equipment, we had to improvise with what was available at the time.
## Accomplishments that we're proud of
We are proud of building a fully functioning wearable from scratch.
## What we learned
We learned to debug the hardware and utilize available resources.
## What's next for FallTrack
We hope FallTrack can help reduce the fall rate among the elderly population in the future.
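The write-up doesn't detail the algorithm; a common accelerometer-based approach, and a plausible sketch of what "distinguishing falls from similar activities" involves, is to require a free-fall dip followed shortly by an impact spike (all thresholds here are hypothetical tuning values):

```python
import math

FREE_FALL_G = 0.4  # hypothetical thresholds; tuning values like these are
IMPACT_G = 2.5     # how false positives from similar activities get reduced

def detect_fall(samples):
    """Flag a free-fall dip followed within ~20 samples by an impact spike.
    `samples` is a list of (x, y, z) accelerations in g."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(n > IMPACT_G for n in mags[i:i + 20]):
            return True
    return False
```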
partial
## Inspiration
At many public places, recycling is rarely a priority. Recyclables are disposed of incorrectly and thrown out like garbage. Even here at QHacks2017, we found lots of paper and cans in the [garbage](http://i.imgur.com/0CpEUtd.jpg).
## What it does
The Green Waste Bin is a waste bin that can sort the items that it is given. The current version of the bin can categorize waste as garbage, plastics, or paper.
## How we built it
The physical parts of the waste bin are Lego, two stepper motors, a Raspberry Pi, and a webcam. The software of the Green Waste Bin was written entirely in Python, and the web app in HTML and JavaScript.
## How it works
When garbage is placed in the bin, the webcam takes a picture of it. The picture is then sent to Indico and labeled against a collection that we trained. The Raspberry Pi then controls the stepper motors to drop the garbage in the right spot. All of the images taken are stored in AWS buckets and displayed on a web app. On the web app, images can be relabeled and the Indico collection retrained.
## Challenges we ran into
AWS was a new experience, and many mistakes were made. There were also some challenges with adjusting the hardware to its optimal positions.
## Accomplishments that we're proud of
Implementing machine learning with the Indico API, and implementing AWS.
## What we learned
Indico (we had never done machine learning before) and AWS.
## What's next for Green Waste Bin
Bringing the project to a larger scale and handling more garbage at a time.
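A hedged sketch of one sort cycle on the Raspberry Pi: the webcam capture uses OpenCV, while `classify` is a hypothetical stand-in for the trained Indico collection's predict call, and the stepper angles are illustrative:

```python
import cv2

BIN_POSITIONS = {"garbage": 0, "plastics": 120, "paper": 240}  # stepper angles

def sort_once(classify) -> int:
    """Grab a frame, label it, and return the stepper target angle.
    `classify` stands in for the trained Indico collection's predict call."""
    camera = cv2.VideoCapture(0)
    ok, frame = camera.read()
    camera.release()
    if not ok:
        raise RuntimeError("webcam capture failed")
    cv2.imwrite("item.jpg", frame)          # also the copy uploaded to S3
    label = classify("item.jpg")            # "garbage" | "plastics" | "paper"
    return BIN_POSITIONS[label]
```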
**Inspiration**
Inspiration came from the growing concern about environmental sustainability and waste management. We were particularly struck by the amount of waste generated by single-use packaging and products from large corporations like PepsiCo. We wanted to find a way to inspire people to think creatively about trash, turning it into something valuable rather than discarding it. The idea of upcycling aligns perfectly with this goal, turning would-be waste into treasures with a new purpose.
**What it does**
TrashToTreasure is a sustainable showcase platform where users can upload pictures of PepsiCo packaging waste and receive creative upcycling ideas from our AI assistant, Gemini. After transforming the waste into something useful or artistic, users can upload pictures of their upcycled products. The community can then vote on the best upcycling transformations, encouraging both creativity and sustainable practices.
**How we built it**
We built TrashToTreasure using a combination of modern web technologies. The frontend is built with React, and we used Next.js for server-side rendering. We integrated Gemini, an AI assistant, to process user-submitted images and suggest upcycling ideas. The backend is powered by a Node.js application with a MongoDB database, where we store user data, images, and voting information. We also incorporated voting functionality to allow users to rate upcycled creations. Images are processed using a combination of cloud storage and base64 encoding to facilitate smooth uploads and rendering.
**Challenges we ran into**
One of the biggest challenges was developing a seamless image processing pipeline, where users can upload pictures of their trash and upcycled items without losing quality or causing performance issues. Another challenge was getting Gemini to suggest useful and creative upcycling ideas across a wide range of packaging materials. Careful prompting was a must here!
**Accomplishments that we're proud of**
For all but one of us, this is our first hackathon! We're proud of creating a platform that promotes environmental sustainability and community engagement. We successfully integrated an AI that can provide creative upcycling ideas, encouraging people to think outside the box when it comes to waste. Additionally, building a voting mechanism that allows the community to participate and celebrate the best ideas is something we really love.
**What we learned**
We learned the importance of a seamless user experience, especially when dealing with images and voting mechanisms: things can get complicated, and that's when users start leaving your page. We also deepened our understanding of AI-driven suggestion systems and how to fine-tune them for specific use cases like upcycling. Working on a sustainability-focused application taught us how small actions, like reimagining trash, can have a larger impact on both individuals and the environment.
**What's next for TrashToTreasure - Sustainable Showcase**
Moving forward, we want to expand the platform by integrating other companies' packaging waste, not just PepsiCo's. We plan to enhance Gemini's AI capabilities to suggest even more creative ideas, and to build a larger community where people can share, learn, and inspire others to upcycle. We're also considering introducing challenges or competitions where users can win rewards for the most innovative upcycled creations, further incentivizing sustainability. Maybe a Pepsi Jet!
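The backend is Node.js, but the image-plus-prompt call to Gemini is simple to sketch with Google's Python SDK; the model name and prompt wording are assumptions:

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

def upcycling_ideas(photo_path: str) -> str:
    """Send a packaging photo plus a prompt, return Gemini's suggestions."""
    photo = Image.open(photo_path)
    prompt = ("This is PepsiCo packaging waste. Suggest three practical "
              "upcycling projects, with steps and materials needed.")
    return model.generate_content([prompt, photo]).text
```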
## Inspiration
The inspiration for this project came during a chaotic pre-hackathon week, when two of us missed an important assignment deadline. Despite having taken pictures of the syllabus on a whiteboard, we simply forgot to transfer the dates to our calendars manually. We realized that we weren't alone in this: many people take pictures of posters, whiteboards, and syllabi, but never get around to manually adding each event or deadline to their calendar. This sparked the idea: what if we could automate the process of turning images, text, and even speech into a calendar that's ready to use?
## What it does
Agenda automatically generates calendar events from various types of input. Whether you upload a photo of a poster, screenshot a syllabus, or take a picture of a whiteboard, the system parses the image and converts the text into a .ical file that can be imported into any calendar app. You can also type your input directly or speak to the system to request a schedule, itinerary, or event of any kind. Need to create a workout routine? Ask Agenda. Planning a day of tourism? Agenda can build a personalized itinerary. It's a tool designed to save you time by turning your notes, screenshots, photos, and thoughts into structured calendar events instantly.
## How we built it
We developed a Flask-based web frontend that allows users to upload an image, take a picture with their webcam, or type directly to the system. For speech transcription, we use Whisper through OpenAI's speech-to-text API. The image processing is powered by GPT-4o's vision capabilities, which recognize the text and identify the activity. The text is then transformed into a structured .ical format using GPT-4o's natural language processing and function-calling capabilities. The output .ical file can be instantly imported into any calendar app without any hassle.
## Challenges we ran into
One major challenge was dealing with large base64-encoded images: requests would sometimes break due to size limits when transmitted to the vision API. It took careful architecture of our image and audio upload handling to ensure compatibility. Additionally, it took time to get GPT-4o's function calling and text-to-calendar processing working consistently, especially when handling inputs as diverse as itineraries, academic deadlines, personal schedules, and event flyers on the street.
## Accomplishments that we're proud of
We're proud of how intuitive and versatile the system turned out to be. It successfully parses input from images, speech, and text, creating usable .ical files. It can handle a wide variety of use cases, from converting a physical agenda into a digital calendar to building custom itineraries for travel or fitness. Seeing this system make sense of such diverse data inputs is a real win for us.
## What we learned
We learned a great deal about handling different forms of input, especially speech and images, and converting them into structured data that people can use in a practical way. Optimizing requests and ensuring the system could handle large inputs efficiently taught us valuable lessons about image processing and API limitations. Additionally, working with function calling in GPT-4o showed us how to dynamically convert parsed text into meaningful and useful data.
## What's next for Agenda
Next up, we plan to complete the integration of speech input, allowing users to talk to the system for hands-free scheduling.
We also want to extend the platform's capabilities by adding support for recurring events, better handling of multi-page documents like syllabi, and the ability to sync directly with calendar apps in real time. We see endless possibilities for expanding the way users can interact with their schedules, making the entire process even more seamless and automatic.
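Once GPT-4o's function call returns structured events, serializing them to .ics is straightforward with the `icalendar` package; the event schema shown here is an assumption about what the function call produces:

```python
from datetime import datetime
from icalendar import Calendar, Event

def events_to_ical(events) -> bytes:
    """Serialize parsed events to an importable .ics payload."""
    cal = Calendar()
    cal.add("prodid", "-//Agenda//EN")
    cal.add("version", "2.0")
    for item in events:  # assumed shape: {"summary": ..., "start": ..., "end": ...}
        ev = Event()
        ev.add("summary", item["summary"])
        ev.add("dtstart", item["start"])
        ev.add("dtend", item["end"])
        cal.add_component(ev)
    return cal.to_ical()

with open("agenda.ics", "wb") as f:
    f.write(events_to_ical([{"summary": "Assignment 1 due",
                             "start": datetime(2024, 10, 4, 23, 59),
                             "end": datetime(2024, 10, 4, 23, 59)}]))
```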
winning
## What it is 🕵️
MemoryLane is a unique and innovative mobile application designed to help users capture and relive their precious moments. Instead of asking you to curate an image of yourself for others, the app, with a nostalgic touch, provides a personalized space for friends to document and remember shared memories, creating a digital journey through their life experiences. Whether it's a cherished photo, an audio clip, a video, or a special note, MemoryLane allows users to curate their memories in a visually appealing and organized manner. With its user-friendly interface and customizable features, MemoryLane aims to be a go-to platform for individuals seeking to celebrate, reflect upon, and share the meaningful moments that shape their lives.
## Inspiration ✨
The inspiration behind MemoryLane was born from a recognition of the impact that modern social media can have on our lives. While social platforms offer a convenient way to connect with others, they often come with the side effects of overwhelming timelines, constant notifications, and FOMO. In an age where online interactions can sometimes feel fleeting and disconnected, MemoryLane seeks to offer a refuge: a space where users can curate and cherish their memories without the distractions of mainstream social media. The platform encourages users to engage in more mindful reflection on their life experiences, fostering a sense of nostalgia and a deeper connection to the moments that matter.
## What it does 💻
**Home:**
* The Home section serves as the main dashboard, where users can scroll through a personalized feed of their memories, displayed in chronological order of memories that haven't been viewed yet.
* It fetches and displays user-specific content, including photos, notes, and significant events, organized chronologically.
**Archive:**
* The Archive section provides users with a comprehensive repository of all their previously viewed memories.
* It implements data retrieval mechanisms to fetch and display archived content in a structured and easily accessible format.
* [stretch goal] Include features such as search functionality and filtering options to enhance the user's ability to navigate an extensive archive.
**Create Memory:**
* The core feature of MemoryLane enables users to add new memories to share with other users.
* Includes multimedia support.
**Friends:**
* The Friends section focuses on social interactions, allowing users to connect with friends.
* Unlike other social media, we do not support likes, comments, or sharing, in hopes of motivating users to reach out on other platforms to the friend who shared a memory.
**Settings:**
* Incorporates user preferences, allowing adjustments to account settings, including a filter for shared memories that uses Cohere's LLM to ensure topics marked as sensitive or toxic are not shown on the Home feed.
## How we built it 🔨
**Frontend:** React Native (we learned it during the hackathon!)
**Backend:** Node.js, AWS, Postgres
In this project, we utilized React Native for the frontend, embracing the opportunity to learn and apply it during the hackathon. On the backend, we employed Node.js and leveraged the power of AWS services (S3, AWS RDS (Postgres), AWS SNS, EventBridge Scheduler). Our AWS solution comprised the following key use cases:
* AWS S3 (Simple Storage Service): Used to store and manage static assets, providing a reliable and scalable solution for handling images, videos, and other media assets in our application.
* AWS-RDS (Relational Database Service): Used to maintain a scalable and highly available Postgres database backend.
* AWS-SNS (Amazon Simple Notification Service): Played a crucial role in enabling push notifications, allowing us to keep users informed and engaged with timely updates.
* AWS EventBridge Scheduler: Used to automate scheduled tasks and events within our application. This included managing background processes, triggering notifications, and ensuring seamless execution of time-sensitive operations, such as sending memories (a sketch of this flow appears at the end of this write-up).

## Challenges we ran into ⚠️
* Finding and cleaning a data set, and using the Cohere API
* AWS connectivity
  + One significant challenge stemmed from configuring the AWS PostgreSQL database for optimal compatibility with Sequelize. Navigating the AWS environment and configuring the necessary settings, such as security groups, database credentials, and endpoint configurations, required careful attention to detail. Ensuring that the AWS infrastructure was set up to allow secure and efficient communication with our Node.js application became a pivotal aspect of the connectivity puzzle.
  + Furthermore, Sequelize, being a powerful Object-Relational Mapping (ORM) tool, introduced its own set of challenges. Mapping the database schema to Sequelize models, handling associations, and ensuring that Sequelize was configured correctly to interpret PostgreSQL-specific data types were crucial aspects. Dealing with intricacies in Sequelize's configuration, such as connection pooling and dialect-specific settings, added an additional layer of complexity.
* React Native issues
  + Many libraries were deprecated or had changed, which made adjusting difficult for first-time learners.
  + Expo Go's default error is "Keep text between the tags", but this message is non-descriptive and is often caused by stray whitespace. VS Code would not flag it, so it required extensive debugging.
* Deploying to Google Play (original plan)
  + :( what happened to free deployment to the Google Play store
  + After prepping our app for deployment, we hit a $25 registration fee; in the spirit of hackathons, we decided this was not a step we would take.

## Accomplishments that we're proud of 🏆
Our proudest achievement lies in translating a visionary concept into reality. We embarked on a journey that started with a hand-drawn prototype and culminated in the development of a fully functional application ready for deployment on the Play Store. This transformative process showcases our dedication, creativity, and ability to bring ideas to life with precision and excellence.

## What we learned 🏫
* Nostalgia does not have to be sad
* Brain chemistry is unique! How do we form memories, and why might we forget some? :)

## What's next for MemoryLane 💭
What's next for MemoryLane is an exciting journey of refinement and expansion. In discussions with our fellow hackers, we were already shown interest in a social media platform that isn't centered on self-curation. As this was the team's first time developing with React Native, we plan to gather user feedback to enhance the user experience and implement additional features that resonate with our users. This includes refining the media scrolling functionality, optimizing performance, and incorporating more interactive and nostalgic elements.
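A minimal sketch of the scheduled-delivery flow described above, assuming boto3's S3 and EventBridge Scheduler clients; the bucket name, ARNs, and payload shape are placeholders, not the team's actual configuration:

```python
# Hypothetical sketch: upload a memory's media to S3, then schedule its
# delivery with EventBridge Scheduler. Names and ARNs are placeholders.
import boto3

s3 = boto3.client("s3")
scheduler = boto3.client("scheduler")

def schedule_memory(media_path: str, memory_id: str, deliver_at_iso: str):
    # 1) Store the static asset in S3
    s3.upload_file(media_path, "memorylane-media", f"memories/{memory_id}.jpg")

    # 2) Create a one-off schedule that fires an SNS push at delivery time
    scheduler.create_schedule(
        Name=f"deliver-{memory_id}",
        ScheduleExpression=f"at({deliver_at_iso})",  # e.g. "2024-01-20T09:00:00"
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            "Arn": "arn:aws:sns:us-east-1:123456789012:memory-delivery",  # placeholder
            "RoleArn": "arn:aws:iam::123456789012:role/scheduler-sns",    # placeholder
            "Input": f'{{"memoryId": "{memory_id}"}}',
        },
    )
```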
## Inspiration
In many developed countries across the world, the population is rapidly aging. This poses a variety of issues for senior citizens, including social isolation, an overburdened healthcare system unable to meet their needs, and the widespread effects of neurodegenerative conditions. We aimed to build a solution which would address all three of these issues in a way that is easily accessible and empowering to senior citizens.

## What it does
MemoryLane allows senior citizens to relive and share their cherished memories. The web application combines three main functionalities: a journaling and recall feature for important memories, an AI-powered match-and-chat system for users to discuss experiences they share with other users, and an analytics dashboard which can be used by healthcare professionals to track key indicators of neurodegenerative conditions. Overall, MemoryLane allows users to not only keep their memories fresh but also weave a tapestry of connections with others with similar life experiences.

## How we built it
In order to develop a clean and responsive front-end and versatile back-end, we used Reflex.dev to develop entirely in Python. We also used the InterSystems IRIS database to easily perform vector search as well as other database operations to support the backend functionalities required by MemoryLane. Additionally, we made use of the Together.AI inference API to generate embeddings that match users based on shared experiences (sketched below), perform sentiment analysis to find trends within memory recall data, and create sample data to test our web app with. Finally, we used Google Cloud to implement speech-to-text functionality to increase ease of access to our platform for senior citizens. The majority of our app was built with Python, with a little JavaScript.

## Challenges we ran into
As two of our team members had never done full-stack dev before and one was attending his first hackathon, learning the nuances of new frameworks was initially a challenge, especially getting our environments set up. We’re incredibly grateful to the supportive mentors and sponsors for helping us get unstuck when we ran into issues, which indubitably helped us build our final product.

## Accomplishments that we're proud of
We’re very proud of our clean, intuitive UI, which aims to make the product as accessible as possible to our target audience, senior citizens. Additionally, we believe that MemoryLane is a truly unique product that fills a niche which hasn’t been focused on before: social media for the elderly, especially in combination with its potential to improve healthcare by aggregating data about the elderly. Also, half of our team was able to go from near-zero web dev knowledge to familiarity with important tools and techniques, which we thought was very representative of the spirit of hackathons – coming together to meet new people and learn new things in a fast-paced creative environment.

## What we learned
Our journey with MemoryLane has been an enlightening dive into several new technologies. We harnessed the power of Reflex.dev for frontend and full-stack development, explored the nuances in our data with InterSystems IRIS’s vector search on text embeddings from Together.AI, and learned how to bring voice input to life with Google Cloud speech-to-text. Together.AI has also become our ally in understanding our users' needs and narratives with natural language processing.
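A minimal sketch of the embedding-based matching step referenced above, assuming the Together Python SDK's embeddings endpoint; the model name and memory records are placeholders, not the team's exact code:

```python
# Hypothetical sketch of matching users by shared experiences.
# Assumes the `together` Python client; model and data are placeholders.
import numpy as np
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(
        model="togethercomputer/m2-bert-80M-8k-retrieval",  # placeholder model
        input=texts,
    )
    return np.array([d.embedding for d in resp.data])

def best_match(new_memory: str, other_memories: dict[str, str]) -> str:
    """Return the user whose memory is most similar to `new_memory`."""
    vecs = embed([new_memory] + list(other_memories.values()))
    query, corpus = vecs[0], vecs[1:]
    sims = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
    return list(other_memories)[int(np.argmax(sims))]
```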
## What's next for MemoryLane
Looking to the horizon, we are definitely looking into expanding MemoryLane’s reach. Our roadmap includes scaling our solution and refining our data model to improve performance, and exploring business models that are sustainable and align with our mission. We envision forming partnerships with healthcare providers, memory care centers, and senior living communities. Integrating IoT could also redefine ease of use for seniors. Keeping innovation in mind, we'll dive deeper into Reflex's capabilities and explore bespoke AI models with Together AI. We aim to improve the technical aspects of our platform as well, including venturing into voice tone analysis to add another layer of emotional intelligence to our app.

**We believe that MemoryLane is not just a walk in the past – it's a stride into the future of senior healthcare.**
## Inspiration Behind DejaVu 🌍
The inspiration behind DejaVu is deeply rooted in our fascination with the human experience and the power of memories. We've all had those moments where we felt a memory on the tip of our tongues but couldn't quite grasp it, like a fleeting dream slipping through our fingers. These fragments of the past hold immense value, as they connect us to our personal history, our emotions, and the people who have been a part of our journey. 🌟✨

We embarked on the journey to create DejaVu with the vision of bridging the gap between the past and the present, between what's remembered and what's forgotten. Our goal was to harness the magic of technology and innovation to make these elusive memories accessible once more. We wanted to give people the power to rediscover the treasures hidden within their own minds, to relive those special moments as if they were happening all over again, and to cherish the emotions they evoke. 🚀🔮

The spark that ignited DejaVu came from a profound understanding that our memories are not just records of the past; they are the essence of our identity. We wanted to empower individuals to be the architects of their own narratives, allowing them to revisit their life's most meaningful chapters. With DejaVu, we set out to create a tool that could turn the faint whispers of forgotten memories into vibrant, tangible experiences, filling our lives with the warmth of nostalgia and the joy of reconnection. 🧠🔑

## How We Built DejaVu 🛠️
It all starts with the hardware component. A video/audio-recording Python script runs on a laptop connected to a webcam. The webcam is mounted on a hat the user wears, recording video from their point of view. Once the video recording is stopped, the video is uploaded to a storage bucket on Google Cloud. 🎥☁️

The video is retrieved by the backend, where it can then be processed. Vector embeddings are generated for both the audio and the video so that semantic search features can be integrated into our Python-based software. The resulting vectors can then be leveraged to deliver content to the front-end through a Flask microservice. Through the Cohere API, we were able to vectorize audio and contextual descriptions, as well as summarize all results on the client side. (A sketch of this search flow appears at the end of this write-up.) 🖥️🚀

Our front-end, which was created using Next.js and hosted on Vercel, features a landing page and a search page. On the search page, a user can enter a query for a memory they are attempting to recall. The query text is sent to the backend through a request, and the necessary information about the location of this memory is sent back to the frontend. The video where this memory occurs is then displayed on the screen, allowing the user to get rid of the ominous feeling of déjà vu. 🔎🌟

## Challenges We Overcame at DejaVu 🚧
🧩 Overcoming Hardware Difficulties 🛠️
One of the significant challenges we encountered during the creation of DejaVu was finding the right hardware to support our project. Initially, we explored using AdHawk glasses, which unfortunately had removed functionality critical to our project's success. Additionally, we found that the Raspberry Pi, while versatile, didn't possess the computing power required for our memory time machine. To overcome these hardware limitations, we had to pivot and develop Python scripts for our laptops, ensuring we had the necessary processing capacity to bring DejaVu to life.
This adaptation proved to be a critical step in ensuring the project's success. 🚫💻

📱 Navigating the Complex World of Vector Embedding 🌐
Another formidable challenge we faced was in the realm of vector embedding. This intricate process, essential for capturing and understanding the essence of memories, presented difficulties throughout our development journey. We had to work diligently to fine-tune and optimize the vector embedding techniques to ensure the highest quality results. Overcoming this challenge required a deep understanding of the underlying technology and relentless dedication to refining the process. Ultimately, our commitment to tackling this complexity paid off, as it is a crucial component of DejaVu's effectiveness. 🔍📈

🌐 Connecting App Components and Cloud Hosting with Google Cloud 🔗
Integrating the various components of the DejaVu app and ensuring seamless cloud hosting were additional challenges we had to surmount. This involved intricate work to connect user interfaces, databases, and the cloud infrastructure with Google Cloud services. The complexity of this task required meticulous planning and execution to create a cohesive and robust platform. We overcame these challenges by leveraging the expertise of our team and dedicating considerable effort to ensure that all aspects of the app worked harmoniously, providing users with a smooth and reliable experience. 📱☁️

## Accomplishments We Celebrate at DejaVu 🏆
🚀 Navigating the Hardware-Software Connection Challenge 🔌
One of the most significant hurdles we faced during the creation of DejaVu was connecting hardware and software seamlessly. The integration of our memory time machine with the physical devices and sensors posed complex challenges. It required a delicate balance of engineering and software development expertise to ensure that the hardware effectively communicated with our software platform. Overcoming this challenge was essential to make DejaVu a user-friendly and reliable tool for capturing and reliving memories, and our team's dedication paid off in achieving this intricate connection. 💻🤝

🕵️‍♂️ Mastering Semantic Search Complexity 🧠
Another formidable challenge we encountered was the implementation of semantic search. Enabling DejaVu to understand the context and meaning behind users' search queries proved to be a significant undertaking. Achieving this required advanced natural language processing and machine learning techniques. We had to develop intricate algorithms to decipher the nuances of human language, ensuring that DejaVu could provide relevant results even for complex or abstract queries. This challenge was a testament to our commitment to delivering a cutting-edge memory time machine that truly understands and serves its users. 📚🔍

🔗 Cloud Hosting and Cross-Component Integration 🌐
Integrating the various components of the DejaVu app and hosting data on Google Cloud presented a multifaceted challenge. Creating a seamless connection between user interfaces, databases, and cloud infrastructure demanded meticulous planning and execution. Ensuring that the app operated smoothly and efficiently, even as it scaled, required careful design and architecture. We dedicated considerable effort to overcome this challenge, leveraging the robust capabilities of Google Cloud to provide users with a reliable and responsive platform for preserving and reliving their cherished memories. 📱☁️
## Lessons Learned from DejaVu's Journey 📚
💻 Innate Hardware Limitations 🚀
One of the most significant lessons we've gleaned from creating DejaVu is the importance of understanding hardware capabilities. We initially explored using Arduinos and Raspberry Pis for certain aspects of our project, but we soon realized their innate limitations. These compact and versatile devices have their place in many projects, but for a memory-intensive and complex application like DejaVu, they proved to be impractical choices. 🤖🔌

📝 Planning Before Executing 🤯
A crucial takeaway from our journey of creating DejaVu was the significance of meticulously planning the user flow before diving into coding. There were instances where we rushed into development without a comprehensive understanding of how users would interact with our platform. This led to poor systems design, resulting in unnecessary complications and setbacks. We learned that a well-thought-out user flow and system architecture are fundamental to the success of any project, helping to streamline development and enhance user experience. 🚀🌟

🤖 Less Technology is More Progress 💡
Another valuable lesson revolved around the concept that complex systems can often be simplified by reducing the number of technologies in use. At one point, we experimented with a CockroachDB serverless database, hoping to achieve certain functionalities. However, we soon realized that this introduced unnecessary complexity and redundancy into our architecture. Simplifying our technology stack and focusing on essential components allowed us to improve efficiency and maintain a more straightforward and robust system. 🗃️🧩

## The Future of DejaVu: Where Innovation Thrives! 💫
🧩 Facial Recognition and Video Sorting 📸
With our eyes set on the future, DejaVu is poised to bring even more remarkable features to life, and facial recognition will play a pivotal role in enhancing the user experience. Our ongoing development efforts will allow DejaVu to recognize individuals in your video archives, making it easier than ever to locate and relive moments featuring specific people. This breakthrough will enable users to effortlessly organize their memories, unlocking a new level of convenience and personalization. 🤳📽️

🎁 Sharing Memories In-App 📲
Imagine being able to send a cherished memory video from one user to another, all within the DejaVu platform. Whether it's a heartfelt message, a funny moment, or a shared experience, this feature will foster deeper connections between users, making it easy to celebrate and relive memories together, regardless of physical distance. DejaVu aims to be more than just a memory tool; it's a platform for creating and sharing meaningful experiences. 💌👥

💻 Integrating BCI (Brain-Computer Interface) Technology 🧠
This exciting frontier will open up possibilities for users to interact with their memories in entirely new ways. Imagine being able to navigate and interact with your memory archives using only your thoughts. This integration could revolutionize the way we access and relive memories, making it a truly immersive and personal experience. The future of DejaVu is all about pushing boundaries and providing users with innovative tools to make their memories more accessible and meaningful. 🌐🤯
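A minimal sketch of the semantic search flow referenced above, assuming Cohere's v3 embed endpoint; the transcript segments and model name are placeholders, not DejaVu's actual code:

```python
# Hypothetical sketch of the memory search flow: embed transcript segments
# once, then embed the query and return the best-matching video timestamp.
import numpy as np
import cohere

co = cohere.Client()  # reads CO_API_KEY from the environment

segments = [  # placeholder data; in production this comes from the backend
    {"video": "day1.mp4", "start_s": 120, "text": "talked about hackathon ideas"},
    {"video": "day1.mp4", "start_s": 540, "text": "coffee with Alex near the lab"},
]

seg_vecs = np.array(
    co.embed(texts=[s["text"] for s in segments], model="embed-english-v3.0",
             input_type="search_document").embeddings
)

def find_memory(query: str) -> dict:
    q = np.array(co.embed(texts=[query], model="embed-english-v3.0",
                          input_type="search_query").embeddings[0])
    sims = seg_vecs @ q / (np.linalg.norm(seg_vecs, axis=1) * np.linalg.norm(q))
    return segments[int(np.argmax(sims))]  # video + timestamp to play back
```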
partial
## Inspiration
It can be tough coming up with a unique recipe each and every week. Sometimes there are good deals on specific items (especially for university students), and it may not be obvious what to cook with those ingredients. *Rad Kitchen says goodbye to last-minute trips to the grocery store and hello to delicious, home-cooked meals with the Ingredient Based Recipe Generator chrome extension.*

## What it does
Rad Kitchen is a Google Chrome extension, the ultimate tool for creating delicious recipes with the ingredients you already have on hand. The extension is easy to install and is connected to Radish's ingredient website. By browsing and saving ingredients of interest from the Radish website, users can store them in their personal ingredient library. The extension will then generate a recipe based on the ingredients you have saved and provide you with a list of recipes that you can make with the ingredients you already have. You can also search for recipes based on specific dietary restrictions or cuisine type. It gives a final image that shows what the dish may look like.

## How we built it
* Google Chrome extension using the React framework. The extension required a unique manifest.json file specific to Google Chrome extensions.
* Cohere NLP to take user input of different ingredients and generate a recipe.
* OpenAI's API to generate an image from text parameters, creating an image unique to the prompt.
* Material UI and React to create an interactive website.
* Integrated Twilio to send the generated recipe and image via text message to the user. The user inputs their number and the Twilio API is called. The goal is to give people a more permanent place to refer to after the recipe is generated. (A sketch of this flow follows below.)

## Challenges we ran into
* Parsing data - some issues with the parameters and confusion between objects, strings, and arrays
* Dealing with different APIs was a unique challenge (the DALL-E 2 API was more limited)
* One of our group members could not make it to the event, so we were a smaller team
* Learning curve for creating a Chrome Extension
* Twilio API documentation
* Cohere API - determining the best way to standardize message output while getting unique responses

## Accomplishments that we're proud of
* This was our first time building a Google Chrome extension. The file structure and specificity of the manifest.json file made it difficult, and Manifest v3 is quite different from Manifest v2.
* For this hackathon, it was really great to tie our project in well with the different events we applied for

## What we learned
* How to create a Google Chrome extension. It cannot be overstated how new of an experience it was, and it is fascinating how useful a Chrome extension can be as a technology
* How to do API calls and the importance of clear function calls

## What's next for Rad Kitchen
* Pitching and sharing the technology with Radish's team at this hackathon
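A minimal sketch of the recipe-then-SMS flow described above, assuming Cohere's generate endpoint and the Twilio Python client; the prompt wording, credentials, and phone numbers are placeholders:

```python
# Hypothetical sketch of generating a recipe and texting it to the user.
import cohere
from twilio.rest import Client

co = cohere.Client()                          # reads CO_API_KEY
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def recipe_to_phone(ingredients: list[str], phone: str) -> None:
    prompt = (
        "Write a short recipe using only these ingredients: "
        + ", ".join(ingredients)
    )
    recipe = co.generate(prompt=prompt, max_tokens=300).generations[0].text

    twilio.messages.create(
        to=phone,
        from_="+15550001234",  # placeholder Twilio number
        body=recipe[:1500],    # keep within SMS-friendly length
    )
```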
## Inspiration
In the fast-paced world of networking and professional growth, connecting with students, peers, mentors, and like-minded individuals is essential. However, the need to manually jot down notes in Excel and the risk of missing out on valuable follow-up opportunities can be a real hindrance.

## What it does
Coffee Copilot transcribes, summarizes, and suggests talking points for your conversations, eliminating manual note-taking and maximizing networking efficiency. It is also able to take forms with Genesys.

## How we built it
**Backend**:
* Python + FastAPI was used to serve CRUD requests
* Cohere was used for both text summarization and text generation using their latest Coral model
* CockroachDB was used to store user and conversation data
* AssemblyAI was used for speech-to-text transcription and speaker diarization (i.e., identifying who is talking); a sketch of this step follows below

**Frontend**:
* We used Next.js for its frontend capabilities

## Challenges we ran into
We ran into a few of the classic problems - going in circles about what idea we wanted to implement, biting off more than we could chew with scope creep, and some technical challenges that **seem** like they should be simple (such as sending an audio file as a blob to our backend 😒).

## Accomplishments that we're proud of
A huge last-minute push to get us over the finish line.

## What we learned
We learned some new technologies like working with LLMs at the API level, navigating heavily asynchronous tasks, and using event-driven patterns like webhooks. Aside from technologies, we learned how to disagree but move forward, when to cut our losses, and how to leverage each other's strengths!

## What's next for Coffee Copilot
There's quite a few things on the horizon to look forward to:
* Adding sentiment analysis
* Allowing the user to augment the summary and the prompts that get generated
* Fleshing out the user structure and platform (adding authentication, onboarding more users)
* Using smart glasses to take pictures and recognize people you've met before
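A minimal sketch of the transcription-plus-summary step referenced above, assuming the AssemblyAI Python SDK and Cohere's chat endpoint; the wiring and prompt are illustrative, not the team's exact code:

```python
# Hypothetical sketch of diarized transcription followed by a summary.
import assemblyai as aai
import cohere

aai.settings.api_key = "AAI_KEY"  # placeholder
co = cohere.Client()              # reads CO_API_KEY

def summarize_conversation(audio_path: str) -> str:
    config = aai.TranscriptionConfig(speaker_labels=True)  # who said what
    transcript = aai.Transcriber().transcribe(audio_path, config=config)

    dialogue = "\n".join(
        f"Speaker {u.speaker}: {u.text}" for u in transcript.utterances
    )
    resp = co.chat(message="Summarize this networking chat and suggest "
                           "three follow-up talking points:\n" + dialogue)
    return resp.text
```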
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.

## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noise and conversation. Even then, it will still take your order with staggering precision. (A sketch of the order-extraction step follows below.)

## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was also done in Canva with a dash of Harvard colors.

## Challenges we ran into
One major challenge was getting the different parts of the app (frontend, backend, and AI) to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.

## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.

## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.

## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast food chains.
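A minimal sketch of the order-extraction step referenced above. The write-up doesn't name its model provider, so OpenAI's chat API is used here as a stand-in; the menu, prompt, and model name are placeholders:

```python
# Hypothetical sketch of turning a transcribed utterance into order items.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY

MENU = ["burger", "fries", "milkshake"]  # placeholder menu

def extract_order(transcript: str) -> list[dict]:
    prompt = (
        f"Menu items: {', '.join(MENU)}. From this customer utterance, "
        'return JSON: {"items": [{"name": ..., "size": ..., "qty": ...}]}. '
        "Ignore anything that is not an order.\n" + transcript
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["items"]
```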
partial
## Our Project
Our project aims to solve the problem of unavailable parking spaces in crowded areas. A live feed is taken from a camera in the parking lot and fed to a Python script which uses OpenCV to detect empty slots in the lot. Information about the availability of parking spots is then uploaded to an online real-time database hosted on Firebase. This enables the web application to access the data and inform users about the availability of free parking spots in real time. (A sketch of the detection-and-upload loop follows at the end of this write-up.)

The application has an interface that draws the map of virtually any parking area. It is built with JavaScript and canvas. The interface allows us to easily map each parking slot and create a model for visual representation on the web. All slots are custom, so it is easy to resize the shapes, and to delete and name the parking slots in accordance with the different lots around the locality. The map is easily uploaded and stored on Firebase. This map can then be used to view real-time availability of the spots in a lot, and can be embedded in websites and mobile applications. This being just the first iteration of the map grid tool, in the future we aim to automate map generation with AI for minimal manual intervention.

The website was created using the Django web framework. It uses Firebase and Django's default SQLite for the backend, and HTML, CSS, and JavaScript on the front end. The home page shows a picture of the parking lot and a link to see its live feed. Below that, we can see the total spaces in the parking lot, the occupied spaces, and the free spaces, all updated in real time. Below that is a bit about us and which parts of the project we each worked on. The website uses decorators to block unauthorized or unsubscribed users from accessing the dashboard or the overview page without a subscription. The subscribe page handles the payment process, where the user, in addition to their card details, enters their email, to which we send login credentials so they can set up their account for premium use. The dashboard shows a brief overview of the location. The detail button takes the user to a web page with a better description of the parking space, with details about every parking space and its status, also updated in real time. A green row indicates a free space, whereas a red row indicates an occupied space.

## Our Aim
We aim to deploy our project to at least one parking lot to test it out. This would enable us to identify unforeseen problems, experiment with different angles and lighting conditions, and calibrate our detection parameters further to increase accuracy. After perfecting the application for one parking lot, we aim to slowly increase our reach. The web application is already set up to accept data for multiple parking lots and display it.

## Future Improvements
Currently the car detection system is only optimized for the video we used, and different lighting conditions might yield a lower accuracy rate. So, we aim to change our approach by training a dedicated model to detect cars. Furthermore, we have manually set the coordinates of the parking slots by drawing masks over the picture. This process can be improved by using line detection to automatically detect and mark parking slots. Our current application implements the use of cryptocurrency, but the user has to manually transfer the amount to the given wallet address. We are about 90% done with integrating the Pi SDK into our application to make payments more efficient.
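A minimal sketch of the detection-and-upload loop, assuming per-slot regions and a simple edge-content heuristic (the write-up doesn't detail its exact OpenCV approach); slot coordinates, the threshold, and the Firebase URL are placeholders:

```python
# Hypothetical sketch: count empty slots by measuring edge content inside
# predefined slot rectangles, then push counts to the Firebase realtime DB.
import cv2
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")  # placeholder
firebase_admin.initialize_app(cred, {"databaseURL": "https://example.firebaseio.com"})

SLOTS = [(50, 80, 120, 60), (180, 80, 120, 60)]  # x, y, w, h per slot (placeholder)

cap = cv2.VideoCapture(0)  # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    edges = cv2.Canny(gray, 50, 150)

    free = 0
    for x, y, w, h in SLOTS:
        # An empty slot shows little edge content; a parked car adds many edges.
        if cv2.countNonZero(edges[y:y + h, x:x + w]) < 500:  # tuned threshold
            free += 1

    db.reference("lots/lot1").update({"free": free, "total": len(SLOTS)})
```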
## Inspiration
In India, there are many cases of corruption in the medicine industry. Medicines are generally expensive and not accessible to people from financially backward households. As a result, there are many unnecessary deaths caused by lack of proper treatment. People who buy the medicines at high costs are often caught in debt traps and face a lifetime of discomfort. Every year, the government spends 95 million CAD on medicines from private medical stores to supply ration shops in rural areas, which sell them at cheap prices to poor people. However, private medical store owners bribe government hospital doctors to prescribe medicines which can only be found in private stores, causing a loss for the government and creating debt traps for the people.

## What it does
Every medicine has an alternative, as measured by the salts present in it. Our app provides a list of alternative medicines for every medicine prescribed by doctors, giving patients a variety to choose from so they can go to ration stores that might have the alternative at cheaper prices. This information is retrieved from the data sets of medicines published by the Medical Authority of India. (A sketch of this lookup follows below.)

## How we built it
We built the front end using Swift, Figma, and Illustrator. For the backend, we used Google Firebase and SQL for collecting information about medicines from official sources. We also used the Google Maps API and the Google Translate API to track the location of the patient and provide information in their own language/dialect.

## Challenges we ran into

## Accomplishments that we're proud of
We are proud of coming up with an idea that solves problems for both the government and the patients by tackling corruption. We are also proud of our ability to think of and create such an elaborate product in a restricted time frame.

## What's next for Value-Med
If we had more time, we would have integrated Voiceflow as well, so that people who cannot read could receive voice commands for navigating the application.
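A minimal sketch of the salt-based alternatives lookup referenced above, assuming a tabular medicines dataset; the CSV schema (name, salts, price_inr) is a placeholder, not the official dataset's layout:

```python
# Hypothetical sketch: group a medicines table by salt composition and
# return cheaper equivalents for a prescribed medicine.
import pandas as pd

meds = pd.read_csv("medicines.csv")  # placeholder columns: name, salts, price_inr

def normalize_salts(salts: str) -> frozenset:
    # "Paracetamol 500mg + Caffeine" -> {"paracetamol 500mg", "caffeine"}
    return frozenset(s.strip().lower() for s in salts.split("+"))

meds["salt_key"] = meds["salts"].map(normalize_salts)

def alternatives(medicine_name: str) -> pd.DataFrame:
    row = meds[meds["name"].str.lower() == medicine_name.lower()].iloc[0]
    same_salts = meds[meds["salt_key"] == row["salt_key"]]
    # Cheaper equivalents first, excluding the prescribed brand itself
    return (same_salts[same_salts["name"] != row["name"]]
            .sort_values("price_inr"))
```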
## Inspiration
The quest to find parking in a busy city has never been harder. Do you drive around hoping to find an open spot, or do you finesse your way into a spot and hope you don't get towed? Our team wanted an Airbnb-style service for street parking to alleviate the woes of the common driver, and thus InDemandParking was born. We developed InDemandParking to connect people looking for parking spots with people who are about to leave their spot, streamlining the search for parking and giving users more time for that concert or fancy dinner they planned.

## What it does
Users who are about to leave their parking spot and want to make a few quick bucks can list their spot on the app. Users looking for parking at a specific time mark their destination on a map, pulling up a list of spots posted by others. For a small fee determined by us using a dynamic-pricing model, they can reserve the spot, and a percentage of the fee goes to the owner who posted the parking spot. With this setup, people save time looking for parking for a small fee, while users about to leave their parking spots can earn a few bucks at no extra cost to their schedule.

In addition to reservations, data collected in the form of search queries and spot listings from users is used to calculate "hotzones" of parking, where there are likely to be lots of available spots. People who don't want to pay to reserve a spot can use these recommendations to drive towards a general area to find parking. Looking for parking has never been easier. (A sketch of the hotzone clustering follows below.) In addition, we have integrated the provided Ford API to seamlessly transition the app from your phone to the comfort of your Ford personal vehicle. Amazing :o

## How we built it
The majority of the backend work consisted of creating a database for users and parking spot listings, determining clusters of available parking spaces, and handling requests from users. We used Java due to its scalability. For data analysis, we used Python and Flask to connect to our machine learning microservice. We also used Spring cuz it's kinda nice. Due to the scarcity of pretty languages, we had no choice but to stick with React for our front end work.

## Challenges we ran into
React would not cooperate. ML libraries were hard to pick up.

## Accomplishments that we're proud of
Being able to build such a comprehensive project in the small timeframe that we were given has left us all with a huge sense of fulfillment. We hope that the app is able to impact the lives of many, whether it be saving them precious time or bringing them a few dollars closer to the three comma club.

## What we learned
Frontend is a pain haha...

## What's next for InDemandParking
Emerging global markets, generalization to other services such as restaurant reservations or cafe seats, and a move away from React. Also DocuSign API integration for that $1000 cash consideration.
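A minimal sketch of the hotzone step referenced above, assuming k-means clustering over listing coordinates; the sample coordinates and number of clusters are placeholders, not the team's model:

```python
# Hypothetical sketch: cluster recent listing coordinates so the app can
# suggest areas that tend to have open spots.
import numpy as np
from sklearn.cluster import KMeans

# (lat, lon) of recently posted spots; in production this would come
# from the listings database.
listings = np.array([
    [40.7484, -73.9857],
    [40.7490, -73.9845],
    [40.7306, -73.9866],
    [40.7312, -73.9852],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(listings)

def nearest_hotzone(lat: float, lon: float) -> tuple[float, float]:
    """Return the hotzone centroid closest to the user's destination."""
    label = kmeans.predict([[lat, lon]])[0]
    return tuple(kmeans.cluster_centers_[label])
```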
partial
## Inspiration
We wanted to make energy monitoring accessible and simple, and allow the user to develop greater total control over their energy consumption.

## What it does
We have built a cloud-enabled meter that measures energy consumption per appliance. It allows all the appliances in a home to be monitored from one place.

## How we built it
We built the meter using a Raspberry Pi and two Arduinos. Then we built a web app using Express to demonstrate the data gathered from the meter. Using the web app, users can monitor their energy consumption in real time, pay for their energy, and take advantage of our state-of-the-art energy usage optimizer. (A sketch of the meter-to-server reporting loop follows below.)

## Challenges we ran into
* Making graphs display data in real time
* Building a secure payment system
* Building the energy usage optimizer

## Accomplishments that we're proud of
* Everything works
* Building a scalable application

## What we learned
Using the Raspberry Pi's Wi-Fi is not very optimal for real-time stats.

## What's next for Electro
* Implement machine learning in the energy usage optimizer
* Store payment history in a blockchain
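A minimal sketch of the meter-to-server loop, assuming the Raspberry Pi reads per-appliance wattage from the Arduinos over serial and POSTs it to the Express server; the serial port, message format, and endpoint URL are placeholders:

```python
# Hypothetical sketch of the meter loop running on the Raspberry Pi.
import time

import requests
import serial  # pip install pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # placeholder port
SERVER = "https://electro.example.com/api/readings"       # placeholder URL

while True:
    line = arduino.readline().decode().strip()  # e.g. "fridge:87.5"
    if ":" in line:
        appliance, watts = line.split(":", 1)
        requests.post(SERVER, json={
            "appliance": appliance,
            "watts": float(watts),
            "ts": time.time(),
        })
    time.sleep(1)  # one reading per second keeps Wi-Fi load modest
```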
## Inspiration
With IoT becoming more popular, we wanted to create a way for users of all types to connect their home to the internet cheaply. Particle Photons run about $20 each, which can add up quickly when connecting multiple devices.

## What it does
PickyHome aims to allow users to connect multiple objects to the internet with only one Particle Photon. This works by using IR relays, which are much cheaper than using multiple Photons. The IR relays are connected to our demo lamps and turn on when the Photon signals them. Our web app allows the user to control their IoT device from their PC or mobile device. The user can also monitor the temperature and energy usage in their apartment. An easy-to-read chart shows the user real-time data coming from the sensors, and allows for user input to calculate the expected price of the user's energy bill. This lets users be more mindful of their energy usage and avoid unexpected charges. (A sketch of the cost estimate follows below.)

## How we built it
We tried to take as much advantage of free services as we could to build our project. This included using a free Heroku dyno to host our server and mLab to host our MongoDB database. The web application was built with Node.js on the back-end using the Express framework, and HTML/CSS (Bootstrap) and JavaScript on the front-end. The Photon periodically took sensor readings and sent them to our server in a POST request, where they would then be written into our database. The app could also retrieve that data to make graphs of the sensor readings over time, as well as calculate the total estimated energy costs. On the hardware side, we used a Kill A Watt to read the changes in current from the power strip. From there, we sent the data over to our database to be plotted using Highcharts. The lights communicated with our Photon using IR relays. The Photon acted as a hub in all of this, connecting everything to our web app.

## Challenges we ran into
We were a bit too ambitious in the beginning, and couldn't get to everything we set out for. We had a plan to incorporate the ability to parse voice input into commands that could control the various devices in the home, but that fell short as we later realized the added difficulty of that task. Also, some of the hardware didn't end up working as well as we were hoping it would.

## Accomplishments that we're proud of
We made a hack consisting of hardware and software in a non-trivial manner. Our team learned a lot during this process. Also, staying up for the majority of a 24-hour period is no easy task, and we're proud to have been able to continue working hard throughout that time.

## What we learned
Our first-time hacker team member learned basic web development (i.e., HTML, CSS, Bootstrap). Other team members gained familiarity with MongoDB and hardware.

## What's next for PickyHome
We would love to be able to successfully add the voice commands feature to the project and fix up some other software issues. Also, there were some hardware-related issues that could be improved upon.
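A minimal sketch of the energy-cost estimate referenced above: integrate wattage samples into kWh and multiply by a user-supplied rate. The sample data and rate are placeholders, not PickyHome's stored values:

```python
# Hypothetical sketch of estimating an energy bill from wattage samples.
def estimate_cost(samples: list[tuple[float, float]], rate_per_kwh: float) -> float:
    """samples: (unix_ts, watts) pairs ordered by time."""
    kwh = 0.0
    for (t0, w0), (t1, _) in zip(samples, samples[1:]):
        hours = (t1 - t0) / 3600.0
        kwh += w0 * hours / 1000.0  # left-endpoint approximation
    return kwh * rate_per_kwh

readings = [(0, 60.0), (1800, 62.0), (3600, 58.0)]  # one hour of samples
print(f"Estimated cost: ${estimate_cost(readings, rate_per_kwh=0.13):.4f}")
```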
## Inspiration
Throughout high school, our team and other friends regularly called each other for many purposes. However, our parents coming into our rooms for various reasons would always lead to embarrassing moments on the call. As we move into university and start our careers, the concurrent Covid-19 pandemic has amplified this problem for workers around the world. We developed Hush to tackle this issue, one we have faced ourselves in the past.

## What it does
Hush is an application that makes sure your mic listens to only what you want it to. It connects a mobile device to your desktop and mutes your computer microphone when the mobile device detects movement of the door to your room. (A sketch of the detection logic follows below.)

## How we built it
Hush was built using Swift, and integrations with Slack were deployed using autocode.

## Challenges we ran into
Our inexperience building macOS applications made it a challenge to get the application working with smooth animations. Networking between iOS and macOS using MultipeerConnectivity with automatic connection was quite challenging.

## Accomplishments that we're proud of
As first-time hackers, we’re really proud to have successfully made something that works and is practical.

## What we learned
This was the first time our team had used autocode, but we found it to be an amazing platform. We were able to get the Slack bot connected to our Slack channel rather quickly. Setting up the webhook was also very simple. Many of us see ourselves using autocode in the future for fast prototyping and integrations.

## What's next for Hush
This is a proof of concept that solves the problem at hand. However, many individuals will prefer to have their phones by their side. With this in mind, if we were to create a real, marketable product, it would be in the form of a small, single-button device that can be easily attached to any door. The device would be cheap to produce, requiring only an accelerometer and Bluetooth pairing capabilities.
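The core door-detection idea is simple threshold logic over accelerometer readings. Here is a minimal Python sketch of that logic (Hush itself is written in Swift, and the threshold and mic handle are placeholders):

```python
# Minimal sketch of door-motion detection; values are placeholders.
import math

THRESHOLD = 0.15  # g's of deviation from rest that count as "door moved"

def door_moved(ax: float, ay: float, az: float) -> bool:
    # At rest the accelerometer reads ~1 g (gravity); deviations mean motion.
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - 1.0) > THRESHOLD

def on_sample(ax: float, ay: float, az: float, mic) -> None:
    if door_moved(ax, ay, az):
        mic.mute()  # hypothetical handle to the desktop's microphone
```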
losing
## Inspiration
Research has shown us that new hires, women, and under-represented minorities in the workplace can feel intimidated or uncomfortable in team meetings. Since the start of remote work, new hires lack in-real-life connections, are unable to take the pulse of the group, and are fearful of speaking their minds. The majority of the time this is also due to more experienced individuals interrupting them or talking over them without giving them a chance to speak up. This feeling of being left out often makes people not contribute to their highest potential. Links to the reference studies and articles are at the bottom.

As new-hire interns every summer, we personally experienced the communication and participation problem in team meetings and stand-ups. We were new and felt intimidated to share our thoughts for fear of them being dismissed or ignored. Even though we were new hires and had little background, we still had sound ideas and opinions to share that were instead bottled up inside us. We found out that the situation is the same for women in general and especially for under-represented minorities. We built this tool for ourselves and those around us, to feel comfortable and included in team meetings. Companies and organizations must do their part in ensuring that their workplace is an inclusive community for all and that everyone has the opportunity to participate equally at their highest potential. With the pandemic and widespread adoption of virtual meetings, this is an important global problem that we must all address, and we believe Vocal can help solve it.

## What it does
Vocal empowers new hires, women, and under-represented minorities to be more involved and engaged in virtual meetings for a more inclusive team. Google Chrome is extremely prevalent, and our solution is a proof-of-concept Chrome Extension and Web Dashboard that works with Google Meet meetings. Later we would support other platforms such as Zoom, Webex, and Skype. When the user joins a Google Meet meeting, our extension automatically detects it and collects statistics regarding the participation of each team member. A percentage is shown next to each name to indicate their contribution, along with a ranking that indicates how often you spoke compared to others. When the meeting ends, all of this data is sent to the web app dashboard using Google Cloud and a Firebase database. On the web app, users can see their participation in the current meeting and their progress from past meetings through different metrics.

Plants are how we gamify participation. Your personal plant grows the more you contribute in meetings. Meetings are organized into sprints, and contribution throughout the sprint is reflected in the growth of the plant.

**Dashboard**: You can see your personal participation statistics. It shows your plant, a monthly interaction-level graph, and your percent interaction with other team members (how often, and which teammates, you piggyback on when responding). Lastly, it also has overall statistics such as percent increase in interactions compared to last week, meeting participation streak, average engagement time, and total time spoken. You can see your growth in participation reflected in the plant's growth.

**Vocal provides lots of priceless data for management, HR, and the team overall to improve productivity and inclusivity.**

**Team**: Many times our teammates are stressed or go through other feelings but simply bottle them up.
On the Team page, we provide a Team Sentiment Graph and Team Sentiments. The graph shows how everyone on the team has been feeling during the current sprint. Team members check in anonymously at the end of every week on how they're feeling (Stressed, Anxious, Neutral, Calm, Joyful), and the whole team can see it. If someone's feeling low, other teammates can reach out anonymously in the chat and offer them support, and both can choose to reveal their identities if they want. **Feeling that your team cares about you and your mental health can foster an inclusive community.**

**Sprints Garden**: This includes all of the previous sprints that you completed. It also shows the whole team's garden, so you can compare how much you have been contributing relative to your teammates.

**Profile**: This is your personal profile where you will see your personal details, the plants you have grown over all the sprints you have worked on - your forest - and your anonymous conversations with your team members. Your garden is here to motivate you, helping you grow more plants and ultimately contribute more to meetings.

**Ethics/Privacy: We found very interesting ways to collect speaking data without being intrusive. When the user is talking, only the mic pulses are recorded and analyzed to determine that a person spoke. No voice data or transcription is collected, ensuring that everyone can feel safe while using the extension.** (A sketch of this pulse-based approach follows below.)

**Sustainability/Social Good**: Companies that use Vocal can plant the trees grown during sprints in real life by partnering with organizations that plant real trees under a corporate social responsibility (CSR) initiative.

## How we built it
The system is made up of three independent modules.

Chrome Extension: This module works with Google Meet, calculates the statistics of the people who joined the meeting, tracks the amount of time each individual contributes, and pushes those values to the database.

Firebase: It stores the stats for each user and the meetings they attended: percentage contribution, their role, etc.

Web Dashboard: Contains the features listed above. It fetches data from Firebase and renders 3 sections on the portal.
a. Personal Garden - where an individual can see their overall performance and stats, and maintain a personal plant streak.
b. Group Garden - where you can see the overall performance of the team, team sentiment, and the anonymous chat function. After each sprint cycle, individual plants are added to the nursery.
c. Profile - with personal meeting logs, and ideas and thoughts taken in real-time calls.

## Challenges we ran into
We had challenges connecting the database with the Chrome extension. The Google Meet statistics were also difficult to collect, since we needed clever ways to gather speaking statistics without infringing on privacy. Also, 36 hours was a very short time span for us to implement so many features; we faced a lot of time pressure, but we learned to work well under pressure!
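A minimal sketch of the privacy-preserving stats idea described above: per tick, only a boolean "is this participant's mic active?" pulse is recorded, never audio. The tick length and data shapes are placeholders, not Vocal's actual implementation:

```python
# Hypothetical sketch: convert voice-activity pulses into participation stats.
from collections import defaultdict

TICK_SECONDS = 0.5
pulses: dict[str, int] = defaultdict(int)  # participant -> active ticks

def on_tick(active_speakers: set[str]) -> None:
    """Called every tick with whoever's mic indicator is lit."""
    for name in active_speakers:
        pulses[name] += 1

def participation() -> dict[str, float]:
    """Percent of total speaking time per participant."""
    total = sum(pulses.values()) or 1
    return {name: 100.0 * n / total for name, n in pulses.items()}

def speaking_time(name: str) -> float:
    return pulses[name] * TICK_SECONDS  # seconds spoken
```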
## Accomplishments that we're proud of
This was an important problem that we all deeply cared about, since we saw people around us face it on a daily basis. We come from different backgrounds, but for this project we worked as one team, used our expertise, and learned what we weren't familiar with. We are so proud to have created a tool to make under-represented minorities, women, and new hires feel more included and involved. We see this product as a tool we'd love to use when we start our professional journeys: something that brings out the benefits of remote work while being tech that is humane and delightful to use.

## What's next for Vocal
Vocal is a B2B product that companies and organizations can purchase. The Chrome extension that shows meeting participation would be free for everyone. The dashboard and the analytics will be priced depending on the company. The insights that can be extracted from one data point (user participation) will help companies (HR & management) make their workplaces more inclusive and productive. The data can also be analyzed to promote inclusion initiatives and other events to support new hires, women, and under-represented minorities. We already have many use cases that were hard to build within the duration of the hackathon. Our next steps would be to create a mobile app, integrate more video calling platforms including Zoom, Microsoft Teams, and Devpost video calls, and implement chat features. We also see this helping in other industries like ed-tech, where teachers and students could benefit from active participation.

## References
1. <https://www.nytimes.com/2020/04/14/us/zoom-meetings-gender.html>
2. <https://www.nature.com/articles/nature.2014.16270>
3. <https://www.fastcompany.com/3030861/why-women-fail-to-speak-up-at-high-level-meetings-and-what-everyone-can-do-about>
4. <https://hbr.org/2014/06/women-find-your-voice>
5. <https://www.cnbc.com/2020/09/03/45percent-of-women-business-leaders-say-its-difficult-for-women-to-speak-up-in-virtual-meetings.html>
## Inspiration
It's easy to zone out in online meetings/lectures, and it's difficult to rewind without losing focus in the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we could just quickly skim through a list of keywords to immediately see what happened?

## What it does
Rewind is an intelligent, collaborative, and interactive web canvas with built-in voice chat that maintains a live-updated list of keywords summarizing the voice chat history. You can see timestamps for the keywords and click on them to reveal the actual transcribed text.

## How we built it
Communications: WebRTC, WebSockets, HTTPS. We used WebRTC, a peer-to-peer protocol, to connect the users through a voice channel, and we used WebSockets to update the web pages dynamically, so users get instant feedback on others' actions. Additionally, a web server is used to maintain stateful information.

For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition: Google Cloud Speech for live transcription and the Natural Language API for summarization. (A sketch of the keyword-extraction step follows below.)

## Challenges we ran into
There were many challenges that we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend; we spent more than 18 hours on it to arrive at a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved much trial and error, and the history-rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team, and we have learned a lot from this experience.

## Accomplishments that we're proud of
Despite all the challenges we ran into, we were able to deliver a working product with many different features. Although the final product is by no means perfect, we had fun working on it, utilizing every bit of intelligence we had. We were proud to have learned many new tools and gotten through all the bugs!

## What we learned
For the backend, the main thing we learned was how to use WebRTC, including client negotiation and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the WebSockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app. We also learned event delegation in JavaScript, which helped with an essential component of the frontend's history page.

## What's next for Rewind
We imagine a mini dashboard that also shows other live-updated information, such as sentiment and a summary of the entire meeting, as well as the ability to examine information on a particular user.
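A minimal sketch of the keyword-extraction step referenced above, assuming the Google Cloud Natural Language API's entity analysis; the salience threshold is a placeholder, not Rewind's tuned value:

```python
# Hypothetical sketch: turn a live transcript chunk into keywords.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def extract_keywords(transcript_chunk: str, min_salience: float = 0.02):
    doc = language_v1.Document(
        content=transcript_chunk,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    entities = client.analyze_entities(document=doc).entities
    # Keep the most salient entities as the chunk's keywords
    return [e.name for e in entities if e.salience >= min_salience]

# Each keyword would then be pushed over a WebSocket with its timestamp so
# clients can click it to reveal the underlying transcribed text.
```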
## Inspiration
Our goal as a team was to use emerging technologies to promote healthy living by championing the importance of running. The app connects to the user's Strava account, tracks the distances and routes that the user runs, and incentivizes this healthy living by rewarding users with digital artwork for their runs. There is a massive runners' community on Strava where users regularly share their running stats as well as route maps. Some enthusiasts also try to create patterns with their maps, as shown in this Reddit thread: <https://www.reddit.com/r/STRAVAart/>. Our app takes this route image and generates an NFT for the user to keep or sell on OpenSea.

## What it does
ruNFT's main goal is to promote a healthy lifestyle by incentivizing running. The app connects to users' Strava accounts and obtains their activity history. Our app uses this data to create an image of each run on a map and generates an NFT for the user. (A sketch of this route-to-image step appears at the end of this write-up.) Additionally, the app builds a community of health enthusiasts who can view and buy each other's NFT map collections. There are also daily, weekly, and all-time leaderboards that showcase the stats of the top performers on the app. Our goal is to use this leaderboard to derive value for the NFTs, as users with the best stats will receive rarer, more valuable tokens. Overall, this app serves as a platform for runners to share their stats, earn tokens for living a healthy lifestyle, and connect with other running enthusiasts around the world. With interest in NFTs booming in the blockchain market and many new individuals starting to collect them, runners can now use our app to create and access their NFTs while using them as motivation to improve their physical health.

## How we built it
The front-end was developed using Flutter. Initial sketches of how the user interface would look were conceptualized in Photoshop, where we decided on the color scheme and the layout. We took these designs to Flutter using some online tutorials as well as tips acquired from the DeltaHacks Flutter workshop. Most of the main components in the front-end were buttons, a header for navigation, and a form for some submissions regarding minting. The backend was hosted on Heroku and consisted of manipulating and providing data to/from the Strava API and our MongoDB database, with Express serving the data. We also automated the minting process in the backend using web3.js and the Alchemy API: we simply initiate the mintNFT method on our smart contract while passing a destination wallet address, which is how our users are able to view and receive their minted Strava activities.

## Challenges we ran into
One of the biggest challenges we ran into was merge conflicts. While GitHub makes it very easy to share and develop code with a group of people, it became hard to distribute who was coding what, often creating merge conflicts. This obstacle would frequently eat into our precious time, so our solution was a scrum process with two-hour sprints to develop specific features, meeting afterwards on Discord to keep ourselves organized. Other challenges included production problems with the Rinkeby testnet, whose servers were down for hours into Saturday, halting our progress significantly; we overcame that by finding creative ways to test our features in the local environment.
Finally, Flutter being new to us was a challenge of its own, and it became especially tricky when implementing some of the backend features from the Strava API.

## Accomplishments that we're proud of
We are really proud of how we used the emerging popularity of NFTs to promote running: users now have an incentive to go running and practice a healthier lifestyle, essentially giving running a value. We are also really proud of learning Flutter and the other technologies we used for development that we were not familiar with. As emerging software engineers, we understand that it will be very important to keep up with new software languages, technologies, and methodologies, and what we accomplished this weekend by building an app using something none of us knew proves we can continue to adapt and grow as developers.

## What we learned
The biggest point of learning for us was how to use Flutter for mobile app development, since none of us had used Flutter before. We were able to do research and learn how the Flutter environment works and how it can make it really easy to create apps. With our group's growing interest in NFTs and the NFT market, we also learned a few important things about creating and managing NFTs, and about what gives NFTs, or digital artwork generally, their value.

## What's next for ruNFT
There are many features that we would like to continue developing in the interface of the app itself. We believe that there is so much more the app can do for the user. One of our primary motives is to create a page that allows the user to see their own collection from within the app, as well as a blog-like feed where stories of running and the experiences of users can be posted. Since the app is focused around NFTs, we want to set up a place where NFTs can be sold and bought from within the app using current blockchain technologies and secure transactions. This would make it easier for newer users to buy and sell NFTs without needing to access other resources. All in all, we are proud of what we have accomplished, and with the constant changes in the markets and blockchain technologies, there are so many more new things for us to implement.
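A minimal sketch of the route-to-image step referenced earlier: fetch an activity from the Strava API, decode its summary polyline, and render the route shape as an image that could later be minted. The access token and activity ID are placeholders, not the team's exact backend code:

```python
# Hypothetical sketch of rendering a Strava route as NFT-ready artwork.
import matplotlib.pyplot as plt
import polyline  # pip install polyline
import requests

ACCESS_TOKEN = "STRAVA_OAUTH_TOKEN"  # placeholder

def render_route(activity_id: int, out_path: str = "route.png") -> None:
    resp = requests.get(
        f"https://www.strava.com/api/v3/activities/{activity_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    coords = polyline.decode(resp.json()["map"]["summary_polyline"])
    lats, lons = zip(*coords)

    fig, ax = plt.subplots(figsize=(6, 6))
    ax.plot(lons, lats, linewidth=3)
    ax.set_aspect("equal")
    ax.axis("off")  # just the route shape, ready for NFT artwork
    fig.savefig(out_path, bbox_inches="tight", transparent=True)
```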
winning
## Inspiration As our climate changes, we need to continue adopting sustainable technologies for energy conversion. As materials scientists, we want to leverage cutting-edge scientific approaches to find the right materials for these needs. To mitigate carbon pollution, industries need to transition to batteries, the most efficient of which are lithium-ion batteries. As researchers, we use first-principles atomistic modeling to understand materials properties, such as conductivity and stability; however, building these models is time-consuming and presents significant learning curves for non-computational scientists. To bridge the gap between ideation and discovery for new lithium-ion battery opportunities, we fine-tuned an LLM to automatically suggest structural input files for first-principles modeling techniques from natural text descriptions. Our web app allows users to visualize the suggested structure, which is further optimized to reflect a real-world material. We believe that our tool will help scientists and researchers discover the best lithium-ion battery electrolytes to compete in the energy industry! ## What it does Researchers input a new lithium-ion battery electrolyte material and its space group. A natural text description is generated via a rule-based crystallography tool. This description is fed into a Llama model that is fine-tuned to generate the initial suggested structure file (in VASP POSCAR format) for atomistic modeling. This structure is then relaxed using a DFT-based GNN model to reflect the closest physically possible structure. The LLM-suggested structure and the DFT-relaxed structure are visualized for comparison. Finally, the stability (i.e., its viability as a lithium-ion electrolyte) of the candidate material is calculated and depicted graphically, which a user can then compare to other materials' stabilities. ## How we built it We queried the Materials Project for a dataset of battery materials and released it publicly on the Hugging Face Hub. Using a Gaudi card from the Intel Cloud, we fine-tuned a Llama-2-7B model that converts natural text to modeling input files that reflect atomic positions (POSCAR files). We then used existing pre-trained GNN models (multi-atomic cluster expansion, AKA MACE) to relax the LLM-generated structure to be more physically reasonable. We built a Dash app for users to explore the lithium-ion battery dataset and our structures generated by LLMs. Our dataset and models can be found on the Hugging Face Hub! Dataset: [MaterialsAI/robocr\_poscar\_2col](https://huggingface.co/datasets/MaterialsAI/robocr_poscar_2col) Model: [MaterialsAI/robocr\_poscar\_2col\_llama](https://huggingface.co/MaterialsAI/robocr_poscar_2col_llama) We run our fine-tuned Llama text-to-POSCAR model on *one Intel Gaudi* at <http://146.152.224.107:8017/docs> We run our POSCAR-to-energy and optimization-trajectory model on *one Intel Gaudi* at <http://146.152.224.107:8017/docs> ## Challenges we ran into The complexity of materials datasets makes them not immediately usable for LLM inference, so this work required deep domain expertise in crystallography as well as careful LLM prompt engineering. DFT input files for complex materials contain numerous atomic coordinates that follow crystallographic rules and are set based on the symmetry of crystals; accurately setting these sites through text-based prediction is challenging for an LLM. This difficulty is why we included a relaxation step to modify the LLM-generated structures into something more physically reasonable.
## Accomplishments that we're proud of Contributing to the discovery of new lithium-ion battery materials to fight climate change. Fusing generative AI, LLMs, and complex atomistic modeling. Building an intuitive user interface that will help scientists overcome learning barriers for materials modeling. Developing a complex end-to-end pipeline, from unstructured text input to LLM to GNN to final materials visualization and evaluation, that is usable by scientists. ## What we learned Generative AI can be effectively adopted into specific scientific domains. LLMs can be used as a starting point to generate possible materials structures, even if they require further optimization. We learned how to leverage the Intel Developer Cloud and Gaudi cards to fine-tune LLMs and perform further inference. ## What's next for Batteries by LLM We want to extend our pipeline beyond Li-ion materials to Na- and Mg-based batteries, as well as other sustainable technology sectors like solar energy and soft materials.
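For reference, here is a minimal sketch of the relaxation step of the pipeline, assuming the ase and mace-torch packages; the file names are hypothetical stand-ins for the LLM-suggested structure:

```python
# Sketch of the relaxation step: read an LLM-suggested POSCAR, relax it with
# a pretrained MACE foundation potential, and save the result. Assumes the
# ase and mace-torch packages; file names are hypothetical.
from ase.io import read, write
from ase.optimize import BFGS
from mace.calculators import mace_mp

atoms = read("llm_candidate.vasp", format="vasp")  # LLM-generated structure
atoms.calc = mace_mp(model="small")                # pretrained MACE potential

BFGS(atoms).run(fmax=0.05)  # relax until the max force falls below 0.05 eV/Å

write("relaxed.vasp", atoms, format="vasp")
print("Relaxed energy (eV):", atoms.get_potential_energy())
```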
## Inspiration While we were thinking about the sustainability track, we realized that some of the biggest challenges faced by humanity are carbon emissions, global warming, and climate change. According to Dr. Fatih Birol, IEA Executive Director - *"Global carbon emissions are set to jump by 1.5 billion tonnes this year. This is a dire warning that the economic recovery from the Covid crisis is currently anything but sustainable for our climate."* With this concern in mind, we decided to work on a model of a small, compact carbon-capturing system that could reduce the carbon footprint around the world. ## What it does The system is designed to capture CO2 directly from the atmosphere using microalgae as our biofilter. ## How we built it Our plan was to first develop a design that could house the microalgae. We designed a chamber in Fusion 360, which we later 3D printed. Air from the surroundings is directed into the algal chamber using an aquarium aerator. The pumped-in air enters the algal chamber through an air-stone bubble diffuser, which breaks the air into smaller bubbles. These smaller air bubbles make CO2 sequestration easier by giving the microalgae more time to act on them. We made a spiral design inside the chamber so that the bubbles travel upward through it in a spiral fashion, giving the microalgae even more contact time. In due course, this continuous process captures CO2 and produces oxygen. ## Challenges we ran into 3D printing the parts of the chamber within the specified time. Getting our hands on enough microalgae to fill up the entire system in its optimal growth period (log phase) for the best results. Making the chamber leak-proof. ## Accomplishments that we're proud of The hardware we were able to design and build in the stipulated time. Developing a system that could actually bring down CO2 levels by utilizing the unique abilities of microalgae. ## What we learned We came across a lot of research papers describing the best use of microalgae for capturing CO2. Time management: we learned to design and develop a system from scratch in a short period. ## What's next for Aria We plan to conduct more research using microalgae and enhance the design of the existing system so that we can increase its carbon-capture efficiency. Keeping in mind deteriorating indoor air quality, we also plan to integrate it with inorganic air filters so that it could help improve overall indoor air quality. We also plan to conduct research to find out how much area one unit of Aria can cover.
# LLaMP - Large Language model Made Powerful 🦙🔮 ### **Introducing LLaMP: Large Language model Made Powerful 🚀** We are sorry! LLaMP is actually a homonym of **Large Language model [Materials Project](https://materialsproject.org)**. 😉 It empowers LLMs with scientific knowledge and reduces the likelihood of hallucination for materials data distillation. LLaMP is a web-based assistant that allows you to explore and interact with materials data in a conversational and intuitive manner. It integrates the power of the Materials Project API and the intelligence of OpenAI's GPT-3.5 to offer a comprehensive and user-friendly solution for discovering and understanding computational materials data based on quantum mechanical calculations. **Click [here](https://docs.google.com/presentation/d/e/2PACX-1vR1LjNO2gp_jVUkIX4qxdkAC0Q9PJ4c2vOvNY2HP6-HjlZCOAdiciw8yTpgZvpw9-a9tF7qT8oC6ntV/pub?start=false&loop=false&delayms=3000) for introduction slides.** [![](https://matsci.org/uploads/default/original/1X/2cd38ebab6f6a0d889a744bcfde93f3b6f55a3bf.png)](https://materialsproject.org) [![](https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Fwww.jorgecantero.es%2Fwp-content%2Fuploads%2F2023%2F04%2Flangchain.png&f=1&nofb=1&ipt=d6b329023cab24afd6e940c0a602c74f733998debc66b673e1e0c2d476bc2917&ipo=images)](https://github.com/langchain-ai/langchain) [![](https://external-content.duckduckgo.com/iu/?u=https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2Ftse4.mm.bing.net%2Fth%3Fid%3DOIP.waoGGtrsqNlS8r6UtDJ-wQHaEK%26pid%3DApi&f=1&ipt=b4b31de0a557c40d6f7f6ad597037aeedced420ed54a13a4dc38f9b51ac77da3&ipo=images)](https://openai.com/) [![](https://raw.githubusercontent.com/janosh/elementari/main/static/favicon.svg)](https://elementari.janosh.dev/)[![](https://raw.githubusercontent.com/sveltejs/branding/master/svelte-horizontal.svg)](https://svelte.dev//) ## 🔮 Introduction Discovering and understanding materials is a cornerstone of innovation across various industries, from electronics and energy to healthcare and beyond. However, navigating the vast landscape of materials data and scientific information can be a challenging task. That's where LLaMP steps in – a groundbreaking smart agent designed to revolutionize the way we explore and interact with materials information. LLaMP seamlessly integrates the power of the Materials Project API and the intelligence of OpenAI's GPT-3.5 to offer a comprehensive and intuitive solution for exploring, querying, and understanding materials data. Whether you're a seasoned materials scientist or an enthusiast curious about the properties of different materials, LLaMP empowers you with a dynamic and user-friendly platform to uncover valuable insights and answers. **🔎 Key Features of LLaMP** 1. **Natural Language Interaction:** Say goodbye to complex queries and technical jargon. LLaMP understands human language, allowing you to communicate your materials-related questions in a conversational and intuitive manner. 2. **Expertly Curated Data:** By harnessing the capabilities of the Materials Project API, LLaMP provides access to a vast repository of materials data, including **composition**, **crystal structures**, **magnetism**, **synthesis recipes**, and more. 3. **Intelligent Responses:** Powered by OpenAI's GPT-3.5, LLaMP not only retrieves data but also delivers insightful and informative responses in plain language, making complex materials concepts easy to comprehend. 4.
**Effortless Exploration:** Whether you're seeking materials with specific properties, analyzing trends, or comparing compositions, LLaMP streamlines the exploration process, ensuring you find the information you need quickly. 5. **Custom Functionality:** LLaMP's innovative design enables you to leverage predefined functions tailored to materials research. These functions allow you to retrieve, filter, and analyze materials data in a structured and efficient manner. 6. **Personalized Experience:** LLaMP adapts to your preferences, learning from each interaction to provide increasingly accurate and relevant responses over time. 7. **Seamless Integration:** As a web-based assistant, LLaMP is accessible from anywhere, eliminating the need for complicated installations or setups. Whether you're a researcher, engineer, student, or anyone with a curiosity about materials, LLaMP is your indispensable companion on the journey of material exploration. It transforms the way we access and engage with materials data, making the pursuit of scientific knowledge more accessible and enjoyable than ever before. Experience the future of materials exploration with LLaMP – your intelligent guide to the world of materials science and discovery.
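To give a feel for the structured lookups LLaMP issues under the hood, here is a minimal sketch of a Materials Project query, assuming the official mp-api client; the API key, formula, and field list are illustrative placeholders rather than LLaMP's actual internals:

```python
# Minimal sketch of a Materials Project query like the ones LLaMP issues
# behind the scenes. Assumes the mp-api client package; "YOUR_API_KEY" is a
# placeholder for a real Materials Project API key.
from mp_api.client import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # Look up summary data for a well-known cathode material.
    docs = mpr.materials.summary.search(
        formula="LiFePO4",
        fields=["material_id", "band_gap", "is_stable"],
    )
    for doc in docs:
        print(doc.material_id, doc.band_gap, doc.is_stable)
```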
partial
## Inspiration Our idea was inspired by our group's shared interest in musical composition, as well as our interest in AI models and their capabilities. The concept that inspired our project was: "*What if life had a soundtrack?*" ## What it does AutOST generates and produces a constant stream of original live music designed to automatically adjust to and accompany any real-life scenario. ## How we built it We built our project in Python, using the Mido library to send note signals directly to FL Studio, allowing us to play constant audio without needing to export to a file. The whole program is linked up to a live video feed that uses Groq's computer vision API to determine the mood of an image and adjust the audio accordingly. ## Challenges we ran into The main challenge we faced in this project was making the generated music not only sound coherent and good, but also have the capability to adjust according to parameters. It turns out that generating music mathematically is more difficult than it seems. ## Accomplishments that we're proud of We're proud that our program's music sounds somewhat decent, and also that we were able to brainstorm a concept that (to our knowledge) has not seen much experimentation. ## What we learned We learned that music generation is much harder than we initially thought, and that AIs aren't all that great at understanding human emotions. ## What's next for AutOST If we continue work on this project post-hackathon, the next steps would be to expand its capabilities for receiving input, allowing it to do all sorts of amazing things, such as creating a dynamic soundtrack for video games, or integrating with smart headphones to create tailored background music that would let users feel as though they are living inside a movie.
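To show the idea, here is a minimal sketch of the note-sending loop, assuming the mido package with a python-rtmidi backend and a virtual MIDI port routed into FL Studio; the port name and mood-to-scale mapping are illustrative, not our exact generator:

```python
# Sketch of the note-sending loop, assuming the mido package with a
# python-rtmidi backend; the port name is a placeholder for a virtual MIDI
# port routed into FL Studio, and the mood-to-scale map is illustrative.
import time
import mido

SCALES = {
    "happy": [60, 62, 64, 65, 67, 69, 71],  # C major
    "tense": [60, 61, 63, 66, 68],          # darker, sparser note set
}

port = mido.open_output("FL Studio MIDI")   # placeholder port name
for note in SCALES["happy"]:
    port.send(mido.Message("note_on", note=note, velocity=80))
    time.sleep(0.25)
    port.send(mido.Message("note_off", note=note))
port.close()
```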
## Inspiration Small-scale braille printers cost between $1800 and $5000. We think that this is too much money to spend for simple communication, and it has acted as a barrier for many blind people for a long time. We plan to change this by offering a quick, affordable, precise solution to this problem. ## What it does This machine allows you to type a string (word) on a keyboard. The Raspberry Pi then identifies what was entered and controls the solenoids and servo to pierce the paper. The solenoids do the "printing" while the servo moves the paper. A close-up video of the solenoids running: <https://www.youtube.com/watch?v=-jSG96Br3b4> ## How we built it Using a Raspberry Pi B+, we created a script in Python that recognizes all keyboard characters (inputted as a string) and outputs the corresponding braille code. The Raspberry Pi is connected to 4 circuits with transistors, diodes, and solenoids/a servo motor. These circuits control how the paper is punctured (printed) and moved. The hardware we used was: 4x 1N4004 diodes, 3x ROB-11015 solenoids, 4x TIP102 transistors, a Raspberry Pi B+, Solarbotics' GM4 servo motor, its wheel attachment, a cork board, and a bunch of Lego. ## Challenges we ran into The project initially had many hardware/physical problems which caused errors while trying to print braille. The solenoids had to be in a specific position in order to pierce the paper. If the angle was incorrect, the pins would break off or the paper would stick to them. We also found that the paper would jam if there were no paper guards to hold it down. ## Accomplishments that we are proud of We are proud of being able to integrate hardware and software in our project. Despite being unfamiliar with any of the technologies, we were able to learn quickly and create a fun project that will make a difference in the world. ## What we learned None of us had any knowledge of Python, Raspberry Pi, or how solenoids functioned. Now that we have done this project, we are much more comfortable working with these things. ## What's next for Braille Printer We were only able to get one servo motor, which meant we could only move paper in one direction. We would like to use another servo in the future to be able to print across a whole page.
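As a rough sketch of the character-to-braille punching loop (assuming the RPi.GPIO library; the pin numbers, timings, and partial dot table are illustrative assumptions, not our exact wiring):

```python
# Sketch of the char-to-braille punching loop, assuming RPi.GPIO on a
# Raspberry Pi; pin numbers, timings, and the partial dot table are
# illustrative assumptions (one 3-solenoid column punched at a time).
import time
import RPi.GPIO as GPIO

SOLENOID_PINS = [17, 27, 22]  # one pin per dot in a column (placeholder pins)
BRAILLE = {                   # each letter = two columns of three dots
    "a": [(1, 0, 0), (0, 0, 0)],
    "b": [(1, 1, 0), (0, 0, 0)],
    "c": [(1, 0, 0), (1, 0, 0)],
}                             # extend for the full alphabet

GPIO.setmode(GPIO.BCM)
for pin in SOLENOID_PINS:
    GPIO.setup(pin, GPIO.OUT)

def punch_column(dots):
    """Fire the solenoids for one vertical column of dots."""
    for pin, dot in zip(SOLENOID_PINS, dots):
        GPIO.output(pin, dot)
    time.sleep(0.1)           # hold long enough to pierce the paper
    for pin in SOLENOID_PINS:
        GPIO.output(pin, 0)

try:
    for char in "abc":
        for column in BRAILLE[char]:
            punch_column(column)
            time.sleep(0.2)   # the servo would advance the paper here
finally:
    GPIO.cleanup()
```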
## Motivation Our motivation was a grand piano that has sat in our project lab at SFU for the past 2 years. The piano belonged to a friend of Richard Kwok's grandfather and was being converted into a piano-scroll-playing piano. We had an excessive number of piano scrolls acting as door stops, and we wanted to hear these songs from the early 20th century, so we decided to pursue a method to digitally convert the piano scrolls into digital copies of the songs. The system scrolls through the entire piano scroll and uses OpenCV to convert the scroll markings to individual notes. The array of notes is converted in near real time to a MIDI file that can be played once complete. ## Technology The scrolling through the piano scroll utilized a DC motor controlled by an Arduino via an H-bridge, with the scroll wrapped around a Microsoft water bottle. The notes were recorded using OpenCV on a Raspberry Pi 3, programmed in Python. The result was a matrix representing each frame of notes from the Raspberry Pi camera. This array was exported to a MIDI file that could then be played. ## Challenges we ran into The OpenCV pipeline required a calibration method to assure accurate image recognition. The external lighting conditions added extra complexity to the image recognition process. The lack of musical background among the members and the necessity to decrypt the piano scroll for the appropriate note keys was an additional challenge. The image recognition of the notes had to be dynamic for different orientations due to variable camera positions. ## Accomplishments that we're proud of The device works and plays back the digitized music. The design process was very fluid with minimal setbacks. The back-end processes were very well-designed with minimal fluids. Richard won best use of a sponsor technology in a technical pickup line. ## What we learned We learned how piano scrolls were designed and how they were written based on the desired tempo of the musician. Beginner musical knowledge relating to notes, keys and pitches. We learned about using OpenCV for image processing, and honed our Python skills while scripting the controller for our hack. As we chose to do a hardware hack, we also learned about the applied use of circuit design, H-bridges (L293D chip), power management, AutoCAD tools and rapid prototyping, friction reduction through bearings, and the importance of sheave alignment in belt-drive-like systems. We were also exposed to a variety of sensors for encoding, including laser emitters, infrared pickups, and light sensors, as well as PWM and GPIO control via an embedded system. The environment allowed us to network with and get lots of feedback from sponsors - many were interested to hear about our piano project and wanted to weigh in with advice. ## What's next for Piano Men Live playback of the system
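A simplified sketch of the scanline-to-MIDI idea, assuming opencv-python, numpy, and mido; the threshold, 88-key binning, and tick timing are illustrative assumptions rather than our exact calibration:

```python
# Sketch of the scroll-to-MIDI step, assuming opencv-python, numpy, and mido.
# The brightness threshold, 88-key binning, and tick timing are illustrative.
import cv2
import numpy as np
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

cap = cv2.VideoCapture(0)                  # Raspberry Pi camera feed
prev = np.zeros(88, dtype=bool)
for _ in range(1000):                      # bounded run for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    row = gray[gray.shape[0] // 2]         # one scanline across the scroll
    bins = np.array_split(row, 88)         # one bin per piano key
    now = np.array([b.mean() > 128 for b in bins])  # hole => bright => on
    for key in np.flatnonzero(now != prev):
        msg = "note_on" if now[key] else "note_off"
        track.append(mido.Message(msg, note=21 + int(key), time=30))
    prev = now

cap.release()
mid.save("scroll.mid")
```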
winning
## Inspiration To give workers at banks easy access to tax calculations. ## What it does Calculates the taxes for clients who earn a certain income. ## How we built it We started with a calculator that does basic operations. ## Challenges we ran into The code was not working even after getting it checked, and we took so much time researching the Python language that we did not get time to properly look into how we should move forward with the code. ## Accomplishments that we're proud of ## What we learned Spend less time researching the language and more time on how the code works. Also, ask for help when needed. ## What's next for python calculator
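For illustration, a minimal sketch of the kind of progressive tax calculation the app aims at; the brackets and rates below are made-up placeholders, not real tax law:

```python
# Minimal sketch of a progressive income-tax calculation; the brackets and
# rates are illustrative placeholders, not real tax law.
BRACKETS = [(50_000, 0.15), (100_000, 0.25), (float("inf"), 0.33)]

def income_tax(income: float) -> float:
    """Apply each rate only to the slice of income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(income_tax(75_000))  # 15% on the first 50k, 25% on the next 25k
```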
## Inspiration Many students (including us) are worried about their finances, such as student loans and how/where to save money. ## What it does A web app that allows you to log your spending and input your monthly income and savings plan into your portfolio. Using this, it provides a summary of your transactions as well as some useful statistics, including your remaining budget for the month, your total spending so far, and your categories sorted by most spent in. ## How we built it We built the front end using Bootstrap and the backend in Flask, with SQLAlchemy to store user and transaction data. ## Challenges we ran into We planned on utilizing React.js libraries to handle the front end, which would have covered the design and structure of the website as well as any graphs and charts we would need (essentially the user interface!). After spending hours on React, we realized that given our time frame, it was more realistic to create simple HTML files and add React.js functionality later when we wanted to show graphs and charts. Due to our limited time frame, we were able to successfully implement an HTML-based website, but the React.js graphs and charts were still being built. ## Accomplishments that we're proud of This is the first time any of us has created a web application, so we are proud to have learned the necessary frameworks and implemented them successfully. ## What we learned We learned to implement an HTML front end running off of a Python back end, tied together using Flask and supported by a SQLAlchemy database. ## What's next for Easy Money Incorporating more features, such as more ways to view the data (like graphs), plus email reminders and notifications about remaining allowance, overspending, payment due dates, and logging payments.
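As a sketch of the core idea, here is roughly how the transaction model and category summary could look with Flask and SQLAlchemy; the column names and route are illustrative, not our exact schema:

```python
# Minimal sketch of the transaction model and a summary route, assuming
# Flask and Flask-SQLAlchemy; the column names and route are illustrative.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///easymoney.db"
db = SQLAlchemy(app)

class Transaction(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    category = db.Column(db.String(40))
    amount = db.Column(db.Float)

with app.app_context():
    db.create_all()

@app.route("/summary")
def summary():
    # Total spending per category, sorted by most spent in.
    rows = (db.session.query(Transaction.category,
                             db.func.sum(Transaction.amount))
            .group_by(Transaction.category).all())
    rows.sort(key=lambda r: r[1] or 0, reverse=True)
    return {category: total for category, total in rows}
```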
## Inspiration Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses, such as trend recognition, on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this within the work of one of our sponsors, Capital One, which has volumes of financial transaction data that is very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data. ## What it does On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses. ## How we built it The front end was simple: we created the platform using Next.js and React.js and hosted it on Vercel. The back end was created using Python, in which we employed technologies such as Docker and MLBot to perform data analyses and return charts, which were then rendered on the front end using ApexCharts.js. ## Challenges we ran into * It was one of our first times working live with multiple people on the same project, which advanced our understanding of how Git's features work. * There was difficulty getting the Docker server to be publicly available to our front end, since the server was hosted locally on the back end. * Even once it was publicly available, it was difficult to figure out how to actually connect it to the front end. ## Accomplishments that we're proud of * We were able to create a full-fledged, functional product within the allotted time we were given. * We utilized our knowledge of how APIs work to incorporate multiple of them into our project. * We worked positively as a team even though we had not met each other before. ## What we learned * How to incorporate multiple APIs into one product with Next. * A new tech stack. * How to work simultaneously on the same product with multiple people. ## What's next for DataDaddy ### Short Term * Add more diverse applicability to different types of datasets and statistical analyses. * Add more compatibility with SQL/NoSQL commands from natural language. * Attend more hackathons :) ### Long Term * Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results. * Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
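A minimal sketch of an upload-and-analyze endpoint of the kind described above, assuming Flask and pandas; the route name, form field, and the simple trend proxy are illustrative, not our production pipeline:

```python
# Sketch of an upload-and-analyze endpoint, assuming Flask and pandas; the
# route, form field name, and the crude trend proxy are illustrative.
import pandas as pd
from flask import Flask, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    df = pd.read_csv(request.files["dataset"])  # uploaded .csv file
    numeric = df.select_dtypes("number")
    idx = pd.Series(range(len(df)))
    return {
        "rows": len(df),
        "columns": list(df.columns),
        # Correlation of each numeric column with row order as a crude trend.
        "trends": {c: float(numeric[c].corr(idx)) for c in numeric.columns},
    }
```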
losing
## Inspiration We are all fans of Carrot, the app that rewards you for walking around. The gamification and point system of Carrot really contributed to its success. Carrot targeted the sedentary and unhealthy lives we were leading and tried to fix that. So why can't we fix our habit of polluting and creating greenhouse gas footprints using the same method? That's where Karbon comes in! ## What it does Karbon gamifies reducing your CO₂ emissions each day. The more you reduce your carbon footprint, the more points you earn. Users can then redeem these points at eco-friendly partners to either get discounts or buy items completely for free. ## How we built it The app is created using Swift and SwiftUI for the user interface. We also used HTML, CSS and JavaScript to make a web app that shows the same information. ## Challenges we ran into Initially, when coming up with the idea and the economy of the app, we had difficulty modelling how points would be distributed by activity. Additionally, coming up with methods to track CO₂ emissions conveniently became an integral challenge in ensuring a clean and effective user interface. As for technicalities, cleaning up the UI was a big issue: a lot of our time went into creating the app, as we did not have much experience with the language. ## Accomplishments that we're proud of * Displaying the data using graphs * Implementing animated graphs ## What we learned * Using animation in Swift * Making Swift apps * Making dynamic lists * Debugging unexpected bugs ## What's next for Karbon A fully functional web app along with proper back-and-forth integration with the app.
## Overview We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn't get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would on a set of Smart Glasses. ## Inspiration Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others' emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let's see if we could use our weekend to build something that can help out! ## What it does SmartEQ determines the mood of the person in frame, based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with. ## How we built it The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis on the resulting text, and the Face API to predict the emotion of the person in frame. ## Challenges we ran into Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon, and they both came solo. Together, we ended up forming a pretty awesome team :D. But that's not to say everything went as perfectly as our newfound friendships did. Newton and Lee built the front end while Max and Zarif built the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests our free accounts permitted, encountered very weird Socket.IO bugs that broke our entire hack, and had to make sure Max didn't drink less than 5 Red Bulls per hour. ## Accomplishments that we're proud of We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person's emotions using just their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment-analysis to develop a holistic approach to interpreting a person's emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency. ## What we learned We learnt about web sockets, how to develop a web app using web sockets, and how to debug web socket errors.
We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral score, which makes no sense whatsoever because dogs are definitely way cuter than cats. (Zarif strongly disagrees) ## What's next for SmartEQ We would deploy our hack onto real Smart Glasses :D. This would let us deploy our tech in real life, first with small sample groups to figure out what works and what doesn't, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma and can leave them unable to process facial expressions. In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees' emotional well-being at work and implement initiatives to help if their employees are not happy.
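As an illustration of the fusion step described above, here is a minimal sketch of one way to blend the two signals; the weighting and label mapping are illustrative assumptions, and the input scores would come from the Azure Face and Text Analytics responses:

```python
# Sketch of fusing face-emotion probabilities with text sentiment scores.
# The 0.6 face weight and the sentiment-to-emotion mapping are illustrative.
def fuse(face_scores: dict, text_sentiment: dict, w_face: float = 0.6):
    """Blend the two score dictionaries and return the top emotion."""
    # Map sentiment labels onto the same space as the Face API emotions.
    text_as_emotion = {
        "happiness": text_sentiment.get("positive", 0.0),
        "sadness": text_sentiment.get("negative", 0.0),
        "neutral": text_sentiment.get("neutral", 0.0),
    }
    labels = set(face_scores) | set(text_as_emotion)
    fused = {lab: w_face * face_scores.get(lab, 0.0)
                  + (1 - w_face) * text_as_emotion.get(lab, 0.0)
             for lab in labels}
    best = max(fused, key=fused.get)
    return best, fused[best]  # most probable emotion and its confidence

print(fuse({"happiness": 0.7, "neutral": 0.3},
           {"positive": 0.9, "neutral": 0.1}))
```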
## Inspiration **DronAR** was inspired by a love of cool technology. Drones are hot right now, so the question is: why not combine them with AR? The result is an awesome product that makes drone management more **visually intuitive**, letting users interact with drones in ways never done before. ## What it does **DronAR** allows users to view realtime information about their drones, such as positional data and status. Using this information, users can make on-the-spot decisions about how to interact with their drone. ## How I built it Unity + Vuforia for AR. Node + Socket.IO + Express + Azure for the backend. ## Challenges I ran into C# is *beautiful* ## What's next for DronAR Adding SLAM in order to make it easier to interact with the AR items.
winning
## What it does Savings.io is a web app designed to strategically help students and others set aside a small amount of money each month towards an emergency savings fund, using an extensive cost-of-living dataset. It also advocates for financial investment education by providing resources and links to guides and articles. Finally, it returns US securities determined by a sentiment analysis model (VADER) to be performing "well", which acts as a good starting point for beginner traders. ## How we built it Savings.io was built in Python using Google Colab and PyCharm over a 36-hour period. ## Challenges we ran into The main challenge was finding a suitable database with enough information to query in order to support our budgeting backend. We required an extensive cost-of-living dataset, which was uncommon on the internet. ## Accomplishments that we're proud of Integrating a working backend (specifically our budgeting backend and the investment portfolio backend) with an aesthetically pleasing frontend developed on the Django framework was something new to our entire team. Despite the learning curve, we are quite happy with the result! ## What we learned As mentioned, it was our first time completing a project from start to finish, including both backend requests and frontend displays. We learned quite a bit about the Django framework and API calls, which will definitely come in handy in the future. ## What's next for Savings.io Due to the time constraint, some of our planned features were not implemented. Notably, we wanted to incorporate a student's existing trading experience when returning an investment portfolio and finance resources; our intention was a customizable algorithm based on the user's skill level. Furthermore, UI improvements for the investment portfolio were planned, such as plots that show historical data for the recommended tickers, or areas for links to company financial statements. Additionally, we hope to include more parameters in the budgeting portion of the program, such as a student's parental support/income, which would help us provide a more accurate cost of living and thus a better suggestion for the amount of money to save towards the emergency fund.
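For illustration, a minimal sketch of the VADER screening step, assuming the vaderSentiment package; the headlines and the 0.05 cutoff are placeholders, not our actual data feed:

```python
# Sketch of the VADER screen over stock headlines, assuming the
# vaderSentiment package; the headlines and cutoff are illustrative.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

headlines = {
    "AAPL": "Apple beats earnings expectations on strong iPhone sales",
    "XYZ": "XYZ Corp faces lawsuit after disappointing quarter",
}

analyzer = SentimentIntensityAnalyzer()
picks = []
for ticker, text in headlines.items():
    score = analyzer.polarity_scores(text)["compound"]  # ranges -1 .. 1
    if score > 0.05:            # conventional "positive" threshold
        picks.append((ticker, score))

print(sorted(picks, key=lambda p: p[1], reverse=True))
```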
## Inspiration Many students (including us) are worried about their finances, such as student loans and how/where to save money. ## What it does A web app that allows you to log your spending and input your monthly income and savings plan into your portfolio. Using this, it provides a summary of your transactions as well as some useful statistics, including your remaining budget for the month, your total spending so far, and your categories sorted by most spent in. ## How we built it We built the front end using Bootstrap and the backend in Flask, with SQLAlchemy to store user and transaction data. ## Challenges we ran into We planned on utilizing React.js libraries to handle the front end, which would have covered the design and structure of the website as well as any graphs and charts we would need (essentially the user interface!). After spending hours on React, we realized that given our time frame, it was more realistic to create simple HTML files and add React.js functionality later when we wanted to show graphs and charts. Due to our limited time frame, we were able to successfully implement an HTML-based website, but the React.js graphs and charts were still being built. ## Accomplishments that we're proud of This is the first time any of us has created a web application, so we are proud to have learned the necessary frameworks and implemented them successfully. ## What we learned We learned to implement an HTML front end running off of a Python back end, tied together using Flask and supported by a SQLAlchemy database. ## What's next for Easy Money Incorporating more features, such as more ways to view the data (like graphs), plus email reminders and notifications about remaining allowance, overspending, payment due dates, and logging payments.
## Overview We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn't get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would on a set of Smart Glasses. ## Inspiration Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others' emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let's see if we could use our weekend to build something that can help out! ## What it does SmartEQ determines the mood of the person in frame, based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with. ## How we built it The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis on the resulting text, and the Face API to predict the emotion of the person in frame. ## Challenges we ran into Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon, and they both came solo. Together, we ended up forming a pretty awesome team :D. But that's not to say everything went as perfectly as our newfound friendships did. Newton and Lee built the front end while Max and Zarif built the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests our free accounts permitted, encountered very weird Socket.IO bugs that broke our entire hack, and had to make sure Max didn't drink less than 5 Red Bulls per hour. ## Accomplishments that we're proud of We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person's emotions using just their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment-analysis to develop a holistic approach to interpreting a person's emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency. ## What we learned We learnt about web sockets, how to develop a web app using web sockets, and how to debug web socket errors.
We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral score, which makes no sense whatsoever because dogs are definitely way cuter than cats. (Zarif strongly disagrees) ## What's next for SmartEQ We would deploy our hack onto real Smart Glasses :D. This would let us deploy our tech in real life, first with small sample groups to figure out what works and what doesn't, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma and can leave them unable to process facial expressions. In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees' emotional well-being at work and implement initiatives to help if their employees are not happy.
losing
## Inspiration While there are several applications that use OCR to read receipts, few take the leap towards informing consumers about their purchase decisions. We decided to capitalize on this gap: we currently provide information to customers about the healthiness of the food they purchase at grocery stores by analyzing receipts. To encourage healthy eating, we also donate a portion of the total value of healthy food to a food-related non-profit charity in the United States or abroad. ## What it does Our application uses Optical Character Recognition (OCR) to capture items and their respective prices on scanned receipts. We then parse through these words and numbers using an advanced Natural Language Processing (NLP) algorithm to match grocery items with their nutritional values from a database. By analyzing the amount of calories, fats, saturates, sugars, and sodium in each grocery item, we determine whether the food is relatively healthy or unhealthy. Then, we calculate the amount of money spent on healthy and unhealthy foods, and donate a portion of the total healthy value to a food-related charity. In the future, we plan to run analytics on receipts from other industries, including retail, clothing, wellness, and education, to provide additional information on personal spending habits. ## How We Built It We use AWS Textract and the Instabase API for OCR to analyze the words and prices in receipts. After parsing out the purchases and prices in Python, we used Levenshtein-distance optimization for text classification to associate grocery purchases with nutritional information from an online database. Our algorithm utilizes Pandas to sort the nutritional facts of food and determine whether grocery items are healthy or unhealthy by calculating a "healthiness" factor based on calories, fats, saturates, sugars, and sodium. Ultimately, we output the amount of money spent in a given month on healthy and unhealthy food. ## Challenges We Ran Into Our product relies heavily on the capabilities of OCR APIs such as Instabase and AWS Textract to parse the receipts that we use as our dataset. While both of these APIs are built on finely-tuned algorithms, the accuracy of OCR parsing was lower than desired due to abbreviations for items on receipts, brand names, and low-resolution images. As a result, we had to dedicate a significant amount of time to expanding abbreviations of words and then matching them to a large nutritional dataset. ## Accomplishments That We're Proud Of Project Horus has the capability to utilize powerful APIs from both Instabase and AWS to solve the complex OCR problem of receipt parsing. By diversifying our software, we were able to glean useful information and higher accuracy from both services to further strengthen the project, which leaves us with a unique dual capability. We are exceptionally satisfied with our solution's food health classification. While our algorithm does not always identify the exact food item on the receipt due to truncation and OCR inaccuracy, it still matches items to substitutes with similar nutritional information. ## What We Learned Through this project, the team gained experience with developing on APIs from Amazon Web Services. We found Amazon Textract extremely powerful and integral to our work of reading receipts. We were also exposed to the power of natural language processing and its applications in bringing ML solutions to everyday life.
Finally, we learned about combining multiple algorithms in sequential order to solve complex problems, which placed an emphasis on modularity, communication, and documentation. ## The Future Of Project Horus We plan on using our application and algorithm to provide analytics on receipts from outside the grocery industry, including the clothing, technology, wellness, and education industries, to improve spending decisions among average consumers. Additionally, this technology can be applied to manage the finances of startups and analyze the spending of small businesses in their early stages. Finally, we can improve the individual components of our model to increase accuracy, particularly text classification.
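As an illustration of the matching and scoring described above, here is a minimal sketch assuming the python-Levenshtein package; the tiny nutrition table, weights, and threshold are made-up placeholders, not our real database:

```python
# Sketch of item matching and health scoring, assuming python-Levenshtein;
# the nutrition table, weights, and threshold are illustrative placeholders.
import Levenshtein

NUTRITION = {  # per-100g values: calories, fat, sugar, sodium
    "whole milk": (61, 3.3, 5.1, 40),
    "potato chips": (536, 35.0, 0.3, 530),
    "apple": (52, 0.2, 10.4, 1),
}

def match_item(receipt_text: str) -> str:
    """Pick the nutrition entry with the smallest edit distance."""
    return min(NUTRITION,
               key=lambda k: Levenshtein.distance(receipt_text.lower(), k))

def healthiness(item: str) -> float:
    cal, fat, sugar, sodium = NUTRITION[item]
    # Lower is healthier; the weights here are an illustrative choice.
    return 0.01 * cal + 1.0 * fat + 0.5 * sugar + 0.005 * sodium

line = "WHL MILK 2L"  # abbreviated receipt text, as OCR often returns it
item = match_item(line)
print(item, "->", "healthy" if healthiness(item) < 10 else "unhealthy")
```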
## Inspiration We recognized how much time meal planning can take, especially for busy young professionals and students who have little experience cooking. We wanted to provide an easy way to buy healthy, sustainable meals for the week, without compromising the budget or harming the environment. ## What it does Similar to services like "Hello Fresh", this is a web app for finding recipes and having the ingredients delivered to your house. This is where the similarities end, however. Instead of shipping the ingredients to you directly, our app makes use of local grocery delivery services, such as the one provided by Loblaws. The advantages to this are two-fold: first, it helps keep the price down, as your main cost is the groceries themselves, instead of paying large fees to a meal kit company. Second, it is more eco-friendly. Meal kit companies traditionally repackage the ingredients in-house into single-use plastic packaging before shipping them to the user, along with large coolers and ice packs which mostly are never reused. Our app adds no additional packaging beyond what the groceries initially come in. ## How We built it We made a web app, with the client-side code written using React. The server was written in Python using Flask and hosted in the cloud using Google App Engine. We used MongoDB Atlas, also hosted on Google Cloud. On the server, we used the Spoonacular API to search for recipes, and Instacart for the grocery delivery. ## Challenges we ran into The Instacart API is not publicly available, and there are no public APIs for grocery delivery, so we had to reverse engineer this API to allow us to add things to the cart. The Spoonacular API was down for about 4 hours on Saturday evening, during which time we almost entirely switched over to a less functional API, before it came back online and we switched back. ## Accomplishments that we're proud of Created a functional prototype capable of facilitating the ordering of recipe ingredients through Instacart. Learning new skills, like Flask, Google Cloud and, for some of the team, React. ## What we've learned How to reverse engineer an API, using Python as a web server with Flask, Google Cloud, new APIs, MongoDB. ## What's next for Fiscal Fresh Add additional functionality on the client side, such as browsing by popular recipes
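For a feel of the recipe-search side, here is a minimal sketch of a Spoonacular request, assuming its public REST API; the API key and query parameters are placeholders:

```python
# Sketch of the recipe search call against the public Spoonacular REST API;
# "YOUR_API_KEY" is a placeholder and the query parameters are illustrative.
import requests

resp = requests.get(
    "https://api.spoonacular.com/recipes/complexSearch",
    params={
        "apiKey": "YOUR_API_KEY",   # placeholder
        "query": "pasta",
        "maxReadyTime": 30,         # weeknight-friendly recipes
        "number": 5,
    },
    timeout=10,
)
resp.raise_for_status()
for recipe in resp.json()["results"]:
    print(recipe["id"], recipe["title"])
```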
## Inspiration Donating to food banks can be challenging, especially when most rely on canned goods that leave communities without access to fresh food. Our platform changes that by allowing charities to request fresh foods and enabling users to scan their extra food to see if it's needed. Together, we **reduce** waste and help bring fresher options to those in need. ## What it does Our product starts by loading the user into the main interface, where they can see their lifetime donations, their statistics on donated products, and food banks. On the home page, there is a button to track new items you wish to donate. Clicking it lets you take a picture of a product; using object detection, the app automatically scans the item and determines how suitable it is for donation and whether any food banks currently need it. This makes the donation process incredibly quick and easy: the user can decide whether an item is worth donating given the current demand and its shelf life, and individuals are linked directly with charities, reducing the logistical burden that prevents charities from receiving fresh food. Lastly, food bank institutions are able to request specific items, which are then stored in the database. ## How We Built It We developed the front end using **React**, **JavaScript**, **Vite**, and **CSS** to create a smooth and interactive user experience. On the back end, we implemented **Python**, **Flask**, **Numpy**, and **PostgreSQL** for efficient data handling and API management. For the food item detection model, we utilized **YOLOv8**, training it with a custom dataset we specifically curated for maximum accuracy. Initially, we explored datasets on **Roboflow**, but found the resulting models offered lower prediction accuracy, so we opted to fine-tune YOLOv8 for better results. To enable real-time object detection, we integrated **OpenCV**. Once an item is detected, we use a simple ID lookup in the **PostgreSQL** database to quickly find matching items and respond to user or charity needs. ## Challenges we ran into The biggest challenge we faced was integrating the back end with the front end, specifically connecting **YOLOv8** to **React**. YOLOv8 is rarely used in combination with React, and as beginners, we had to build an API layer that communicated between **Flask** and the React components. This process consumed a significant portion of the project, delaying some of the front-end features we initially planned to implement. Throughout development, we often questioned whether the project would work at all, as we hadn't seen anything similar online, and our relative greenness left us uncertain. However, through persistence (and all-nighters), we managed to bring it all together and make it a success. ## Accomplishments that we're proud of We are proud of getting our webcam to do live detection, and especially of having it run with React. We are also happy with how sleek the website looks, considering React was new to most of us and the limited time we had. The custom dataset we curated is also something we value highly, as it is what gives our model the high standard of accuracy we strived for. ## What we learned The most valuable lessons we learned from this project extended beyond just mastering specific technologies; we gained a deep understanding of how different technologies interact with one another.
Most team members had experience in either front-end or back-end development, but not both. This project gave us comprehensive experience in creating integrated front-end and back-end applications. Additionally, many of us were new to **React**, making learning how it interacts with **Flask** one of our biggest learning curves. This experience not only broadened our technical skills but also opened doors to areas of development we had never explored before. Overall, this project was instrumental in enhancing our understanding of full-stack development. ## What's next for ScanForGood We plan to build further on the donor-charity relationship, allowing for communication between both ends. It would also be important to have direct contact with food banks so we can assist them in setting up donor accounts, letting them monitor and find items for donation. As a personal goal, we would like to recap and document the lessons learned through making this app.
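For reference, a minimal sketch of the live-detection loop, assuming the ultralytics and opencv-python packages; the weights file name is a placeholder for our fine-tuned model, and the print stands in for the Flask/PostgreSQL hand-off:

```python
# Sketch of the live-detection loop behind the scan feature, assuming the
# ultralytics and opencv-python packages; "food_items.pt" is a placeholder
# for our custom fine-tuned YOLOv8 weights.
import cv2
from ultralytics import YOLO

model = YOLO("food_items.pt")   # placeholder: fine-tuned on our dataset
cap = cv2.VideoCapture(0)

for _ in range(300):            # bounded loop for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        name = result.names[int(box.cls)]
        conf = float(box.conf)
        # In the real app this is POSTed to Flask and matched in PostgreSQL.
        print(f"detected {name} ({conf:.2f})")

cap.release()
```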
winning
## Inspiration The BITalino system is a great new advance in affordable, do-it-yourself biosignals technology. Using this technology, we want to make an application that provides an educational tool for exploring how the human body works. ## What it does Currently, it uses the ServerBIT architecture to get ECG signals from a connected BITalino and draw them in an HTML page in real time using JavaScript. In this hack, the smoothie.js library was used instead of jQuery flot to provide smoother plotting. ## How I built it I built the Lubdub Club using Hugo Silva's ServerBIT architecture. From that, the ECG data was drawn using smoothie.js. A lot of work was put into making a good and accurate ECG display, which is why smoothie was used instead of flot. Other work involved adjusting for the correct ECG units and optimizing the scroll speed and scale of the plot. ## Challenges I ran into The biggest challenge we ran into was getting the Python API to work. There are a lot more dependencies for it than are written in the documentation, possibly because I was using a regular Python installation on Windows. I installed WinPython to make sure most of the math libraries (pylab, numpy) were installed, and installed everything else afterwards. In addition, there is a problem with the server where the TCP listener will not close properly, which caused a lot of trouble in testing. Apart from that, getting a good ECG signal was very challenging, as testing was done using electrode leads on the hands, which admittedly gives a signal quite susceptible to interference (both from surrounding electronics and from movement). Although we never got an ECG signal close to the ones in the demos online, we did end up with a signal that was definitely an ECG and had recognizable PQRS phases. ## Accomplishments that I'm proud of I am proud that we were able to get the Python API working with the BITalino, as it seems that many others at Hack Western 2 were unable to. In addition, I am happy with the way the smoothie.js plot came out, and I think it is a great improvement over the original flot plot. Although we did not have time to set up a demo site, I am quite proud of the name our team came up with (lubdub.club). ## What I learned I learned a lot about JavaScript, jQuery, Python, and getting ECG signals from less-than-optimal electrode configurations. ## What's next for Lubdub Club What's next is to implement some form of wave-signal analysis to clean up the ECG waveform, and to perform calculations to find values like heart rate. Also, I would like to make the Python API / ServerBIT easier to use (maybe rewrite it from scratch, or at least collect all the dependencies in an installer). Other plans include adding more features to the HTML site, like changing colour to match heart rate, music, and more educational content. I would like to set up lubdub.club, and maybe find a way to have the data from the BITalino sent to the cloud and then displayed on the webpage.
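For anyone trying to reproduce the acquisition side, here is a minimal sketch using the official BITalino Python API (bitalino package); the MAC address is a placeholder, and the ADC-to-millivolt conversion follows the published ECG transfer function (3.3 V supply, 10-bit ADC, gain 1100):

```python
# Minimal sketch of reading ECG samples with the BITalino Python API
# (bitalino package); the MAC address is a placeholder, and the transfer
# function below is the one from the BITalino ECG datasheet.
from bitalino import BITalino

device = BITalino("98:D3:31:B2:BB:9B")  # placeholder MAC address
device.start(1000, [1])                 # 1000 Hz on one analog channel

try:
    for _ in range(10):                 # read ten 100-sample chunks
        samples = device.read(100)
        ecg_raw = samples[:, -1]        # last column is the analog channel
        ecg_mv = ((ecg_raw / 2**10) - 0.5) * 3.3 / 1100 * 1000  # ADC -> mV
        print(round(float(ecg_mv.mean()), 3))
finally:
    device.stop()
    device.close()
```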
## Inspiration * The COVID-19 pandemic has bolstered an epidemic of anxiety among students * Frequent **panic attacks** are a symptom of anxiety * In the moment, panic attacks are frightening and crippling * In a time of isolation, Breeve is designed to improve users' mental health by identifying and helping them when they are experiencing panic attacks ## What it does * A heart rate monitor detects a significant increase in heart rate (indicative of a panic attack) * The Arduino sends a signal to initiate the "breathing routine" * A Google Chrome extension (after getting a message from the Arduino) opens a new tab with our webpage on it * Our webpage has a serene background, comforting words, and a moving cloud to help people focus on breathing and relaxing ## How we built it * The Chrome extension and website are built in HTML, CSS, and JavaScript * The heart rate monitor is comprised of an Arduino UNO microcontroller, a heart rate sensor (we substituted a potentiometer since we don't own a heart rate sensor) and a breadboard circuit ## Challenges we ran into * As this is our first hardware hack, we struggled with connecting the hardware and software. We were unable to use the "Keyboard()" Arduino library to let the Arduino initialize the Chrome extension, and we struggled with using other technologies like Firebase to connect the Arduino sensor input to the Chrome extension's output. This is something we plan to learn about for future improvements to Breeve and future hackathons. ## Accomplishments that we're proud of * This is our first hardware hack! ## What we learned * Kirsten learned a lot about Arduino and breadboarding (e.g., how to hook up a potentiometer) * Lavan learned about CSS animations and how a database could be used in the future to connect various input and output sources ## What's next for Breeve * More personalized → add a prompt to phone a friend or take anxiety medication (if applicable) * Better sensor data (e.g. webcam, temperature sensor) to make a more informed diagnosis * Improved webpage (adding calming music in the background to create a safe, happy atmosphere)
## Inspiration We wanted to create a hack that combined IoT with a medical-centric focus, where we could learn more about Arduino and backend technologies. ## What it does Detects and collects a pulse and displays a BPM measurement. ## How we built it We built it with an ESP8266 (Arduino IDE) and a heartbeat sensor, and used C++ to program the software. ## Challenges we ran into We ran into challenges initially setting up the hardware, and later ran into communication issues where we had trouble finding ways to connect the ESP8266 (Arduino IDE) and communicate the collected data to Firebase for use in a software interface. ## Accomplishments that we're proud of We are proud of working on new technologies that we had not used before, which were outside our comfort zone, and executing on them to the best of our abilities. ## What we learned We learned that working with hardware can be finicky, and that working with Arduino interfaces can be more difficult than we expected! ## What's next for Node MCU Heartbeat We will hopefully be able to upload our datasets to Firebase or another cloud interface so that we can use the data in a machine learning algorithm to detect abnormalities and inform paramedics in advance.
partial
## Inspiration
When coming up with the idea for our hack, we realized that as engineering students, and specifically first-year students, we all had one big common problem... time management. We all somehow manage to run out of time and procrastinate our work, because it's hard to find the motivation to get tasks done. Our solution to this problem is an app that lets you make to-do lists, but with a twist.

## What it does
The app allows users to make to-do lists, but each task is assigned a number of points you receive on completion. Earning points allows you to climb leaderboards, unlock character accessories, and most importantly, unlock new levels of a built-in game. The levels of the built-in game are short, so as not to take away too much studying time, but they act as a reward system for people who love gaming. The app also has a feature where you can take pictures of your tasks as you're completing them, which you can share with friends on the app or archive for yourself to see later. The app includes a Pomodoro timer to promote studying, and a forum page where you can discuss various educational topics with other users to further enhance your learning experience.

## How we built it
Our prototype was built in HTML using a very basic outline. Ideally, if we were to go further with this app, we would use a framework such as Django or Flask to add many more features than this first prototype has (a minimal sketch of what the Flask direction could look like follows below).

## Challenges we ran into
We are *beginners*!! This was a first hackathon for almost all of us, and we all had very limited coding knowledge previously, so we spent a lot of time learning new applications and skills, and didn't get much time to actually build our app.

## Accomplishments that we're proud of
Learning new applications! We went through many different tools over the past 24 hours before landing on HTML to make our app with. We looked into Django, Flask, and Pygame before deciding on HTML, so we gained some experience with these as well.

## What we learned
We learned a lot over the weekend from various workshops and hands-on personal experience. A big thing we learned is how many components go into web development and how complicated it can get. This was a great insight into the world of real coding, and an application of coding that is sure to stick with us and keep us motivated to keep teaching ourselves new things!

## What's next for Your Future
Hopefully, in the future we're able to further develop Your Future to make it complete and make it run the way we hope. This will involve a lot of time and dedication to learning new skills, but we hope to put in the effort to learn them!
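As a taste of that direction, here is a minimal, hypothetical Flask sketch of a points-based to-do API; the routes, fields, and in-memory storage are all assumptions rather than anything from the prototype:

```python
# Minimal Flask sketch of a points-based to-do list; in-memory storage
# stands in for a real database. Routes and fields are illustrative.
from flask import Flask, jsonify, request

app = Flask(__name__)
tasks = []            # each task: {"id", "title", "points", "done"}
score = {"points": 0}

@app.post("/tasks")
def add_task():
    body = request.get_json()
    task = {"id": len(tasks), "title": body["title"],
            "points": int(body.get("points", 10)), "done": False}
    tasks.append(task)
    return jsonify(task), 201

@app.post("/tasks/<int:task_id>/complete")
def complete_task(task_id):
    task = tasks[task_id]
    if not task["done"]:
        task["done"] = True
        score["points"] += task["points"]   # points unlock levels/cosmetics
    return jsonify({"score": score["points"]})

if __name__ == "__main__":
    app.run(debug=True)
```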
# mantis-ocr
A Chrome web extension that uses Microsoft Azure's computer vision API to help people who are visually impaired use images on the internet.

This extension does two main things:

* read the text in an image -- this is done using Microsoft Azure's OCR API
* offer a brief description of what's in the image

Note that instead of audio, it returns a popup containing plaintext. Almost all computers have some version of a screen reader, such as Narrator (Windows), which can read plaintext. The intention is for that narration software to read the plaintext in the popup. We made this decision because people have different preferences for voice speed and sound.

There is software out there that does similar things, but it either requires downloading the image first or only works on mobile. With Mantis OCR, the only thing you have to download is the extension, and, while intended for Windows, it should work on anything with Chrome and some sort of narration tool. It's designed to be as simple as possible and to fit into software that most people already have.

## Getting Started
### Setup (during testing/dev phase)
1. Go to chrome://extensions and select developer mode.
2. Select "Load unpacked". When prompted, choose this directory.
3. You're all set!

## Usage
Simply right-click on an image and select Mantis OCR from the menu that pops up. This will open a submenu with two options: "Read the image text" and "Get the image content". Select the one you want and a popup with either the text or a brief description will appear momentarily.
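The extension itself is JavaScript, but the shape of the OCR call it relies on is easiest to show in a short Python sketch. The resource endpoint, key, and the v3.2 API version below are assumptions, not values from this repo:

```python
# Sketch of the Azure Computer Vision OCR call the extension relies on.
# Resource endpoint and key are placeholders; v3.2 of the API is assumed.
import requests

ENDPOINT = "https://example-resource.cognitiveservices.azure.com"
KEY = "YOUR-KEY-HERE"

def read_image_text(image_url: str) -> str:
    resp = requests.post(
        f"{ENDPOINT}/vision/v3.2/ocr",
        params={"language": "unk", "detectOrientation": "true"},
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"url": image_url},
        timeout=10,
    )
    resp.raise_for_status()
    words = [w["text"]
             for region in resp.json().get("regions", [])
             for line in region["lines"]
             for w in line["words"]]
    return " ".join(words)
```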
## Inspiration
We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle with clear shorthand. Whether you're a second-language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.

## What it does
We use an LLM to generate sample phrases, sentences, or character strings that target letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!

## How we built it
We used a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and plain JavaScript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with Python sequence-alignment algorithms (a sketch of that scoring idea follows below).

## Challenges we ran into
We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept. ..also sleep 🥲

## Accomplishments that we're proud of
We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.

## What we learned
Three of us had never been to a hackathon before! Three of us had never used Flask before! All of us had never worked together before! From working with an entirely new team to utilizing specific frameworks, we learned A TON... and also, just how much caffeine is too much (hint: NEVER).

## What's Next for Handwriting Teacher
Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online.) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress.
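The project's own alignment code isn't shown here, but the scoring idea can be sketched as edit distance normalized into a 0-100 legibility score. The formula is an illustrative stand-in, not the project's implementation:

```python
# Illustrative scoring: align the OCR transcription against the target
# phrase with edit distance and turn it into a 0-100 legibility score.
# This is a stand-in for the project's alignment logic, not its code.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def legibility_score(target: str, ocr_output: str) -> float:
    dist = edit_distance(target.lower(), ocr_output.lower())
    return max(0.0, 100.0 * (1 - dist / max(len(target), 1)))

print(legibility_score("quick brown fox", "quick brwn fox"))  # ~93
```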
losing
# The project
HomeSentry is an open-source platform that turns your old phones, or any devices, into a distributed security camera system. Simply install our app on any mobile device and start monitoring your home (or any other place). HomeSentry gives a new life to your old devices while bringing you the peace of mind you deserve!

## Inspiration
We all have old phones stored in the bottom of a drawer, waiting to be used for something. This is where the inspiration for HomeSentry came from. Indeed, we wanted to give our old cellphones and electronic devices a new use so they don't just collect dust over time. Generally speaking, every cellphone has a camera that could be used for something, and we thought using it for security was a great idea. Home surveillance camera systems are often very expensive, or complicated to set up. Our solution is very simple and costs close to nothing, since it uses equipment you already have.

## How it works
HomeSentry turns your old cellphones into a complete security system for your home. It's a modular solution where you can register as many devices as you have at your disposal. Every device is linked to your account and automatically streams its camera feed to your personal dashboard in real time. You can view your security footage from anywhere by logging in to your HomeSentry dashboard.

## How we built it
The HomeSentry platform consists of 3 main components:

#### 1. HomeSentry Server
The server's main responsibility is to handle all authentication requests and orchestrate camera connections. It is in charge of connecting the mobile app to users' dashboards so that they may get the live stream footage. This server is built with Node.js and uses MongoDB to store user accounts.

#### 2. HomeSentry Mobile app
The user opens the app on their cellphone and enters their credentials. They may then start streaming the video from their camera to the server. The app is currently a web app built with the Angular framework. We plan to convert it to an Android/iOS application using Apache Cordova at a later stage.

#### 3. HomeSentry Dashboard
The dashboard is the user's main management panel. It allows the user to watch all of the streams they are receiving from the connected cellphones. The website was also built with Angular.

## Technology
On a more technical note, this app uses several open-source frameworks and libraries to accomplish its work. Here's a quick summary. The Node.js server is built with TypeScript and Express.js. We use Passport.js + MongoDB as our authentication system and Socket.IO to exchange real-time data between every user's devices (cameras) and the dashboard. On the mobile side, we are using WebRTC to access the devices' camera streams and to link them to the dashboard. Every camera stream is distributed over a peer-to-peer connection with the web dashboard when it becomes active. This ensures the streams' privacy and reduces video latency. We used Peer.js and Socket.IO to implement this mechanism. Just like the mobile client, the web dashboard is built with Angular and frontend libraries such as Bootstrap and feather-icons.

## Challenges we ran into (and what we've learned)
Overall, we learned that sending live streams is quite complicated! We had underestimated the effort required to send and manage these feeds. While working with this type of media, we learned how to communicate with WebRTC.
At the beginning, we tried to do everything ourselves and use different protocols such as RTMP, but we reached a point where it was a little buggy. Late in the event, we found and used the PeerJS library to manage those streams, and it considerably simplified our code. We found that working with mobile frameworks like Xamarin is much more complicated for this kind of project. The easiest way was clearly JavaScript, and it allows a greater range of devices to be registered as cameras. The project also helped us improve our knowledge of real-time messaging and WebSockets by using Socket.IO to add a new stream without having to refresh the web page. We also used an authentication library we hadn't used before, called Passport.js for Node. With this we were able to show only the streams of a specific user. We hosted an App Service with Node.js on Azure for the first time and configured CI from GitHub. It's nice to see that they use GitHub Actions to automate this process. Finally, we sharpened our skills with various frontend technologies such as Angular.

## What's next for HomeSentry
HomeSentry works very well for displaying the feeds of a specific user. Now, what might be cool is to add some analytics on those feeds to detect motion and different events. We could send a notification of these movements by SMS/email, or even send a push notification if we compile this application with Cordova and distribute it to the App Store and Google Play. Adding the ability to record and save the feed when motion is detected could be a great addition. With detection, we could store this data in local storage and in the cloud. Working offline could also be a great addition. At last, improving quality assurance to ensure that the panel works on any device would be a great idea. We believe that HomeSentry can be used in many residences, and we hope this application will help people secure their homes without having to invest in expensive equipment.
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies revolving around cigarette butt litter, I learned that cigarette butts are actually the #1 most littered object in the world and are toxic waste. Here are some quick facts:

* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. This is about **6 and a half CN Towers** worth of litter (by weight), which is huge!
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all litter in some years.
* The city of San Francisco spends up to $6 million per year cleaning up cigarette butt litter.

Thus our team decided to develop a cost-effective robot to rid the streets of cigarette butt litter.

## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:

1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts

## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect them with a roller mechanism. Below are more in-depth explanations of each part of our robot.

### Software
We needed a method to easily detect cigarette butts on the ground, so we used computer vision. We made use of this open-source project: [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the images with TensorFlow, and then output the coordinates of the cigarette's location for the robot. The Raspberry Pi would then send these coordinates to an Arduino over UART (a short sketch of this hand-off appears after the sources below).

### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller mechanism. The basic idea of the Arduino code is:

1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered over UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat

We use sensors such as a gyro and accelerometer to detect the speed and orientation of our robot, so we know exactly where to travel. The robot uses an ultrasonic sensor to avoid obstacles and make sure that it does not bump into humans or walls.

### Mechanical
We used SolidWorks to design the chassis, the roller/sweeper mechanism, and the camera mounts. We assembled the robot with VEX parts. The mount was 3D-printed based on the SolidWorks model.

## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g., what happens if the robot gets stolen, or what happens if the robot is knocked over.
3. Shipping complications: Some desired parts would not have shipped until after the hackathon.
Alternative choices were made, and we worked around shipping dates.

## Accomplishments that we're proud of
We are proud of being able to efficiently organize ourselves and create this robot even though we worked remotely. We are also proud of creating something that contributes to our environment and helps keep our Earth clean.

## What we learned
We learned about machine learning and Mask R-CNN. We had never dabbled much in machine learning before, so it was awesome being able to play with computer vision and detect cigarette butts. We also learned a lot about Arduino and path planning to get the robot where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.

## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some examples of features that could be added:

* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point.
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets, since it could easily be damaged or stolen (e.g., someone could easily rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.

## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years).
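As promised above, here is a minimal Python sketch of the Raspberry Pi side of the UART hand-off; the port name, baud rate, and CSV framing are assumptions, not Cigbot's actual protocol:

```python
# Sketch of the Raspberry Pi side of the UART hand-off: once Mask R-CNN
# localizes a butt, send its ground-plane coordinates to the Arduino.
# Port name, baud rate, and message framing are assumptions.
import serial

ser = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)

def send_target(x_cm: float, y_cm: float) -> None:
    # Simple newline-terminated CSV frame the Arduino can parse
    ser.write(f"{x_cm:.1f},{y_cm:.1f}\n".encode("ascii"))

send_target(32.5, 14.0)
```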
## Inspiration
Ever join a project only to be overwhelmed by all of the open tickets? Not sure which tasks you should take on to increase the team's overall productivity? As students, we know the struggle. We also know that this does not end when school ends; in many different work environments you may encounter the same situations.

## What it does
tAIket allows project managers to invite collaborators to a project. Once a user joins the project, tAIket analyzes their resume for both soft skills and hard skills. Once the user's resume has been analyzed, tAIket provides the user a list of tickets sorted in order of what it has determined the user would be best at. From there, the user can accept a task, work on it, and mark it as complete! This helps increase productivity, as it matches users with tasks they should be able to complete with relative ease.

## How we built it
Our initial prototype of the UI was designed in Figma. The frontend was then developed using the Vue framework. The backend was done in Python via the Flask framework. The database we used to store users, projects, and tickets was Redis.

## Challenges we ran into
We ran into a few challenges throughout the course of the project. Figuring out how to parse a PDF and using fuzzy searching and cosine similarity analysis to identify the user's skills were a few of our main challenges. Working out how to use Redis was another. Thanks to help from the wonderful mentors and some online resources (documentation, etc.), we were able to work through these problems. We also had some difficulty making our site look clean; we ended up studying many different sites to identify some key ideas in overall web design.

## Accomplishments that we're proud of
Overall, we have much to be proud of from this project. For one, we are happy to have implemented fuzzy searching and cosine similarity analysis. Additionally, knowing how long the process of creating a UI normally takes, especially when considering user-centered design, we are proud of the UI we were able to create in the time we had.

## What we learned
Each team member has a different skillset and knowledge level. For some of us, this was a great opportunity to learn a new framework, while for others it was a great opportunity to challenge and expand existing knowledge. This was the first time we used Redis, and we found it fairly easy to understand. We also had the chance to explore natural language processing with fuzzy search and our cosine similarity analysis.

## What's next for tAIket
In the future, we would like to add the ability to assign a task to all members of a project; some tasks in projects *must* be completed by everyone, so we believe this functionality would be useful. We would also like "regular" users to be able to "suggest" a task, since sometimes a user notices something broken or unfinished that the project manager has not. Finally, we would work on implementing the features located in the sidebar of the screen where the tasks are displayed.
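The writeup doesn't include the matching code, but a TF-IDF plus cosine-similarity ranking in the spirit of tAIket's step can be sketched as below; the resume text and ticket corpus here are made up:

```python
# Illustrative skill-to-ticket matching with TF-IDF + cosine similarity,
# in the spirit of tAIket's ranking step; the corpora here are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resume_text = "python flask redis rest apis unit testing"
tickets = [
    "Build a Flask endpoint for ticket assignment",
    "Design the landing page layout in Figma",
    "Cache project metadata in Redis",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([resume_text] + tickets)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank tickets from best to worst fit for this user
for score, ticket in sorted(zip(scores, tickets), reverse=True):
    print(f"{score:.2f}  {ticket}")
```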
winning
## Inspiration
Poor Economics by Abhijit Banerjee and Esther Duflo.

## What it does
Uses blockchain technology to build a consumer-centric, peer-to-peer insurance system.

## How I built it
Solidity, Flask, React, MongoDB.

## Challenges I ran into
Deploying Solidity, and integrating Solidity contracts with the Flask web app.

## Accomplishments that I'm proud of
Everything.

## What I learned
Solidity, and building on a web framework.

## What's next for DisasterVibes
Continuing to build this for other applications. Improving the oracle by making it more algorithmic, and implementing AI algorithms.
## Inspiration
Our inspiration stems from a fundamental realization about the critical role food plays in our daily lives. We've observed a disparity, especially in the United States, where the quality and origins of food are often overshadowed, leading to concerns about the overall health impact on consumers. Several team members had the opportunity to travel to regions where food is not just sustenance but a deeply valued aspect of life. In these places, the connection between what we eat, our bodies, and the environment is highly emphasized. This experience ignited a passion within us to address the disconnect in food systems, prompting the creation of a solution that brings transparency, traceability, and healthier practices to the forefront of the food industry. Our goal is to empower individuals to make informed choices about their food, fostering a healthier society and a more sustainable relationship with the environment.

## What it does
There are two major issues this app tries to address. The first concerns those involved in the supply chain: the producers, inspectors, processors, distributors, and retailers. The second concerns the end user. For those who are involved in making the food, each step along the supply chain is tracked, starting with the producer. For the consumer at the very end, the app presents a journey of where the food came from, including its location, description, and quantity. Throughout its supply chain journey, each food shipment carries a label that the producer puts on first. This is stored on the blockchain for guaranteed immutability. As the shipment moves from place to place, each entity (producer, processor, distributor, etc.) can add its own updated comment with its own verifiable signature and decentralized identifier (DID). We did this through a unique identifier encoded in a QR code (a sketch of generating such a label follows below). This creates tracking information for each shipment, which eventually reaches the end consumer, who can see the entire history by tracing a map of where the shipment has been.

## How we built it
To build this app, we used both blockchain and web2 in order to spread the load across different servers. We wrote a Solidity smart contract and used Hedera to guarantee the immutability of the shipment record, and each identifier is given its own verifiable certificate tied to its location. We then used a Node/Express server that connects the blockchain with our SQLite database through the Prisma ORM. Finally, we used Firebase to authenticate the whole app and to provide unique roles and identifiers. On the front end, we decided to build a React Native app in order to support both Android and iOS. We used various libraries to integrate QR codes and Google Maps. Wrapping all this together, we have a fully functional end-to-end user experience.

## Challenges we ran into
A major challenge was that Hedera doesn't have built-in support for constructing arrays of objects through our Solidity contract. This was a major limitation, and we had to find other ways to ensure that our product guaranteed full transparency.
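As a sketch of the label step, encoding a shipment's identifier into a printable QR code can look like the snippet below; the ID scheme and payload fields are illustrative only, not FoodChain's actual format:

```python
# Sketch: encode a shipment's identifier into the QR label that travels
# with it; the ID scheme and payload fields are illustrative only.
import json
import qrcode

shipment = {
    "id": "SHIP-0042",            # hypothetical unique identifier
    "origin": "Farm A, CA",
    "qty": "500 kg",
}

img = qrcode.make(json.dumps(shipment))
img.save("shipment_0042.png")     # printed and attached to the shipment
```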
## Accomplishments that we're proud of
These are some of the things our app can achieve:

* Accurate and tamper-resistant food data
* Efficiently preventing, containing, or rectifying contamination outbreaks while reducing the loss of revenue
* Creating more transparency and trust in the authenticity of Verifiable Credential data
* Helping eliminate and prevent fraud through Verifiable Credentials

## What we learned
We learned a lot about the complexity of food supply chains. We understand that this problem may take many helping hands to build out, but it is really possible to make the world a better place. For the producers, distributors, and others handling the food, the app helps prevent outbreaks by keeping track of key information as shipments transfer from one place to another. They can efficiently track and monitor their food supply chain, ensuring trust between parties. Consumers want to know where their food comes from, and this tool is perfect for understanding where their next meal is coming from so they can stay strong and fit.

## What's next for FoodChain
The next step is to continue building out all the different moving parts of this app. There are many directions to take the app given the complexity of the supply chain. We can narrow down to a specific industry, or we can make this inclusive with the help of web2 + web3. We look forward to piloting this with companies who want to prove that their food ingredients and products are the best.
## Inspiration
The sparkling bay and rolling hills captivated us as our plane descended into the Bay Area. We were excited to see the beauty of the Golden State on the ground, but as we rode the Caltrain from the SFO airport to Stanford, we saw many highway underpasses and beat-up towns littered with trash. We couldn't help but notice the stark contrast between the beautiful state of California and the poor condition of some of its communities. On that ride, we envisioned something that could bring back the beauty of the Golden State and bring others closer together through Web3.

## What it does
Our platform allows people to post projects that need to be done in their community, such as picking up trash. Those who post projects are the "Hosts" and can manage who works on them. "Donors" can contribute funds to these projects to provide a financial incentive for "Helpers" to complete the posted projects. Upon approval by the Host, the Helpers split the money tied to the project. The Host uploads a description and images of the project to be done, and the Helpers upload pictures as proof that they have completed it.

## How we built it
We first architected our solution on OneNote, using user stories, domain models, and use case diagrams. After architecting, we split up to develop the backend (Ethereum contracts) and frontend (Next.js), communicating about how to integrate data. We deployed the contracts on the Arbitrum rollup due to its low gas costs. To store the photos that Hosts and Helpers upload, we used the IPFS service Pinata (a sketch of that upload step follows below).

## Challenges we ran into
1. While we had success using Estuary's alpha UI on Friday night, we returned Saturday to connection timeout issues and problems connecting to their nodes. We then decided to switch to the more reliable and familiar Pinata for IPFS operations.
2. Most of the modern Ethereum development suite was created within the past year, leading to poor documentation and support for certain tools. This especially affects tools like wagmi, which was created mere months ago. We had trouble finding documentation for the complex use cases we needed, such as dynamically reading contracts created from our factory design model and complex parameter handling.
3. Centering divs (this stumped ChatGPT too).
4. Architecture and integration between the Next.js frontend and Node.js backend, especially when dealing with images.
5. Testing and securing smart contracts ahead of their immutable deployments.

## Accomplishments that we're proud of
We are proud to have produced a professional, polished product in 36 hours, and to have overcome our obstacles during the short time frame.

## What we learned
We learned lots of technical skills, from working with Next.js to file transport with IPFS to Ethereum smart contracts.

## What's next for Helping Hand
We have lots of features in mind:

1. Sort all of the projects by proximity automatically when you browse the available projects.
2. Add support for other coins.
3. Add voting and delegation so that funds can be distributed according to labor.
4. Use zero-knowledge cryptography to maintain privacy.
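As a sketch of the Pinata upload step mentioned above, pinning a proof photo over the REST API can look like this; the JWT and filename are placeholders:

```python
# Sketch of pinning a project photo to IPFS via Pinata's REST API,
# as used for Host/Helper uploads; the JWT is a placeholder.
import requests

PINATA_JWT = "YOUR-PINATA-JWT"

def pin_photo(path: str) -> str:
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.pinata.cloud/pinning/pinFileToIPFS",
            headers={"Authorization": f"Bearer {PINATA_JWT}"},
            files={"file": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["IpfsHash"]   # CID stored on-chain with the project

print(pin_photo("proof_of_cleanup.jpg"))
```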
partial
## Inspiration
DermaDetect was born out of a commitment to improve healthcare equity for underrepresented and economically disadvantaged communities, including seniors, other marginalized populations, and those impacted by economic inequality. Recognizing the prohibitive costs and emotional toll of traditional skin cancer screenings, which often result in benign outcomes, we developed an open-source AI-powered application to provide preliminary skin assessments. This innovation aims to reduce financial burdens and emotional stress, offering immediate access to health information and making early detection services more accessible to everyone, regardless of their societal status.

## What it does
* AI-powered analysis: a fine-tuned ResNet50 convolutional neural network classifier that predicts whether skin lesions are benign or cancerous, leveraging the open-source HAM10000 dataset (a sketch of this transfer-learning setup appears at the end of this writeup).
* Protecting patient data confidentiality: our application uses OAuth technology (Clerk and Convex) to authenticate and verify users logging into our application, protecting patient data when users upload images and enter protected health information (PHI).
* Understandable and age-appropriate information: Prediction Guard LLM technology offers clear explanations of results, fostering informed decision-making for users while respecting patient data privacy.
* Journal entry logging: the Convex backend database schema allows users to make multiple journal entries, monitor their skin, and track moles over long periods.
* Seamless triaging: a direct connection to qualified healthcare providers eliminates unnecessary user anxiety and wait times for concerning cases.

## How we built it
**Machine learning model**
TensorFlow and Keras (which facilitated our model training and architecture), Python, OpenCV, Prediction Guard LLM, Intel Developer Cloud, Pandas, NumPy, scikit-learn, Matplotlib

**Frontend**
TypeScript, Convex, React.js, shadcn (components), Framer Motion (animated components), TailwindCSS

**Backend**
TypeScript, Convex database & file storage, Clerk (OAuth user login authentication), Python, Flask, Vite, Infobip (a Twilio-like service)

## Challenges we ran into
* We had a lot of trouble cleaning and applying the HAM10000 skin image dataset. Due to long run times, we found it very challenging to make any progress on tuning our model and sorting the data. We eventually started splitting our dataset into smaller batches, training our model on a small amount of data before scaling up, which worked around the problem. We also had a lot of trouble normalizing our data and figuring out how to deal with a large Melanocytic nevi class imbalance. After much trial and error, we were able to correctly apply data augmentation and oversampling methods to address the class imbalance.
* One of our biggest challenges was setting up our backend Flask server. We encountered many environment errors, and for a large portion of the time, the server was only able to run on one computer. After many Google searches, we persevered and resolved the errors.

## Accomplishments that we're proud of
* We are incredibly proud of developing a working open-source, AI-powered application that democratizes access to skin cancer assessments.
* Tackling the technical challenges of cleaning and applying the HAM10000 skin image dataset, dealing with class imbalance, and normalizing data has been a journey of persistence and innovation.
* Setting up a secure and reliable backend server was another significant hurdle we overcame.
The process taught us the importance of resilience and resourcefulness, as we navigated numerous environment errors to achieve a stable and scalable solution that protects patient data confidentiality.

* Integrating many technologies that were new to much of the team, such as Clerk for authentication, Convex for user data management, Prediction Guard LLM, and Intel Developer Cloud.
* Extending beyond the technical domain, reflecting a deep dedication to inclusivity, education, and empowerment in healthcare.

## What we learned
* The critical importance of data quality and management in AI-driven applications. The challenges we faced in cleaning and applying the HAM10000 skin image dataset underscored the need for meticulous data preprocessing to ensure model accuracy, reliability, and equity.
* How to integrate many different new technologies, such as Convex, Clerk, Flask, Intel Developer Cloud, Prediction Guard LLM, and Infobip, to create a seamless and secure user experience.

## What's next for DermaDetect
* Finding users to foster future development and feedback.
* Partnering with healthcare organizations and senior communities for wider adoption.
* Continuously improving data curation, model training, and user experience through ongoing research and development.
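As promised above, here is a minimal sketch of the transfer-learning setup: a frozen ResNet50 backbone with a new binary head for benign vs. cancerous. The input size, optimizer, augmentation settings, and class weights are illustrative assumptions, not DermaDetect's tuned values:

```python
# Sketch of the transfer-learning setup described above: ResNet50
# backbone frozen, new binary head for benign vs. cancerous. Input
# size, optimizer, and augmentation settings are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False,
                input_shape=(224, 224, 3))
base.trainable = False            # freeze backbone for the first phase

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # benign vs. cancerous
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Augmentation + class weighting as one answer to the nevi imbalance
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=20,
    horizontal_flip=True, zoom_range=0.1)
# model.fit(datagen.flow(x_train, y_train),
#           class_weight={0: 1.0, 1: 4.0}, epochs=10)
```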
## Inspiration
We want to make healthcare more accessible through our skINsight app.

## What it does
Identifies skin conditions using a picture of the affected skin area, and provides a chatbot for help and information on treating the condition.

## How we built it
An app built with React Native and Node.js. A custom classifier model built with the Microsoft Azure Cognitive Services Computer Vision API. A chatbot built with the QnA Maker API. A web crawler written in Python to create our dataset of pictures (a sketch of that crawler pattern follows below).

## Challenges we ran into
We didn't have an existing dataset to work with, so we created our own! The functionality to take a live picture of the suspected skin area could not be tested, as the camera does not work in the Xcode simulator.

## Accomplishments that we're proud of and what we learned
Learning how to make a web crawler, and using the Microsoft Azure machine learning platform.

## What's next for skINsight
Integrate all components of the app, and publish it to the App Store!
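The crawler itself isn't shown in the writeup, but the basic pattern of fetching a page, collecting image sources, and saving them locally looks like the sketch below; the start URL and filtering are placeholders, and real use should respect robots.txt and licensing:

```python
# Sketch of the kind of crawler used to assemble the image dataset:
# fetch a page, collect <img> sources, and save them locally. The URL
# and filtering are placeholders; respect robots.txt in real use.
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

START_URL = "https://example.com/skin-conditions"   # hypothetical source

def crawl_images(url: str, out_dir: str = "dataset") -> None:
    os.makedirs(out_dir, exist_ok=True)
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    for i, img in enumerate(soup.find_all("img")):
        src = urljoin(url, img.get("src", ""))
        if not src.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        data = requests.get(src, timeout=10).content
        with open(os.path.join(out_dir, f"img_{i}.jpg"), "wb") as f:
            f.write(data)

crawl_images(START_URL)
```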
# Project Inspiration
Our project was inspired by the innovative work showcased at an Intel workshop, which harnessed the power of image recognition for wildfire prediction and even the identification of ancient fossils. This ingenuity sparked our desire to create a groundbreaking skin model. Our goal was to develop an AI solution that could analyze user-submitted skin photos, providing not just a diagnosis but also essential information on disease risks and potential treatments.

# Overcoming Challenges
Our journey was marked by challenges, the primary hurdle being the fine-tuning of the AI model. We encountered difficulties stemming from dependencies, requiring relentless problem-solving. Additionally, we faced intermittent connectivity issues with Intel's cloud service and Jupyter Notebook, which occasionally disrupted our training process. Despite these obstacles, we remained resolute in our mission to deliver a valuable tool for the early detection of skin diseases.
partial
## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder: usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students, and to help classes be more engaging and encourage more students to attend, especially in the younger grades.

## What it does
Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There are two views, the teacher's view and the student's view. Each view has a canvas that the corresponding user can draw on. The difference between the views is that the teacher's view contains a list of all the students' canvases, while the students can only view the teacher's canvas in addition to their own. An example use case for our application would be a math class where the teacher puts a math problem on their canvas and students show their work and solutions on their own canvases. The teacher can then verify that the students are reaching the solution properly, and can help students they see struggling. Students can follow along, and when they want the teacher's attention, click the "I'm Done" button to notify the teacher. Teachers can see their boards and mark up anything they want. Teachers can also put students in groups, and those students can share a whiteboard to collaborate.

## How we built it
* **Backend:** We used Socket.IO to handle the real-time updates of the whiteboard (a sketch of the relay pattern follows below). We also have a Firebase database to store user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine, and the frontend website is hosted on Firebase and redirected through Domain.com.

## Challenges we ran into
Understanding and planning an architecture for the application. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining their functionality was also an issue we faced.

## Accomplishments that we're proud of
We successfully displayed multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we were able to use it successfully in our project.

## What we learned
This was the first time we used Socket.IO to handle real-time connections. We also learned how to draw mouse strokes on a canvas in React.

## What's next for Lecturely
This product can be useful even past digital schooling, as it can save schools money on supplies, so it could benefit from more features. Currently, Lecturely doesn't support audio, but it is on our roadmap; until then, classes would still need another piece of software running to handle audio communication.
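The relay pattern behind the whiteboard sync can be sketched as below. The project ran on the JavaScript Socket.IO stack; python-socketio is shown here only for consistency with the other sketches in this document, and the event names and payload shape are illustrative:

```python
# Sketch of the whiteboard relay pattern: receive a stroke event from
# one client and broadcast it to everyone else in the same room.
# Event names and payload shape are illustrative assumptions.
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def join(sid, data):
    sio.enter_room(sid, data["class_id"])

@sio.event
def stroke(sid, data):
    # Fan the stroke out to the rest of the class, but not the sender
    sio.emit("stroke", data, room=data["class_id"], skip_sid=sid)

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```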
# 4Course: Revolutionizing Collaborative Learning 🏆

#### Inspiration
Empowering students to excel together and innovate in the realm of education.

#### What it does 🚀
4Course facilitates seamless collaboration among students, allowing them to share Google Docs links for class materials. It also offers a unique feature where students can collaborate on homework assignments for free. Using Flow, ownership of projects can be transferred among classmates securely, and blockchain technology ensures the integrity of shared homework, preventing plagiarism by securing ownership.

#### How we built it 💻
We used Next.js for the frontend, Firebase for messaging, Cadence for smart contracts, and JavaScript for functionality. We leveraged AWS EC2 for initial messaging but transitioned to Firebase for scalability and efficiency.

#### Challenges we ran into 🛠️
Integrating the various technologies seamlessly, ensuring robust security measures, and optimizing performance were the key challenges we encountered.

#### Accomplishments that we're proud of
Creating a platform that fosters collaboration, innovation, and academic integrity, and implementing cutting-edge technologies to deliver a user-friendly and secure experience.

#### What we learned
We gained valuable insights into integrating blockchain for security, optimizing frontend performance with Next.js, and effectively managing collaborative projects in a digital environment.

#### What's next for 4Course 📈
Continued enhancements to the user experience, integration of additional collaboration tools, expansion of platform features, and partnerships with educational institutions to promote collaborative learning on a larger scale.
We drew our first kindling of inspiration from the global success of Pokemon Go as an AR gaming platform that managed to have a significant social impact. Brainstorming possible ways to retool its AR/geolocation base, we immediately realized its immense potential for social good, due to its ability to meaningfully engage entire populations, its novelty and distinctiveness from other social media platforms, and its ability to be integrated into people's everyday lives. Instead of Pokemon, obviously, we opted for simple yet likable balloons as AR markers that would be available for people to drop, find, and engage with. We were especially interested in two specific applications of this platform: (1) a feature for people to drop balloons on a map that others could find to receive a message of positive affirmation, and (2) a feature allowing local businesses to drop coupons around their locations, both to improve customer value and to attract nearby customers.

It was a bit difficult to get started because we needed a way to combine AR with maps in a manner similar to Pokemon Go, and most of the tools we found were outdated and incompatible with more recent versions of Unity. Eventually, we decided on Vuforia for AR due to its built-in integration with Unity, Mapbox for mapping because it seemed to be the only viable option, and Firebase for databases. We then used C# to bring these all together.

We faced many challenges throughout this project, the most frustrating being the amount of time it took to download the necessary software. Additionally, it was rather difficult near the end to consolidate the AR, mapping, and database code (all of which were written separately and merged at the end), especially because team members used different versions of Unity. In the end, however, we were able to overcome these software challenges and use our shared vision of bubbly positivity to develop an ambitious and socially conscious game.

While we didn't have time to implement many of the features we intended, we would still like to see these features in the future. These include special bubble coupons, browsing through past bubbles, a tutorial, and potentially a spin on the map: introducing a heart that gets hotter or colder based on whether the player walks closer to or farther from a bubble. We would also reach out to local businesses (much like Snackpass) and gauge interest in having special business-specific bubble marketing opportunities.
partial
## 💡Inspiration💡
According to statistics, hate crimes and street violence have increased exponentially, and the violence does not end there. Many oppressed groups face physical and emotional racial hostility in the same way. These crimes harm not only the victims but also people who share a similar identity. Aside from racial identities, people of all genders report feeling more anxious about exploring the outside environment due to higher crime rates. After witnessing an upsurge in urban violence and fear of the outside world, we developed Walk2gether, an app that addresses the issue of feeling unsafe when venturing out alone and fundamentally alters the way we travel.

## 🏗What it does🏗
It offers a remedy to the stress that comes with walking outside, especially alone. Since travelling with friends lessens anxiety, the app incorporates that option, and it surfaces information about local criminal activity to help people make informed travel decisions. It also lets users adjust settings to be warned of specific situations, and incorporates heat map technology that displays red-alert zones in real time, allowing users to chart their route comfortably. Its campaign for social change is closely tied to our desire to see more people, particularly women, outside without being burdened by fears about their surroundings.

## 🔥How we built it🔥
How can we make women feel more secure while roaming about their city? How can we bring together student travellers for a safer journey? These questions helped us outline the issues we wanted to address as we moved into the design stage. We then created a website using HTML/CSS/JS and used Figma to prepare the prototype. We used Auth0 for multi-factor authentication. CircleCI is used so that we can deploy the website in a smooth, easy-to-verify pipeline. AssemblyAI is used for speech transcription, paired with Twilio for messaging and connecting friends for the journey to a destination. Twilio SMS is also used for alerts and notifications (a sketch of the SMS alert call follows at the end of this writeup). We also used Coil for membership via web-based monetization, and for donations to fund better safe-route facilities.

## 🛑 Challenges we ran into🛑
One problem we encountered was market viability: there are many safety and crime-reporting apps on the app store. Many of them, however, were either paid, had poor user interfaces, or did not plan routes based on reported occurrences. Another challenging part was scoping the solution, since there were additional features that could have been included, and we had to pick only the handful most critical to getting the product started. Also, our team began working on the hack a day before the deadline, and we ran into some difficulties while tackling numerous problems. Learning to work with the various technologies came with a learning curve. We have ideas for other features we'd like to include in the future, but we wanted to make sure that what we had was production-ready and offered a pleasant user experience first.

## 🏆Accomplishments that we're proud of: 🏆
We arrived at a solution to this problem and created an app that is very viable and could be widely used by women, college students, and any other frequent walkers! We also completed the front end and back end within the tight deadline we were given, and we are quite pleased with the final outcome.
We are also proud that we learned so many technologies and completed the whole project with just 2 members on the team.

## What we learned
We discovered critical safety trends and pain points that our product may address. Over the last few years, urban centres have seen a significant increase in hate crimes and street violence, and the internet has made individuals feel even more isolated.

## 💭What's next for Walk2gether💭
Some of the features we plan to incorporate in the coming days include more detailed crime mapping and additional facts to facilitate learning about the crimes happening nearby.
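As promised above, the Twilio alert call can be sketched as below; the account SID, auth token, and phone numbers are placeholders:

```python
# Sketch of the Twilio alert described above: text a friend when the
# user starts a walk or enters a red-alert zone. SIDs, tokens, and
# phone numbers are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

def send_alert(to_number: str, zone: str) -> None:
    client.messages.create(
        body=f"Walk2gether alert: your friend just entered {zone}.",
        from_="+15550006789",      # Twilio number (placeholder)
        to=to_number,
    )

send_alert("+15551234567", "a red-alert zone near 5th & Main")
```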
## Contributors
Andrea Tongsak, Vivian Zhang, Alyssa Tan, and Mira Tellegen

## Categories
* **Route: Hack for Resilience**
* **Route: Best Education Hack**

## Inspiration
We were inspired to focus our hack on the rise of Instagram accounts exposing sexual assault stories from college campuses across the US, including the Case Western Reserve University account **@cwru.survivors**, and on the history of sexual assault on campuses nationwide. We wanted to create an iOS app that would help sexual assault survivors and students navigate the dangerous reality of college campuses. With our app, it will be easier for a survivor to report instances of harassment, while maintaining the integrity of the user data and ensuring that data is anonymous and randomized. Our app maps safe and dangerous areas on campus based on user data, to help women, minorities, and sexual assault survivors feel protected.

### **"When I looked in the mirror the next day, I could hardly recognize myself. Physically, emotionally, and mentally."** - A submission on the @cwru.survivors IG page

Even with the **#MeToo movement**, there's only so much that technology can do. However, we hope that by creating this app, we will help college students take accountability and create a campus culture that fosters learning and contributes towards social good.

### **"The friendly guy who helps you move and assists senior citizens in the pool is the same guy who assaulted me. One person can be capable of both. Society often fails to wrap its head around the fact that these truths often coexist, they are not mutually exclusive."** - Chanel Miller

## Brainstorming/Refining
* We started with the idea of mapping sexual assaults that happen on college campuses. However, throughout the weekend, we were able to brainstorm many directions to take the app in.
* We considered making the app a platform focused on telling the stories of sexual assault survivors through maps containing quotes, but decided to pivot based on security concerns about protecting the identity of survivors, and towards an app with everyday functionality.
* We were interested in implementing an emergency messaging app that would alert friends to dangerous situations on campus, but found similar apps already existed, so we kept brainstorming towards something more original.
* We were inspired by the heat map functionality of Snap Maps, and decided to pursue the idea of a map showing where users had reported danger or sexual assault on campus. With this idea, the app could be interactive for the user, present a platform for sexual assault survivors to share where they had been assaulted, and serve as a hub for women and minorities to check the safety of their surroundings. The app would customize itself to a campus based on the app users in the area protecting each other.

## What it does
## App Purpose
* Our app allows users to create a profile, then sign in to view a map of their college campus or area. The map shows a heat map of dangerous areas on campus, from areas with many assaults or dangers reported to areas where app users have felt safe.
* This map is generated by allowing users to anonymously submit a date, address, and story related to sexual assault or feeling unsafe. The map is then generated from the user data.
* Therefore, users of the app can assess their safety based on other students' experiences, and understand how to protect themselves on campus.
## Functions
* Account creation and sign-in using **Firebase**, to allow users to have accounts and profiles
* Home screen with a heat map of dangerous locations in the area, using the **Mapbox SDK**
* Profile screen, listing contact information and displaying the user's past submissions of dangerous locations
* Submission screen, where users can enter an address, time, and story related to a dangerous area on campus

## How we built it
## Technologies Utilized
* **Mapbox SDK**
* **GitHub**
* **Xcode & Swift**
* **Firebase**
* **Adobe Illustrator**
* **Google Cloud**
* **Canva**
* **CocoaPods**
* **SurveyMonkey**

## Mentors & Help
* Ryan Matsumoto
* Rachel Lovell

## Challenges we ran into
**Mapbox SDK**
* Integrating an outside mapping service came with a variety of difficulties. We ran into problems learning their platform and troubleshooting errors with the Mapbox view. Furthermore, Mapbox has a lot of navigation functionality. Since our goal was a data map with strong visual effect and easy readability, we had to adapt the Mapbox SDK to be usable with lots of data inputs. This meant coding the map to auto-adjust with each new submission of a dangerous location on campus.

**UI Privacy Concerns**
* The Mapbox SDK was created to pin very specific locations. However, our app deals with data points marking locations of sexual assault or unsafe areas. This raises the concern of protecting the privacy of the people who submit addresses, and ensuring that users can't see the exact location submitted. So, we had to adjust the code to limit how far a user can zoom in, and to render a heat map of general locations rather than pins.

**Coding for non-tech users**
* Our app, **viva**, was designed to be used by college students on their nights out, or at parties. The idea is for them to check the safety of their area while walking home or while out with friends. So, we had to appeal to an audience of young people using the app in their free time or during special occasions. This meant the app would not appeal if it seemed tech-y or hard to use. So, we worked to incorporate many functionalities and a user interface that is easy to use and appealing to young people.

## Accomplishments that we're proud of
## What we learned
We learned so much about so many different aspects of coding while hacking this app. First, the majority of the people in our group had never used **GitHub** before, so even just setting up GitHub Desktop, coordinating pushes, and allowing permissions was a struggle. We feel we have a good grasp of GitHub after the project, whereas before it was brand new. Being remote, we also faced Xcode compatibility issues, to the point that one person in our group couldn't demo the app because of her Xcode version. So, we learned a lot about troubleshooting systems we weren't familiar with, and finding support forums and creative solutions. In terms of code, we had rarely worked in **Swift**, and never worked with the **Mapbox SDK**, so learning how to adapt to a new SDK and integrate it while not knowing everything about the errors appearing was a huge learning experience. This involved working with .netrc files and permissions, and gave us insight into the computer networking aspects of the project as well as the coding.
We also learned how to adapt to an audience, going through many drafts of the UI before hitting on one we thought would appeal to college students. Last, we learned that what we heard in the opening ceremony, about the importance of passion for the code, is true. We have all personally experienced the feeling of being unsafe on campus. We feel we understand how difficult it can be for women and minorities on campus to feel at ease, given the culture of sexual predation against women and the administration's blind eye. We put those emotions into the app, and we found that our shared experience as a group made us feel really connected to the project. Because we invested so much, the other things that we learned sunk in deep.

## What's next for Viva: an iOS app to map dangerous areas on college campuses
* A stretch goal or next step would be to use the **Adafruit Bluefruit** device to create wearable hardware that, when tapped, reports danger to the app. This would allow users to easily report danger with the hardware, without opening the app, and open up other safety features in the future.
* We conducted a survey of college students, and 95.65% of respondents thought our app would be an effective way to keep themselves safe on campus. Many of them also requested a way to connect with other survivors or other people who have felt unsafe on campus. One responder suggested we add **"ways to stay calm and remind you that nothing's your fault"**. So, another next step would be to add forums and messaging for users, to further our goal of connecting survivors through the platform.
## Inspiration Our project is driven by a deep-seated commitment to address the escalating issues of hate speech and crime in the digital realm. We recognized that technology holds immense potential to play a pivotal role in combating these societal challenges and nurturing a sense of community and safety. ## What It Does Our platform serves as a beacon of hope, empowering users to report incidents of hate speech and crime. In doing so, we have created a vibrant community of individuals wholeheartedly devoted to eradicating such toxic behaviors. Users can not only report but also engage with the reported incidents through posts, reactions, and comments, thereby fostering awareness and strengthening the bonds of solidarity among users. Furthermore, our platform features an AI chatbot that simplifies and enhances the reporting process, ensuring accessibility and ease of use. ## How We Built It The foundation of our platform is a fusion of cutting-edge front-end and back-end technologies. The user interface came to life through the MERN stack, ensuring an engaging and user-friendly experience. The backend infrastructure, meanwhile, was meticulously crafted using Node.js, providing robust support for our APIs and server-side operations. To house the wealth of user-generated content, we harnessed the prowess of MongoDB, a NoSQL database. Authentication and user data privacy were fortified through the seamless integration of Auth0, a rock-solid authentication solution. ## Challenges We Ran Into Our journey was not without its trials. Securing the platform, effective content moderation, and the development of a user-friendly AI chatbot presented formidable challenges. However, with unwavering dedication and substantial effort, we overcame these obstacles, emerging stronger and more resilient, ready to tackle any adversity. ## Accomplishments That We're Proud Of Our proudest accomplishment is the creation of a platform that emboldens individuals to stand up against hate speech and crime. Our achievement is rooted in the nurturing of a safe and supportive digital environment where users come together to share their experiences, ultimately challenging and combatting hatred head-on. ## What We Learned The journey was not just about development; it was a profound learning experience. We gained valuable insights into the vast potential of technology as a force for social good. User privacy, effective content moderation, and the vital role of community-building have all come to the forefront of our understanding, enhancing our commitment to addressing these critical issues. ## What's Next for JustIT The future holds exciting prospects for JustIT. We envision expanding our platform's reach and impact. Plans are underway to enhance the AI chatbot's capabilities, streamline the reporting process, and implement more robust content moderation techniques. Our ultimate aspiration is to create a digital space that is inclusive, empathetic, and, above all, safe for everyone.
partial
## Inspiration 💡 *An address is a person's identity.* In California, there are over 1.2 million vacant homes, yet more than 150,000 people (the homeless population in California, 2019) don't have access to a stable address. Without an address, people lose access to government benefits (welfare, food stamps), healthcare, banks, jobs, and more. As the housing crisis continues to escalate through COVID-19, the lack of an address significantly reduces the support available to escape homelessness. ## This is Paper Homes: Connecting you with spaces so you can go places. 📃🏠 Paper Homes is a web application designed for individuals experiencing homelessness to get matched with an address donated by a property owner. **Part 1: Donating an address** Housing associations, real estate companies, and private donors will be our main sources of address donations. As a donor, you can sign up to donate addresses either manually or via CSV, and later view the addresses you donated and the individuals matched with them in a dashboard. **Part 2: Receiving an address** To mitigate security concerns and provide more accessible resources, Paper Homes will be partnering with California homeless shelters under the "Paper Homes" program. We will communicate with shelter staff to help facilitate the matching process and ensure operations run smoothly. When signing up, a homeless individual can provide ID; if they don't have any form of ID, we facilitate the entire process of getting them one, with pre-filled application forms. Afterwards, they immediately get matched with a donated address! They can then access a dashboard with any needed documents (e.g. applications for a birth certificate, SSN, or California ID card, and registering an address with the government - all of which are free in California). During onboarding they can also set up mail forwarding ($1/year, funded by NPO grants and donations) to the homeless shelter they are associated with. Note: We are solely providing addresses for people, not a place to live. Addresses will expire in 6 months to ensure our database is up to date with in-use addresses as well as mail forwarding; however, people can choose to renew their addresses every 6 months as needed. ## How we built it 🧰 **Backend** We built the backend in Node.js, writing the routes with the Express.js framework and connecting to our Firestore database. We used Selenium and PDF-editing packages to let users download filled-out PDF forms; Selenium was also used to apply for documents on behalf of the users. **Frontend** We built a Node.js webpage to demo our Paper Homes platform, using React.js, HTML, and CSS. The platform is made up of 2 main parts, the donor's side and the recipient's side. The front end includes a login/signup flow that populates and updates our Firestore database. Each side has its own dashboard. The donor side allows the user to add properties to donate and manage their properties (i.e., mark a property as no longer vacant, see if the address is in use, etc.). The recipient's side shows the address provided to the user, steps to get any missing IDs, etc. ## Challenges we ran into 😤 There were a lot of non-technical challenges we ran into. Getting all the correct information into the website was challenging, as the information we needed was spread out across the internet. In addition, it was the group's first time using Firebase, so we had some struggles getting that all set up and running.
Also, some of our group members were relatively new to React, so it was a learning curve to understand the workflow, routing, and front-end design. ## Accomplishments & what we learned 🏆 In just one weekend, we got a functional prototype of what the platform would look like. We have functional user flows for both donors and recipients that are fleshed out with good UI. The team learned a great deal about building web applications along with using Firebase and React! ## What's next for Paper Homes 💭 Since our prototype is geared towards residents of California, the next step is to expand to other states! As each state has its own laws on how it hands out ID and government benefits, there is still a lot of work ahead for Paper Homes! ## Ethics ⚖ In California alone, there are over 150,000 people experiencing homelessness. Without proper identification, these people find it significantly harder to find employment, receive government benefits, or even vote. The biggest hurdle is that many of these services are linked to an address, and since they do not have a permanent address where they can receive mail, they are locked out of these essential services. We believe it is ethically wrong for us as a society not to act against the hole that US government systems have put in place, one that makes it almost impossible to escape homelessness. And this is not a small problem. An address is no longer just a location - it's now a de facto means of identification. If a person becomes homeless, they are cut off from the basic services they need to recover. People experiencing homelessness also encounter other difficulties. Getting your first piece of ID is notoriously hard because most IDs require an existing form of ID. In California, there are new laws to help with this problem, but they are not yet widely known. While these laws do reduce the barriers to getting an ID, without knowing the processes, having the right forms, and getting the right signatures from the right people, it can take over 2 years to get an ID. Paper Homes attempts to solve these problems by providing a method for people to obtain essential pieces of ID, along with allowing people to receive a proxy address to use. As of the 2018 census, there are 1.2 million vacant houses in California. Our platform allows donors with vacant properties to let people experiencing homelessness put down their address to receive government benefits and other necessities that we take for granted. With the donated address, we set up mail forwarding with USPS to forward their mail from this donated address to a homeless shelter near them. With proper identification and a permanent address, people experiencing homelessness can now vote, apply for government benefits, and apply for jobs, greatly increasing their chance of finding stability and recovering from this period of instability. Paper Homes unlocks access to the services needed to recover from homelessness. They will be able to open a bank account, receive mail, see a doctor, use libraries, get benefits, and apply for jobs. However, we recognize the need to protect a person's data and acknowledge that the use of an online platform makes this difficult. Additionally, while over 80% of people experiencing homelessness have access to a smartphone, access to this platform is still somewhat limited. Nevertheless, we believe that a free and highly effective platform could bring a large amount of benefit.
So long as we prioritize the needs of people experiencing homelessness first, we will be able to help them greatly rather than harm them. There are some ethical considerations that still need to be explored: We must ensure that each user's information security and confidentiality are of the highest importance. Given that we will be storing sensitive and confidential information about the user's identity, this is top of mind. Without it, the benefit that our platform provides is offset by the damage to their security. Therefore, we will keep user data 100% confidential when receiving and storing it, using hashing techniques, encryption, etc. Secondly, as mentioned previously, while this will unlock access to services needed to recover from homelessness, some segments of the overall population will not be able to access these services due to limited access to the internet. While we have currently focused the product on California, US, where internet access is relatively high (80% of people facing homelessness have access to a smartphone, and free wifi is common), there are other states and countries where access is limited. In addition to the ideas mentioned above, some next steps would be to design a proper user and donor consent form and agreement that both supports users' rights and removes any concern about the confidentiality of the data. Our goal is to provide the means for people facing homelessness to receive the resources they need to recover, and we thus should be as transparent as possible. ## Sources [1](https://www.cnet.com/news/homeless-not-phoneless-askizzy-app-saving-societys-forgotten-smartphone-tech-users/#:%7E:text=%22Ninety%2Dfive%20percent%20of%20people,have%20smartphones%2C%22%20said%20Spriggs) [2](https://calmatters.org/explainers/californias-homelessness-crisis-explained/) [3](https://calmatters.org/housing/2020/03/vacancy-fines-california-housing-crisis-homeless/)
## Inspiration 1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at greater risk. As the mental health epidemic surges and support is at capacity, we sought to build something that connects trained volunteer companions with people in distress in several convenient ways. ## What it does Vulnerable individuals are able to call or text any available trained volunteer during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device, to increase accessibility and comfort. ## How I built it Using Figma, we designed the front end and exported the frame into React, using Acovode for back-end development. ## Challenges I ran into Setting up Firebase to connect to the front-end React app. ## Accomplishments that I'm proud of Proud of the final look of the app/site with its clean, minimalistic design. ## What I learned The need for mental health accessibility is essential but still unmet despite all the recent efforts. Using Figma and Firebase, and trying out many open-source platforms to build apps. ## What's next for HearMeOut We hope to increase the chatbot's support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
## Inspiration With an increasing number of homeless people on the street, there are more and more people who need food, water, and shelter. Our hope was to create something that helps organizations and charities find and help homeless people more efficiently. ## What it does When users spot a homeless person, they simply press the button and enter a brief description of the person. The app then marks that location, which an organization can use to locate and help the person more efficiently. ## How I built it We used Java in Android Studio, utilizing the Google Maps API and Firebase. ## Challenges I ran into We had never used the Google Maps API or Firebase before, so we had some problems configuring them, and it took a while to learn how to read data from Firebase. ## Accomplishments that I'm proud of Having never used either Firebase or Google Maps, I'm proud of our ability to learn, debug, and solve problems as they arose. ## What I learned How to integrate Google Maps and Firebase in Android Studio. ## What's next for Homely * Have separate general-user and organization apps for better usability * Alert the user when the person they marked has been helped * Integrate Google Directions so that you can select a marker and be led there
winning
## Inspiration Carscendo's mission is to empower non-musical people to be creators, with a new medium for music composition. We aim to revolutionise **accessibility and fun in music creation**. *From real-world users:* **1. Learning music isn't fun, getting good takes >10,000 hours:** Our parents forced 2 members of our team to learn a musical instrument as kids. We hated it. It simply wasn't fun. It took us a long time to reach a level high enough to compose music. We wanted to make music composition more accessible and fun for **absolute beginners** and people who **wouldn't normally associate music/art with themselves**. **2. Most people can drive, few can play an instrument**: The majority of adults are able to drive a vehicle, while only those who went through childhood toil can play an instrument – we saw some potential there. Can we use what people already know? To make music? To learn about music? To create? Music/playing an instrument is seen as an elitist hobby, and many adults stay away from it – we want to break that assumption. **3. Dyslexic, Deaf, Blind, Paraplegia – Music Accessibility:** How can the deaf hear music? How can the dyslexic and blind read sheet music? How can the physically paralysed (paraplegia) play an instrument? We wanted to design a new form of music information, a new medium of sense to convey what these groups of people lack. Inspiration from a [Taste Visualization scene](https://www.youtube.com/watch?v=xizttM_Cbuc) from the movie "Ratatouille". ## Mission Statement: **Carscendo's mission is to provide a passive music compositional platform through the innovative medium of a multiplayer VR racing experience.** #### What exactly does that entail? Through a combination of environmental and inter-vehicular interactions, the way a player races is reflected through changes in 3 core musical elements: emphasis, tone, and rhythm. By playing Carscendo, users who may consider themselves not musically inclined, and who are discouraged by conventional composition systems, are able to passively develop an intuition for the core principles of music composition while enjoying a multiplayer racing experience. #### Why a racing game? The car racing aspects not only provide an interface that people of most ages and backgrounds are familiar with as a concept (driving a car), but also allow non-musicians to derive enjoyment from practice even if their musical skills are poor. By giving players this alternative outlet for enjoyment, they are encouraged to play for long enough to develop musical intuition without even realizing it. ## Considerations **1. Carscendo's capabilities grow with the player's experience level as they play the game** As players improve their skills in the game, they gain greater agency over the music their 'drive' produces, meaning that new and experienced players alike have something foundational to enjoy together. **2. Carscendo should serve as a fun experience for anyone to play and learn from - regardless of physical ability** While the mainstream VR gaming industry is still in its early stages, we think it crucial to set the standards for inclusivity higher than they have been for conventional games. Many games assume a high level of physical and cognitive ability, so players who may have impairments in either of those two areas either have to exert considerably more effort than a fully abled player, miss out on many facets of the experience, or not be able to play the game at all.
In the design of Carscendo, we implemented the gameplay mechanics in as inclusive a way as possible. Some of these specific considerations include: * visual elements paired with any audio feedback for hard-of-hearing or deaf players * controls that require low levels of digit dexterity for players with tremors or joint issues * a UI system based on environment interactions rather than text, for those who struggle with reading, and so it can be easily understood regardless of language. ## What it does *How we improve accessibility and empower creativity & fun:* **1. A more accessible and fun medium of music composition, built into the racing game mechanics:** * **Car selection** chooses a **background track.** * Driving at **different speeds** changes the **rhythm and volume** of the background track. * Collecting **power-ups** (like Mario Kart) on the road **adds melodies and tunes**. * A **looping** feature allows **remixing and deeper composing.** * A **record** feature allows **exporting of creations**. * **Multiplayer** allows **collaboration** between experienced/inexperienced friends. Driving is the mechanism, the proof of concept we are demonstrating – that music can be created in more modern ways with current technology than the old strings and drums. In the future, we want to explore other mediums/methods/mechanisms to make more people creators. **2. Dyslexic, Deaf, Blind, Paraplegia – Music Accessibility:** * **Deaf**: we represent music through stunning audio visualisations * **Dyslexic**: we represent music notes through patterns and shapes (easier to read) * **Blind**: we use haptic feedback and vibrations as a sensory aid * **Physically paralysed**: we use Oculus Quest 2 eye tracking to allow driving interactions ## How we built it * **VR**: Oculus Quest, Unity3D, C# * **3D Modelling & Animation:** Autodesk Maya, Adobe Photoshop, Unity3D, Blender * **UX & UI:** Figma, Unity2D, Unity3D * **Graphic Design**: Adobe Photoshop, Procreate, Adobe Illustrator * **Audio Visualisation:** Frequency analysis, Fast Fourier Transform, Koch Fractals, Audio Spectrum Analysis * **Android version**: CAD for a 3D-printable steering wheel with Android phone housing ## Challenges we ran into Challenges: * VR interaction physics are really difficult to write from the ground up. * Off-the-shelf assets vary in quality. They can be either low- or high-poly, and the buyer has no way of knowing beforehand. Placing market assets into the scene tanked our performance, and we had to find other ways to make our scene pretty. * The design of an audio visualisation is just as important as its functionality. * A feature's performance cost can decide whether it can even be included in the final solution. We made 3 audio visualisations (for the background track, for power-ups, and for in-car); while running all of them on a computer was fine, it was too heavy to include more than one on the headset. ## Accomplishments that we're proud of **For the scope of this hackathon, our goal was to establish the groundwork for how an unconventional medium could manipulate music while simultaneously standing alone as a racing simulator that users would enjoy playing.** We knew that creating a game of any kind in 36 hours - let alone one with extremely innovative functionality - was an extremely ambitious task. The combination of virtual reality, real-time audio visualisation, and reactive audio based on user interactions was a monumental goal.
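The frequency-analysis step listed under Audio Visualisation above is easy to illustrate outside Unity. Below is a minimal Python sketch, not Carscendo's actual C# implementation; the sample rate, window, and band count are illustrative assumptions:

```
import numpy as np

def fft_bands(samples, sample_rate=44100, n_bands=8):
    """Average spectral magnitude in log-spaced bands, for driving visuals."""
    window = np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(samples * window))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Logarithmic band edges roughly match how we perceive pitch
    edges = np.logspace(np.log10(20.0), np.log10(sample_rate / 2.0), n_bands + 1)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        bins = spectrum[(freqs >= lo) & (freqs < hi)]
        bands.append(bins.mean() if bins.size else 0.0)
    return bands
```

Band energies like these are what would typically drive the size or brightness of visual elements from frame to frame.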
## What we learned Each member of the team utilised challenging technology, and as a result learned a lot about Unity during the last 36 hours! We learned how to design, build, and test a full racing simulator and audio visualisation system in Unity, and how to build 3D models and UI elements in VR. This project really helped us better understand many of the capabilities within Oculus, and how to utilise signal processing to create a new form of expression in a gamified setting. We learned so much through this project and from each other, and had a really great time working as a team! ## What's next for Carscendo From here, some things we want to build on in the future include: * Multiplayer Game Mode Online * Player Map development capabilities * More interactive musical control elements * Environment interactions as the basis for every menu action * Real-world driving - song records * Other unconventional music platforms (how might other common tasks, like cooking, be used to help people learn new things?)
## Inspiration We really enjoy Virtual Reality and wanted to work with the Oculus DK2. Combining this with our love for music, we thought it would be interesting and fun to find a way to visualize music in a three-dimensional context. ## What it does HearVR is a program that combines Machine Learning with Virtual Reality to create an environment where SoundCloud music files can be played and visualized through a frequency spectrum and user-created comments. Music files and corresponding comments are gathered from SoundCloud, and a sentiment analysis is performed on the comments. The music files are then played in a virtual three-dimensional environment where each song has a corresponding frequency spectrum and a stream of comments that are color-coordinated to represent how positive or negative each is. The user can traverse the virtual space to explore different songs. ## How we built it We wrote a Python script that uses multiprocessing to concurrently download and process SoundCloud files as it retrieves SoundCloud comments and communicates with an Azure web service that runs a sentiment analysis on the comments, giving each a score based on how positive or negative the comment is. Each music file's comments, the comments' scores, and additional information about the comments are written to a comma-separated values (CSV) file. The CSV file for each song is accessed by Unity, which uses C# (with Visual Studio as an IDE) to parse the CSV and create the corresponding frequency spectrum and comments. We then designed the virtual environment in Unity to produce a visual layout that displays the frequency spectrum and streams the comments according to their timestamp on SoundCloud for each song. ## Challenges we ran into Since we were working with VR, the technology we were using was very immature. We initially started out with Unreal Engine for our project - however, we quickly found that Unreal's audio engine was buggy and unreliable. After too many hours, we switched to Unity, a tool which none of us had worked with. Unity was a huge learning curve, but we pushed through. However, we had more issues - Unity can't decode MP3s, so our plan to stream from SoundCloud fell through. Instead, we did some trickery using Python to preprocess the MP3s into WAV files before feeding them to Unity. On the backend, we struggled through Microsoft Azure, which was also new technology to us. ## Accomplishments that we're proud of We're really proud of combining our interests in Machine Learning and Virtual Reality together to create a unique program that enhances the experience of listening to music. ## What we learned We learned new technologies such as Microsoft Azure and Unity, and how to integrate these different technologies together. Additionally, we learned a new language in C#. ## What's next for HearVR We would like to make HearVR a world that generates music automatically based on the person's preferences and the songs they have listened to. In addition, we would like to make this a networked experience, so multiple people can listen in on the same session.
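For illustration, here is a minimal sketch of the comment-scoring pipeline described above. The `azure_sentiment` stub stands in for the Azure web-service call (a hypothetical name, not the project's actual function), and the comment list is assumed to already be fetched from SoundCloud:

```
import csv
from multiprocessing import Pool

def azure_sentiment(text):
    # Placeholder: the real pipeline posts the comment text to an Azure
    # web service and gets back a positivity score; a neutral stub here.
    return 0.5

def score_comment(comment):
    ts, text = comment                      # (timestamp_ms, comment text)
    return ts, text, azure_sentiment(text)

def build_song_csv(comments, out_path="comments.csv"):
    # Score comments concurrently and write one CSV row per comment
    with Pool() as pool, open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_ms", "comment", "sentiment"])
        writer.writerows(pool.map(score_comment, comments))
```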
## Inspiration Online typing races like Nitro Type and Type Racer can be fun activities with a group of friends or simply when procrastinating on work. The challenge of performing better than others makes players completely forget that they're actually improving their typing ability. Imagine if learning piano could be the same! By going head-to-head against other people, the player is motivated to put their all into every song they play. Additionally, instead of dreading having to continue practicing the piano, this feels like playing one more round of a game! ## What it does We turn sight reading into a game! If you keep getting notes right, your plane will fly past all the competitors who are playing the same piece of music. We track a player's accuracy and longest streak. It can be played either with a computer keyboard or a piano keyboard hooked up by MIDI cable. ## How we built it We used Unity 2D and Unity UI elements to create the game interface, and Mirror was used for the multiplayer connection. Additionally, we used Krita to create the staff objects. Oh, and tons of C#. ## Challenges we ran into **Accuracy measures** - We had lots of trouble determining how to calculate the accuracy. If someone hits *only* wrong keys, how do we penalize them for that? What if they hold the note too long, or not long enough? There were also some issues converting from beats to seconds, as well as rounding errors when splitting the time spans into segments. (One possible scoring approach is sketched at the end of this entry.) ## Accomplishments that we're proud of We practically made a piano keyboard emulator from scratch! From making the note objects and staff, to logging whether a key is being pressed for the correct number of beats in comparison to the note shown on screen. We had to dive deep into the MIDI file type and create classes to decompose all of the data we needed. Multiplayer! We're quite glad that we managed to get the multiplayer aspect working, especially given that the PvP aspect is what makes the idea different from any other piano-learning concept we could find on the internet. The final result also ended up looking quite clean visually! We really enjoyed coding the clouds to randomly drift across the scene, the plane to slightly tilt during its flight path, and the camera to smoothly track the local player's movements. ## What we learned 2 of our 4 members had no prior Unity or C# experience, and now they have new, great tools added to their tool belts. None of us had ever worked with sound files before, so it was cool to work with that for the first time. Additionally, three of our members had never worked with multiplayer connections. Lastly, it was three of our members' first hackathon: they learned the invigorating, sleep-defying abilities that come with having a project we're all excited about and a 36-hour deadline. ## What's next for Notes Besides adding more variety to our songs, here are some of the additions we would make given more time: **Data Visualization**: There are soooo many metrics we can keep track of for a user. For example dynamics, most commonly failed notes, chords, etc. Collecting these metrics and providing a snappy UI would allow users to visualize their progress over weeks, months, and more. We all love seeing ourselves improve, and numbers provide the best feedback 🙂. **A new game mode**: This mode would be focused on learning a specific piece of music instead of sight reading. In this mode, the music would stop until the player gets a key correct.
Players would be a little less worried about accuracy and rhythm, and more about finishing the song first, while still in tempo, of course. **Class/Group Integration**: The skill of sight reading is useful for pianists at all levels of experience, and teachers/groups would benefit from having a fun tool to use in order to push themselves. **Cordless Detection**: The ability to detect the sounds of the piano without any cords. It would make it much easier to just load up the game and play. **Skill Leagues**: Players could be ranked into leagues based on their historical performance and then matched into lobbies with people near their skill level.
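Picking up the accuracy question from the challenges above, here is one way the beat-to-seconds conversion and overlap scoring could look. This is a hedged Python sketch rather than the game's actual C# rule; the penalty for time held outside the note's window is an assumption:

```
def seconds_per_beat(bpm: float) -> float:
    return 60.0 / bpm

def note_accuracy(expected_start, expected_beats, pressed_start, pressed_end, bpm):
    """Score one note as the fraction of its window the key was held,
    penalising time held outside the window (too early, late, or long)."""
    spb = seconds_per_beat(bpm)
    expected_end = expected_start + expected_beats * spb
    overlap = max(0.0, min(expected_end, pressed_end) - max(expected_start, pressed_start))
    excess = (pressed_end - pressed_start) - overlap
    duration = expected_end - expected_start
    return max(0.0, (overlap - excess) / duration)
```

A press that never touches the expected window scores 0, which is one answer to the "only wrong keys" question; holding exactly the right span scores 1.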
partial
## Inspiration We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept to making more sustainable purchases online. ## What it does Recommends sustainable and local business alternatives when shopping online. ## How we built it The front end was built with React.js and Bootstrap. The back end was built with Python, Flask, and CockroachDB. ## Challenges we ran into Difficulties setting up the environment across the team, especially with cross-platform development on the back end. Extracting the current URL from a webpage was also challenging. ## Accomplishments that we're proud of Creating a working product! A successful end-to-end data pipeline. ## What we learned We learned how to implement a Chrome extension. We also learned how to deploy to Heroku, and how to set up and use a database in CockroachDB. ## What's next for Conscious Consumer First, we want to make it easier to add local businesses. We also want to continue improving the relational algorithm that takes an item on a website and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data more easily.
## Inspiration How many clicks does it take to upload a file to Google Drive? TEN CLICKS. How many clicks does it take for PUT? **TWO** **(that's 1/5th the number of clicks)**. ## What it does Like the name, PUT is just as clean and concise. PUT is a storage universe designed for maximum upload efficiency, reliability, and security. Users can simply open our Chrome sidebar extension and drag files into it, or just click on any image and tap "upload". Our AI algorithm analyzes the file content and organizes files into appropriate folders. Users can easily access, share, and manage their files through our dashboard, Chrome extension, or CLI. ## How we built it We used the TUS protocol for secure and reliable file uploads, Cloudflare Workers for AI content analysis and sorting, React and Next.js for the dashboard and Chrome extension, Python for the back-end, and Terraform to let anyone deploy the Workers and S3 bucket used by the app to their own account. ## Challenges we ran into TUS. Let's preface this by saying that one of us spent the first 18 hours of the hackathon on a Golang backend, then had to throw the code away due to a TUS protocol incompatibility. TUS, Cloudflare's AI suite, and Chrome extension development were completely new to us, and we ran into many difficulties implementing and combining these technologies. ## Accomplishments that we're proud of We managed to take 36 hours and craft them into a product that each and every one of us would genuinely use. We actually received 30 downloads of the CLI from people interested in it. ## What's next for PUT If given more time, we would make our platforms more interactive by utilizing AI and faster client-server communications.
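For reference, this is roughly what a resumable TUS upload looks like from Python using the community `tuspy` client. It sketches the generic client flow, not PUT's own code, and the endpoint URL is a public demo server, not PUT's backend:

```
from tusclient import client

# Public TUS demo endpoint; PUT's actual server URL would go here
tus = client.TusClient("https://tusd.tusdemo.net/files/")

# Uploads happen in chunks; an interrupted transfer can resume mid-file,
# which is what makes the protocol reliable over flaky connections
uploader = tus.uploader("report.pdf", chunk_size=2 * 1024 * 1024,
                        metadata={"filename": "report.pdf"})
uploader.upload()
print("stored at:", uploader.url)
```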
## Inspiration I got to talking with one of the participants at the pre-hackathon mixer and became intrigued about playing with commoditizing and regulating social reform. Twitter and Facebook have, unfortunately, commoditized social interaction in a sense; the Steam gaming community has commoditized gaming with its hugely expansive and current library; etc. I figured it's time to allow individuals to plot and strategize for themselves in a world where political leaders are irresponsibly pinning them between the walls! ## What it does It enters data based on three metrics: time, in years spent on your agenda; progress, on a scale of 1 to 10, 10 being complete; and expense, in tens of dollars, negative if your campaign or journey is running a revenue deficit and positive if you are generating profit. ## How I built it I loved the topological computation available through Wolfram Mathematica and Wolfram|One software, which really enabled me to see how an individual could relate their visual progress to the ideal progress of humanity, represented as a sphere whose surface area is in "progress-years-dollars per average generation globally". ## Challenges I ran into Working with the iOS SDK. There were all sorts of little nuances that kept me out of the loop in terms of fast-forwarding my code, such as the selections in Xcode and the ever-changing integration of source files with iOS storyboards. ## Accomplishments that I'm proud of When I ran my simulator to the point that the flow was smooth and desirable, I was pleased. Feeling in control of the simulator gave me a sense of direction in SDK development for iOS in the Swift programming language. I aim to enable someone with the ability to use the app to strategize effectively based on their self-appraised socio-economic ranking. ## What I learned I learned a bit about what would work and what wouldn't work in pitching a project idea. :) ## What's next for Corroborator It will remain a hobby project of mine for some time, and I hope individuals may benefit.
winning
## Inspiration We are very interested in the intersection of the financial sector and engineering, and wanted to find a way to speed up computation time in relation to option pricing through simulations. ## What it does Bellcrve is a distributed computing network that runs Monte Carlo simulations on financial instruments to perform option pricing. Using Wolfram as its power source, we are able to converge very fast. The idea was to showcase how much faster these computations become when we distribute them across a network of 10 machines, with up to 10,000 simulations running. ## How we built it We spun up 10 Virtual Machines on DigitalOcean, set up 8 as worker nodes, 1 node as the Master, and 1 as the Scheduler to distribute the simulations across the nodes as they became free. We implemented our model using a Monte Carlo simulation that takes advantage of Geometric Brownian Motion (GBM) and the Black-Scholes model. GBM is responsible for modeling the asset's price path over the course of the simulation. We start the simulation at the stock's current price and observe how it changes as the number of steps increases. The Black-Scholes model is responsible for computing the theoretical option price based on volatility and time decay. We observed how our simulation converges between the GBM and Black-Scholes models as the number of steps and iterations increases, effectively giving us a low error rate. We developed it using Wolfram, Python, Flask, Dask, React, Next.JS, and D3.JS. Wolfram and Python are responsible for most of the Monte Carlo simulations as well as the backend API and websocket. We used Dask to help manage our distributed network, connecting us to our VMs on DigitalOcean. And we used React and Next.JS to build out the web app, visualizing all charts in real time with D3.JS. Wolfram was crucial to our application being able to converge faster, proving that distributing the simulations will help save resources and speed up simulation times. We packaged up the math behind the Monte Carlo simulation and deployed it to PyPI for others to use. ## Challenges we ran into We had many challenges along the way, across all fronts. First, we had issues with the websocket trying to connect to our client side, and found out it was due to WSS issues. We then ran into some CORS errors that we were able to sort out. Our formulas kept evolving as we made progress on our application, and we had to account for this change. We realized we needed a different metric from the model and needed to shift in that direction. Setting up the cluster of machines was challenging and took some time to dig into. ## Accomplishments that we're proud of We are proud to say we pushed a completed application and deployed it to Vercel. Our application allows users to simulate different stocks, price their options in real time, and observe how the estimate converges for different numbers of simulations. ## What we learned We learned a lot about websockets, creating real-time visualizations, and having our project depend on the math. This was our first time using Wolfram for a project, and we really enjoyed working with it. We have used similar languages like MATLAB and Python, but we found Wolfram helped us speed up our computations significantly. ## What's next for Lambda Labs We hope to continue to improve our application and bring this to different areas of the financial sector, not just options pricing.
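The pricing math described above fits in a few lines. Below is a single-machine Python sketch with illustrative parameters, which omits the Dask distribution and Wolfram layers: it prices a European call from GBM terminal prices and compares the estimate to the Black-Scholes closed form:

```
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative inputs
n_paths, n_steps = 10_000, 252

# GBM price paths via the exact log-Euler discretisation
dt = T / n_steps
z = np.random.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
S_T = S0 * np.exp(log_paths[:, -1])                   # terminal prices

# Discounted average payoff = Monte Carlo call price
mc_price = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

# Black-Scholes closed form for the same call
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(f"MC: {mc_price:.3f}  BS: {bs_price:.3f}  error: {abs(mc_price - bs_price):.3f}")
```

As the description notes, increasing `n_paths` and `n_steps` drives the Monte Carlo estimate toward the closed-form price, which is exactly the convergence the distributed version accelerates.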
## Inspiration In traditional finance, banks often swap cash flows from their assets for a fixed period of time. They do this because they want to hold onto their assets long-term, but believe their counterparty's assets will outperform their own in the short term. We decided to port this over to DeFi, specifically Uniswap. ## What it does Our platform allows for the lending and renting of Uniswap v3 liquidity positions. Liquidity providers can lend out their positions for a short amount of time to renters, who are able to collect fees from the position for the duration of the rental. Lenders are able to both hold their positions long term AND receive short-term cash flow in the form of a lump sum of ETH, which is paid upfront by the renter. Our platform handles the listing, selling, and transferring of these NFTs, and uses a smart contract to encode the lease agreements. ## How we built it We used Solidity and Hardhat to develop and deploy the smart contract to the Rinkeby testnet. The frontend was done using web3.js and Angular. ## Challenges we ran into It was very difficult to lower our gas fees. We had to condense our smart contract and optimize our backend code for memory efficiency. Debugging was difficult as well, because EVM error messages are less than clear. In order to test our code, we had to figure out how to deploy our contracts successfully, as well as how to interface with existing contracts on the network. This proved to be very challenging. ## Accomplishments that we're proud of We are proud that in the end, after 16 hours of coding, we created a working application with a functional end-to-end full-stack renting experience. We allow users to connect their MetaMask wallet, list their assets for rent, remove unrented listings, rent assets from others, and collect fees from rented assets. To achieve this, we had to power through many bugs and unclear docs. ## What we learned We learned that Solidity is very hard. No wonder blockchain developers are in high demand. ## What's next for UniLend We hope to use funding from the Uniswap grants to accelerate product development and add more features in the future. These features would allow liquidity providers to swap yields from liquidity positions directly, in addition to our current model of liquidity for lump sums of ETH, as well as a bidding system where listings can become auctions and lenders rent their liquidity to the highest bidder. We want to add different variable-yield assets to the renting platform. We also want to further optimize our code and increase security so that we can eventually go live on Ethereum Mainnet. We also want to map NFTs to real-world assets and enable the swapping and lending of those assets on our platform.
## Inspiration: The team was inspired by bats, who use echolocation to "see" the world. Bats rely on sound to map and navigate the world around them; we are trying to use technology to enable blind people to do a similar thing. ## What it does: Echolocation uses distance sensors and spatial audio to map the space it is used in. The user's proximity to an object determines the volume of that location in the spatial audio: the closer the user is to an object, the louder that location sounds. ## How we built it: Echolocation was built using Python, Arduino, C++, and MATLAB. MATLAB was used to transform the distances received from the distance sensors into spatial audio. Arduino was used to convert the analog signals to digital signals for the MATLAB code. C++ was used to program and interpret the input from the distance sensors on the Arduino. ## Challenges we ran into: Some of the challenges we ran into are: * Figuring out how to transform the audio played into spatial audio * Transforming the distance into the signals for the audio * Transforming the data received from the distance sensors into the appropriate signal to be sent to the MATLAB code * Having a compact design * Adapting to the range of distance sensed by the distance sensors ## Accomplishments that we're proud of: * Transforming the audio into spatial audio * Mapping sound to originate from any point in 3D space * Getting the project to work to some degree * Creating a hardware-software pipeline including Arduino, MATLAB, and Python ## What we learned: * How to work in a team * How to utilize the strengths of each team member * How to implement real-time audio signal processing ## What's next for Echolocation: The sensors used for Echolocation are very limited and do not map a 3D space. To improve on this technology, a more detailed sensor system producing a point-cloud environment would need to be used, and the program would be modified to act as a more accurate set of eyes. Furthermore, we could implement AI and machine learning to train models that recognize not only the surroundings but also individual objects.
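The distance-to-volume mapping described above can be sketched in a few lines. This is a Python illustration (the project itself did this step in MATLAB) using constant-power panning; the maximum range and the panning rule are assumptions for the sketch, not the project's calibrated values:

```
import math

def spatial_gains(distance_m: float, bearing_deg: float,
                  max_range_m: float = 4.0) -> tuple[float, float]:
    """Closer objects are louder; bearing pans the sound left/right.
    Returns (left_gain, right_gain) using constant-power panning."""
    # Volume falls off linearly to zero at the sensor's maximum range
    loudness = max(0.0, 1.0 - min(distance_m, max_range_m) / max_range_m)
    # Map bearing to a pan position: -1 = hard left, +1 = hard right
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))
    left = loudness * math.cos((pan + 1.0) * math.pi / 4.0)
    right = loudness * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right
```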
winning
## Inspiration The inspiration behind Smart Passcode stemmed from the desire to enhance accessibility and security for visually impaired individuals, empowering them to independently access secure spaces with ease. ## What it does Seeking to devise a smart home and accessibility device that enhances access to a safe, our device "Smart Passcode" allows the visually impaired to enter the key digits by physically signing them with one hand. This convenient alternative keeps those who are unsighted from having to depend on a keypad they cannot see. ## How we built it We built Smart Passcode using a combination of machine learning techniques, Python programming for algorithm development, and microcontroller technology for hardware integration. The program pulls from OpenCV libraries, and we trained the model to recognize hand patterns and shapes that correlate with a specified number. ## Challenges we ran into One challenge we encountered was optimizing the machine learning algorithms to interpret finger movements accurately, consistently, and reliably. This was largely because of the effect that differences in lighting, camera placement, and the time between finger movements had on the learning model. Another challenge we encountered was the implementation of a Raspberry Pi to make our device and system portable. We were unable to install large libraries such as TensorFlow onto the Raspberry Pi, as it would take multiple hours. This limitation made us re-think our project idea and shift towards using an Arduino with a laptop for the image processing. ## Accomplishments that we're proud of We're proud to have developed a solution that seamlessly integrates accessibility and security, providing visually impaired individuals with a reliable and efficient means of accessing secure spaces independently. Furthermore, we are proud of developing a computer vision model and training it with over 600 images. ## What we learned Through the development of Smart Passcode, we gained valuable insights into the intersection of accessibility technology, machine learning, and hardware integration. We also deepened our understanding of the unique challenges faced by visually impaired individuals in everyday tasks. Prior to coming to MakeUofT, no group member had any experience using a Raspberry Pi; we learned from scratch how to bootstrap it. ## What's next for Smart Passcode In the future, we aim to further refine the device's functionality, explore additional security features, and expand its compatibility with various safe systems and environments. Additionally, we plan to seek feedback from users to continuously improve the device's usability and effectiveness.
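As a rough illustration of the hand-digit recognition step, here is a Python sketch that counts raised fingers from MediaPipe hand landmarks. This is a stand-in technique, not the team's pipeline: their model was custom-trained on their own 600-image dataset, and this sketch also ignores the thumb:

```
import cv2
import mediapipe as mp

TIPS = [8, 12, 16, 20]  # index, middle, ring, pinky fingertip landmark ids

cap = cv2.VideoCapture(0)
with mp.solutions.hands.Hands(max_num_hands=1) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            lm = result.multi_hand_landmarks[0].landmark
            # A fingertip above its middle joint (smaller y in image
            # coordinates) counts as a raised finger, giving a digit 0-4
            digit = sum(lm[tip].y < lm[tip - 2].y for tip in TIPS)
            print("digit:", digit)  # would feed the passcode state machine
cap.release()
```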
## Inspiration What inspired us was that we wanted to make an innovative solution which can have a big impact on people's lives. Most accessibility devices for the visually impaired are text-to-speech based, which is not ideal for people who may be both visually and auditorily impaired (such as the elderly). Putting yourself in someone else's shoes is important, and we feel that if we can give the visually impaired a helping hand, it would be an honor. ## What it does The proof of concept we built is separated into two components. The first is an image processing solution which uses OpenCV and Tesseract to act as an OCR, taking an image as input and producing text as output. This text is then used as input to the second part: a working 2-by-3 braille cell that converts any text into a braille output and then vibrates specific servo motors to represent the braille, with a half-second delay between letters. The outputs were then adapted for servo motors, which provide tactile feedback. ## How we built it We built this project using an Arduino Uno, six LEDs, six servo motors, and a Python file that does the image processing using OpenCV and Tesseract. ## Challenges we ran into Besides syntax errors, on the LED side of things there were challenges in converting the text to braille. Once that was overcome, and after some simple troubleshooting of menial errors, like type comparisons, this part of the project was completed. In terms of the image processing, getting the algorithm to properly process the text was the main challenge. ## Accomplishments that we're proud of We are proud of having completed a proof of concept, which we have isolated into two components. Consolidating these two parts is only a matter of more simple work, but these two working components are the fundamental core of the project, and we consider it the start of something revolutionary. ## What we learned We learned to iterate quickly and apply lateral thinking. Instead of being stuck in a small paradigm of thought, we learned to be more creative and find alternative solutions that we might not have initially considered. ## What's next for Helping Hand * Arrange everything in one Android app, so the product is capable of mobile use. * Develop a neural network that will throw out false text recognitions (which usually look like a few characters without any meaning). * Provide an API to connect our glove to other apps, where the user, for example, may read messages. * Consolidate the completed project components, which is to implement Bluetooth communication between a laptop processing the images, using OpenCV & Tesseract, and the Arduino Uno which actuates the servos. * Furthermore, we must design the actual glove product, implement wire management, an armband holder for the Uno with a battery pack, and position the servos.
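A sketch of the text-to-braille step described above: each letter maps to a 6-bit dot mask for the 2-by-3 cell, which could then be streamed to the Arduino over serial. The dot masks shown are the standard braille patterns for a-c; the serial port name and the one-byte wire format are assumptions, not the project's actual protocol:

```
import time
import serial  # pyserial

# Bit i of the mask = braille dot i+1 (standard letters a-c shown;
# a full alphabet would extend this table)
BRAILLE = {"a": 0b000001, "b": 0b000011, "c": 0b001001}

def send_braille(text: str, port: str = "/dev/ttyUSB0"):
    """Stream one braille cell per letter to the Arduino, which maps each
    set bit to a servo. Half-second delay between letters, as in the demo."""
    with serial.Serial(port, 9600, timeout=1) as link:
        for ch in text.lower():
            mask = BRAILLE.get(ch)
            if mask is None:
                continue  # skip characters we have no pattern for
            link.write(bytes([mask]))
            time.sleep(0.5)
```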
## Inspiration In a world in which we all have the ability to put on a VR headset and see places we've never seen, search for the questions in the back of our mind on Google and see knowledge we have never seen before, and send and receive photos we've never seen before, we wanted to provide a way for the visually impaired to also see as they have never seen before. We take for granted our ability to move around freely in the world. This inspired us to give others more freedom to do the same. We called it "GuideCam" because, like a guide dog, our application is meant to be a companion and a guide to the visually impaired. ## What it does GuideCam provides an easy-to-use interface for the visually impaired to ask questions, either through a braille keyboard on their iPhone, or through speaking out loud into a microphone. They can ask questions like "Is there a bottle in front of me?", "How far away is it?", and "Notify me if there is a bottle in front of me", and our application will talk back to them and answer their questions, or notify them when certain objects appear in front of them. ## How we built it We have Python scripts running that continuously take webcam pictures from a laptop every 2 seconds and put them into a bucket. Upon user input like "Is there a bottle in front of me?", either from braille keyboard input on the iPhone or through speech (which is processed into text using Google's Speech API), we take the last picture uploaded to the bucket and use Google's Vision API to determine if there is a bottle in the picture. Distance calculation is done using the following formula: distance = ( (known width of standard object) x (focal length of camera) ) / (width in pixels of object in picture). ## Challenges we ran into Trying to find a way to get the iPhone and a separate laptop to communicate was difficult, as was getting all the separate parts of this working together. We also had to change our ideas on what this app should do many times based on constraints. ## Accomplishments that we're proud of We are proud that we were able to learn to use Google's ML APIs, and that we were able to get both braille keyboard and voice input from the user working, as well as providing both image detection AND image distance (for our demo object). We are also proud that we were able to come up with an idea that can help people, and that we were able to work on a project that is important to us because we know that it will help people. ## What we learned We learned to use Google's ML APIs, how to create iPhone applications, how to get an iPhone and laptop to communicate information, and how to collaborate on a big project and split up the work. ## What's next for GuideCam We intend to improve the braille keyboard to include a backspace, as well as supporting simultaneous key presses to record a single letter.
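The distance formula above translates directly into code. A minimal Python sketch using the formula as stated; the calibrated focal length and the example object width are illustrative numbers:

```
def estimate_distance(known_width_cm: float, focal_length_px: float,
                      pixel_width: float) -> float:
    """distance = (known width x focal length) / (width in pixels)."""
    return known_width_cm * focal_length_px / pixel_width

# e.g. a bottle ~7 cm wide appearing 120 px wide, camera calibrated at 700 px
print(estimate_distance(7.0, 700.0, 120.0), "cm")  # ~40.8 cm
```

The focal length in pixels is usually calibrated once by photographing the same object at a known distance and inverting the formula.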
losing
## Inspiration We had a chat with the engineers at the Jump booth and brainstormed some ideas, particularly in regards to building an explorer for Wormhole cross-chain events. Additionally, since Wormhole has a limit on the amount of value allowed to be bridged in a given time (an additional protection after the hack earlier this year), we wanted to create an estimator for the likelihood that a transaction would succeed. This is important especially for cross-chain arbitrage, since transactions that are not successfully approved could become irreversibly locked up in Wormhole for 24 hours. ## What it does Our project aims to provide greater insight into underlying Wormhole events and help users estimate the likelihood of their transactions succeeding. Using a spy relay and the public Guardian APIs, we created a system that displays both a real-time tally of Guardian signatures and a success estimator for transactions that takes in a chain ID and notional value. The real-time graph displays a rolling-window live tally of the Guardians that have signed the past 1000 VAAs. This way, users can easily visualize network reliability and see that the Guardians with little to no recent transactions are probably offline. Additionally, by inputting a notional amount and chain ID, users can also estimate whether their transaction will be successful based on Guardian availability and the single-transaction notional amount. ## How we built it We started by building an MVP for the transaction success estimator and concurrently worked on getting Guardian data feeds directly from the gossip network. We ran a spy relay in a DigitalOcean droplet, which we used to provide a live VAA broadcast feed to our backend via RPC. Our backend then provides the frontend, which we built in React, with a REST endpoint to query the latest VAA signature counts on a rolling-window basis. The information required for our transaction success estimator was pulled from the public endpoints of the available Guardians. ## Challenges we ran into Cross-chain Wormhole transactions must be signed off by 13 of 19 "Guardians" to be considered valid. Thus, we needed information from all 19 Guardians to reliably predict whether or not cross-chain transactions will fail. However, only 7 of the 19 Guardians provide public APIs, so the rest needed to be collected directly from the gossip network that the Guardians broadcast to. This posed a significant challenge for us, since the current "spy" relay implementation supports only listening for VAAs (Verified Action Approvals) and not the heartbeat events that we were interested in. We compromised by using only the publicly available APIs for the transaction success estimator and supplementing them with live feeds from the VAA spy relay. ## Future We managed to get something useful (maybe?) out there for everyone using the Wormhole bridge, and we managed to sneak in another hack overnight, even while running into issues with the lack of documentation on the Wormhole and Guardian code. In the future, we plan on adding additional functionality to the network if we see people use the MVP. Things like mempool monitoring and deeper gossip-network integration could be done with more time.
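A sketch of the rolling-window tally behind the live graph: a deque holds the last 1000 VAAs and a counter tracks signatures per guardian. The input shape (each VAA arriving as a list of guardian indices that signed it) is an assumption about the spy-relay feed, not its exact schema:

```
from collections import Counter, deque

WINDOW = 1000
recent = deque(maxlen=WINDOW)   # the last WINDOW VAAs, oldest first
tally = Counter()               # guardian index -> signatures in window

def on_vaa(signer_indices):
    """Called once per VAA from the spy relay feed."""
    if len(recent) == WINDOW:
        tally.subtract(recent[0])   # the deque is full; evict the oldest VAA
    recent.append(signer_indices)
    tally.update(signer_indices)

def signature_counts():
    """What the REST endpoint serves to the React frontend."""
    return dict(tally)
```

A guardian whose count sits near zero over the whole window is the "probably offline" signal described above.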
We present a blockchain agnostic system for benchmarking smart contract execution times. To do this we designed a simple programming language capable of running small performance benchmarks. We then implemented an interpreter for that language on the Ethereum, Solana, and Polkadot blockchains in the form of a smart contract. To perform a measurement we then submit the same program to each chain and time its execution.

Deploying new smart contracts is expensive, and learning the tooling and programming languages required for their deployment is time consuming. This makes a single blockchain agnostic language appealing for developers, as it cuts down on cost and time. It also means that new blockchains can be added later and all of the existing tests easily run after the deployment of a single smart contract. You can think of this as "a JVM for performance measurements." To demonstrate how this can be used to measure non-blockchain runtimes we also implemented an interpreter on Cloudflare Workers and present some benchmarks of that. Cloudflare Workers was an order of magnitude faster than the fastest blockchain we tested.

Our results show that network and mining time dominate smart contract execution time. Despite considerable effort we were unable to find a program that notably impacted the execution time of a smart contract while remaining within smart contract execution limits. These observations suggest three things:

1. Once a smart contract developer has written a functional smart contract there is little payoff to optimizing the code for performance, as network and mining latency will dominate.
2. Smart contract developers concerned about performance should look primarily at transaction throughput and latency when choosing a platform to deploy their contracts.
3. Even blockchains like Solana which bill themselves as being high performance are much, much slower than their centralized counterparts.

### Results

We measured the performance of three programs:

1. An inefficient, recursive fibonacci number generator computing the 12th fibonacci number.
2. A program designed to "thrash the cache" by repeatedly making modifications to disparate memory locations.
3. A simple program consisting of two instructions, to measure cold start times.

In addition to running these programs on our smart contracts we also wrote a runtime on top of Cloudflare Workers as a point of comparison. Like these smart contracts, Cloudflare Workers run in geographically distributed locations and feature reasonably strict limitations on runtime resource consumption.

To compute execution time we measured the time between when the transaction to run the smart contract was sent and when it was confirmed by the blockchain. Due to budgetary constraints, our testing was done on test networks.

We understand that this is an imperfect proxy for actual code execution time. Due to determinism requirements on all of the smart contract platforms that we used, access to the system time is prohibited to smart contracts. This makes measuring actual code execution time difficult. Additionally, as smart contracts are executed and validated on multiple miners, it is not clear what a measurement of actual code execution time would even mean. This is an area that we would like to explore further given more time. In the meantime we imagine that most users of a smart contract benchmarking system care primarily about total transaction time. This is the time delay that users of their smart contracts will experience, and it is also the time that we measure.
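For the Ethereum leg, the send-to-confirmation measurement reduces to something like the following sketch (assuming web3.py; the RPC URL is a placeholder and the transaction is assumed to be already signed; the Solana and Polkadot measurements are analogous):

```python
import time
from web3 import Web3

# Placeholder testnet RPC endpoint
w3 = Web3(Web3.HTTPProvider("https://example-testnet-rpc"))

def timed_execution(raw_tx_bytes):
    # Measure wall-clock time from broadcast until the transaction is mined
    start = time.monotonic()
    tx_hash = w3.eth.send_raw_transaction(raw_tx_bytes)
    w3.eth.wait_for_transaction_receipt(tx_hash)  # blocks until confirmed
    return time.monotonic() - start
```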
![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/760/datas/original.png)

Our results showed that Solana and Polkadot significantly outperformed Ethereum, with Solana being the fastest blockchain we measured.

### Additional observations

While Solana was faster than Polkadot and Ethereum in our benchmarks, it also had the most restrictive computational limits. The plot below shows the largest fibonacci number computable on each blockchain before computational limits were exceeded. Once again we include Cloudflare Workers as a non-blockchain baseline.

![](https://challengepost-s3-challengepost.netdna-ssl.com/photos/production/software_photos/001/714/761/datas/original.png)

### The benchmarking language

To provide a unified interface for performance measurements we have designed and implemented a 17-instruction programming language called Arcesco. For each platform we then implement a runtime for Arcesco and time the execution of a standard suite of programs. Each runtime takes assembled Arcesco bytecode through stdin and prints the execution result to stdout. An example invocation might look like this:

```
cat program.bc | assembler | runtime
```

This unified runtime interface means that very different runtimes can be plugged in and run the same way. As testament to the simplicity of runtime implementations, we were able to implement five different runtimes over the course of the weekend.

Arcesco is designed as a simple stack machine which is as easy as possible to implement an interpreter for. An example Arcesco program that computes the 10th fibonacci number looks like this:

```
pi 10
call fib
exit
fib:
copy
pi 3
jlt done
copy
pi 1
sub
call fib
rot 1
pi 2
sub
call fib
add
done:
ret
```

To simplify the job of Arcesco interpreters we have written a very simple bytecode compiler for Arcesco which replaces labels with relative jumps and encodes instructions into 40-bit instructions. The entire pipeline for the above program looks like this:

```
text            | assembled     | bytecode
----------------|---------------|--------------------
                |               |
pi 10           | pi 10         | 0x010a000000
call fib        | call 2        | 0x0e02000000
exit            | exit          | 0x1100000000
fib:            |               |
copy            | copy          | 0x0200000000
pi 3            | pi 3          | 0x0103000000
jlt done        | jlt 10        | 0x0b0a000000
copy            | copy          | 0x0200000000
pi 1            | pi 1          | 0x0101000000
sub             | sub           | 0x0400000000
call fib        | call -6       | 0x0efaffffff
rot 1           | rot 1         | 0x0d01000000
pi 2            | pi 2          | 0x0102000000
sub             | sub           | 0x0400000000
call fib        | call -10      | 0x0ef6ffffff
add             | add           | 0x0300000000
done:           |               |
ret             | ret           | 0x0f00000000
                |               |
```

Each bytecode instruction is five bytes. The first byte is the instruction's opcode and the next four are its immediate. Even instructions without immediates are encoded this way to simplify instruction decoding in interpreters. We understand this to be a small performance tradeoff, but as much as possible we were optimizing for ease of interpretation.

```
 0        8                               40
+--------+-------------------------------+
| opcode | immediate                     |
+--------+-------------------------------+
```

The result of this is that an interpreter for Arcesco bytecode is just a simple while loop and switch statement (an if/elif chain in Python). Each bytecode instruction being the same size and format makes decoding instructions very simple.

```
while True:
    opcode, immediate = decode(code[pc])
    if opcode == 1:              # pi: push immediate
        stack.append(immediate)
    # ... one branch per opcode ...
    pc += 1
```

This makes it very simple to implement an interpreter for Arcesco bytecode, which is essential for smart contracts where larger programs are more expensive and less auditable.
A complete reference for the Arcesco instruction set is below.

```
opcode | instruction  | explanation
-----------------------------------
1      | pi <value>   | push immediate - pushes VALUE to the stack
2      | copy         | duplicates the value on top of the stack
3      | add          | pops two values off the stack and adds them, pushing the result back onto the stack.
4      | sub          | like add but subtracts.
5      | mul          | like add but multiplies.
6      | div          | like add but divides.
7      | mod          | like add but modulus.
8      | jump <label> | moves program execution to LABEL
9      | jeq <label>  | moves program execution to LABEL if the top two stack values are equal. Pops those values from the stack.
10     | jneq <label> | like jeq but not equal.
11     | jlt <label>  | like jeq but less than.
12     | jgt <label>  | like jeq but greater than.
13     | rot <value>  | swaps the stack item VALUE items from the top with the stack item VALUE-1 items from the top. VALUE must be >= 1.
14     | call <label> | moves program execution to LABEL and places the current PC on the runtime's call stack
15     | ret          | sets PC to the value on top of the call stack and pops that value.
16     | pop          | pops the value on top of the stack.
17     | exit         | terminates program execution. The value at the top of the stack is the program's return value.
```

### Reflections on smart contract development

Despite a lot of hype about smart contracts, we found that writing them was quite painful. Solana was far and away the most pleasant to work with, as its `solana-test-validator` program made local development easy. Solana's documentation was also approachable and centralized. The process of actually executing a Solana smart contract after it was deployed was very low level, though, and required a pretty good understanding of the entire stack before it could be done.

Ethereum comes in at a nice second. The documentation was reasonably approachable, and the sheer size of the Ethereum community meant that there was almost too much information. Unlike Solana, though, we were unable to set up a functional local development environment, which meant that the code -> compile -> test feedback loop was slow. Working on Ethereum felt like working on a large C++ project where you spend much of your time waiting for things to compile.

Polkadot was an abject nightmare to work with. The documentation was massively confusing, and what tutorials did exist failed to explain how one might interface with a smart contract outside of some silly web UI. This was surprising given that Polkadot has a $43 billion market cap and was regularly featured in "best smart contract" articles that we read at the beginning of this hackathon.

We had a ton of fun working on this project. Externally, it can often be very hard to tell truth from marketing fiction in the blockchain space. It was fun to dig into the technical details of it for a weekend.

### Future work

On our quest to find the worst-performing smart contract possible, we would like to implement a fuzzer that integrates with Clockchain to generate adversarial bytecode. We would also like to explore the use of oracles in blockchains for more accurate performance measurements. Finally, we would like to flesh out our front-end to be dynamically usable by a wide audience.
winning
## Inspiration
We have all been there, stuck on a task, with no one to turn to for help. We all love wikiHow, but there isn't always a convenient article there for you to follow. So we decided to do something about it! What if we could leverage the knowledge of the entire internet to get the nicely formatted and entertaining tutorials we need? That's why we created wikiNow. With the power of Cohere's natural language processing and Stable Diffusion, we can combine the intelligence of millions of people to get the tutorials we need.

## What it does
wikiNow is a tool that can generate entire wikiHow articles to answer any question! A user simply has to enter a query, and our tool will generate a step-by-step article with images that provides a detailed answer tailored to their exact needs. wikiNow enables users to find information more efficiently and to have a better understanding of the steps involved.

## How we built it
wikiNow was built using a combination of Cohere's natural language processing and Stable Diffusion. We trained our models on a large dataset of existing wikiHow articles and used this data to generate new articles and images that are specific to the user's query. The back-end was built using Flask and the front-end was created using Next.js.

## Challenges we ran into
One of the biggest challenges we faced was engineering the prompts that would generate the articles. We had to experiment with a lot of different methods before we found something that worked well with multi-layer prompts. Another challenge was creating a user interface that was both easy to use and looked good. We wanted to make sure that the user would be able to find the information they need without being overwhelmed by the amount of text on the screen. Properly dealing with Flask concurrency and long-running network requests was another large challenge. For an average wiki page creation, we require ~20 Cohere generate calls. In order to make sure the wiki page returns in a reasonable time, we spent a considerable amount of time developing asynchronous functions and multi-threading routines to speed up the process.

## Accomplishments that we're proud of
We're proud that we were able to create a tool that can generate high-quality articles. We're also proud of the user interface that we created, which we feel is both easy to use and visually appealing. The generated articles are both hilarious and informative, which was our main goal. We are also super proud of our optimization work. When running in a single thread synchronously, the articles can take up to *5 minutes* to generate. We have managed to bring that down to around **30 seconds**, which is a near 10x improvement!

## What we learned
We learned a lot about using natural language processing and how powerful it can be in real-world applications. We also learned a lot about full-stack web development. For two of us, this was our first time working on a full-stack web application, and we learned a lot about running back-end servers and writing custom APIs. We solved a lot of unique optimization and threading problems as well, which really taught us a lot.

## What's next for wikiNow
In the future, we would like to add more features to wikiNow, such as the ability to generate articles in other languages and the ability to generate articles for other types of content, such as recipes or instructions. We would also like to make the articles more interactive so that users can ask questions and get clarification on the steps involved.
It would also be handy to add the ability to cache previously generated articles, making it easier for the project to scale without re-generating existing articles.
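To make the fan-out described in the challenges section concrete, here is a rough sketch of the pattern used to parallelize the ~20 generate calls (assuming the Cohere Python SDK; the API key is a placeholder and `generate_step` is our illustrative wrapper):

```python
import concurrent.futures
import cohere  # assumes the Cohere Python SDK

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def generate_step(prompt):
    # One blocking generate call per article step
    return co.generate(prompt=prompt).generations[0].text

def generate_article(step_prompts):
    # Fan the calls out across threads instead of running them serially;
    # this is where the wall-clock win over synchronous generation comes from
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(generate_step, step_prompts))
```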
## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text-to-image, text generation, and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.

## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in PDF format. The goal of this project is to make story-telling interactive and fun for users.

## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React, which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication between the resource-intensive back-end tasks, we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.

To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E-2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication, which allows us to keep the HTTP connection open; once the work queue is done processing data, it sends a notification to the React client.

## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required extra configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.

## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images.
This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.

## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system, and the importance of thorough testing and research when working with new technologies and libraries.

We also learned about the value of message brokers and work queues for handling data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We learned about the use of NoSQL databases, such as MongoDB, to cache data and improve performance. Additionally, we learned about using socket.io for real-time communication, which can minimize the load on the server.

Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.

## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories and potentially reach a larger audience.

Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.

Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters, and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users.
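A minimal sketch of the work-queue wiring described above (the task body and helper names are hypothetical placeholders; Celery's worker side is Python even though the web layer is Go/Gin):

```python
from celery import Celery

# RabbitMQ as the broker, exactly as in the architecture above
app = Celery("dream", broker="amqp://guest@localhost//")

def call_gpt3(prompt):
    # Hypothetical wrapper around the OpenAI text-generation API
    return f"story for: {prompt}"

def call_dalle(prompt):
    # Hypothetical wrapper around the DALL-E 2 image API
    return "https://example.com/image.png"

@app.task
def generate_page(prompt):
    # Heavy OpenAI work happens on a worker, not in the request cycle
    text = call_gpt3(prompt)
    image_url = call_dalle(prompt)
    return {"text": text, "image": image_url}

# The web layer enqueues and returns immediately; socket.io notifies the
# client once the result lands:
#   generate_page.delay(user_prompt)
```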
## Inspiration
We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle with clear shorthand. Whether you're a second-language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.

## What it does
We use an LLM to generate sample phrases, sentences, or character strings that target letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!

## How we built it
We used a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and simple JavaScript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with logic-based sequence-alignment algorithms in Python.

## Challenges we ran into
We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept.

..also sleep 🥲

## Accomplishments that we're proud of
We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly though, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.

## What we learned
3 of us have never been to a hackathon before! 3 of us never used Flask before! All of us have never worked together before! From working with an entirely new team to utilizing specific frameworks, we learned A TON.... and also, just how much caffeine is too much (hint: NEVER).

## What's Next for Handwriting Teacher
Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online.) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress.
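As a rough illustration of the alignment-based scoring idea (a simplified sketch of our own; the real pipeline does more than a single edit-distance comparison):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def score(target, ocr_text):
    # 1.0 = the OCR output matches the target phrase perfectly
    dist = levenshtein(target, ocr_text)
    return max(0.0, 1.0 - dist / max(len(target), 1))

print(score("the quick brown fox", "the quiok brwn fox"))
```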
partial
![BulletinVR Logo](https://github.com/k3vnchen/bulletin-webvr/blob/master/project/static/assets/blong_large.png?raw=true)

## Inspiration
Social anxiety affects hundreds of thousands of people and can negatively impact social interaction and mental health. Around campuses and schools, we were inspired by bulletin boards with encouraging anonymous messages, and we felt that these anonymous message boards were an inspiring source of humanity. With Bulletin, we aim to bring this public yet anonymous way of spreading words of wisdom to as many people as possible. Previous studies have even shown that online interaction decreased social anxiety in people with high levels of anxiety or depression.

## What it does
Bulletin is a website for posting anonymous messages. Bulletin's various boards are virtual reality spaces for users to enter messages. Bulletin uses speech-to-text to create a sense of community within the platform, as everything you see has been spoken by other users. To ensure anonymity, Bulletin does not store any of its users' data, and only stores a number of recent messages. Bulletin uses language libraries to detect and filter negative words and profanity. To try Bulletin (<https://bulletinvr.online>), simply enter one of the bulletin boards and double tap or press the enter key to start recording your message.

![Screenshot of Bulletin in a VR HMD](https://github.com/k3vnchen/bulletin-webvr/blob/master/project/static/img/screenshot1.png?raw=true)

## What is WebVR?
WebVR, or Web-based virtual reality, allows users to experience a VR environment within a web browser. As a WebVR app, Bulletin can also be accessed on the Oculus Rift, Oculus Go, HTC Vive, Windows Mixed Reality, Samsung Gear VR, Google Cardboard, and your computer or mobile device. As the only requirement is having an internet connection, Bulletin is available to all and seeks to bring people together through the power of simple messages.

## How we built it
We use the A-Frame JavaScript framework to create WebVR experiences. Voice recognition is handled with the browser's SpeechRecognition API (part of the Web Speech API). The back-end service is written in Python. Our JS scripts use AJAX to make requests to the Flask-powered server, which queries the database and returns the messages that the WebVR front-end should display. When the user submits a message, we run it through the Python `fuzzywuzzy` library, which uses the Levenshtein metric, to make sure it is appropriate, and then save it to the database.

## Challenges we ran into
**Integrating A-Frame with our back-end was difficult**. A-Frame by itself makes it simple to create very basic WebVR scenes, but creating custom JavaScript components that communicate with the Flask back-end proved time-consuming. In addition, many of the community components we tried to integrate, such as an [input mapping component](https://github.com/fernandojsg/aframe-input-mapping-component), were outdated and had badly-documented code and installation instructions. Kevin and Hamilton had to resort to reading GitHub issues and pull requests to get some features of Bulletin to work properly.

## Accomplishments that we're proud of
We are extremely proud of our website and how our WebVR environment turned out. It's exceeded all expectations, and features such as multiple bulletin boards and recording by voice were never initially planned, but work consistently well.
Integrating the back-end with the VR front-end took time, but was extremely satisfying; when a user sends a message, other users will near-instantaneously see their bulletin update. We are also proud of using a client-side speech-to-text service, which improves security, reduces website bandwidth, and allows access even over poor internet connections. Overall, we're all proud of building an awesome website.

## What we learned
Hamilton learned about the A-Frame JavaScript library (and JavaScript itself), which he had no experience with previously. He developed the math involved with rendering text in the WebVR environment. Mykyta and Kevin learned how to use the browser's speech-to-text API and integrate the WebVR scenes with the AJAX server output. Brandon learned to use the Google App Engine to host website back-ends, and learned about general web deployment.

## What's next for Bulletin
We want to add more boards to Bulletin and expand the possible media to also allow images to be sent. We're looking into more sophisticated language libraries to try to better block out hate speech. Ultimately, we would like to create an adaptable framework to allow anyone to include a private Bulletin board in their own website.
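The appropriateness check described above reduces to something like this sketch (the blocklist entries and threshold are placeholders):

```python
from fuzzywuzzy import fuzz  # Levenshtein-based string matching

BLOCKLIST = ["badword", "anotherbadword"]  # placeholder entries
THRESHOLD = 85  # similarity (0-100) above which a word is rejected

def is_appropriate(message):
    # Fuzzy matching catches simple evasions like "b4dword" or "badw0rd"
    for word in message.lower().split():
        for banned in BLOCKLIST:
            if fuzz.ratio(word, banned) >= THRESHOLD:
                return False
    return True
```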
## Inspiration
No one likes waiting around too much, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience.

## What it does
We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, where in a conversational style they tell their symptoms and the related details and circumstances. They also have the option of just writing these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate', or 'critical' and is added to the virtual queue. This way, hospitals can take care of their patients more efficiently by having a fair ranking system (incl. time of arrival as well) that determines the queue, and patients have a higher satisfaction level as well, because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are shown a progress bar indicating whether a doctor has reviewed their case already, whether insurance was contacted, or any other status change. Patients are also provided with tips and educational content regarding their symptoms and pains, battling this way the abundant stream of misinformation and incorrectness that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative; let's try to change that!

## How we built it
We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistance and the Azure Text Analytics API for processing. The designs were built in Figma.

## Challenges we ran into
Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, and it was worth it to chew on it longer. We have had a little experience with voice assistance in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were also thinking of implementing multiple user input languages so that less fluent English speakers could use the app as well.

## Accomplishments that we're proud of
TreeHacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem, and we enjoyed creating something in the domain.

## What we learned
We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, and for some of us this was the first hackathon attended, so working together with new people with different experiences definitely proved to be exciting and valuable.
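Purely as an illustration of the triage labeling, here is a keyword-based sketch (the real system relies on the Azure Text Analytics API; these keyword lists are placeholders, not clinical guidance):

```python
# Placeholder severity keywords for illustration only
CRITICAL = {"chest pain", "unconscious", "severe bleeding"}
MODERATE = {"fever", "fracture", "vomiting"}

def triage(symptom_text):
    # Map a free-text symptom description to a queue category
    text = symptom_text.lower()
    if any(k in text for k in CRITICAL):
        return "critical"
    if any(k in text for k in MODERATE):
        return "moderate"
    return "mild"

print(triage("I have had a mild fever since yesterday"))  # -> "moderate"
```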
## Inspiration
Our focus is to connect people around the globe by building a platform that hosts conferences in virtual reality. Imagine the convenience of meeting anyone, from anywhere, at any time -- all from the comfort of your home or office. Large, popular, and influential events like Google I/O, Grace Hopper, etc. often sell out before everyone who wants a ticket can purchase one. Our infinitely scalable platform eliminates this problem. We disrupt the constraints of the physical world (space, time) and open up the opportunity for you to connect with your stakeholders and audience in an enhanced reality. We see virtual reality as an elegant solution not only to language barriers, but also to the inefficiencies, expenses, and discomforts of physically traveling to interact with others. The potential of our project is as scalable as our platform. Virtual reality is not only a solution, it is the future. What's most exciting is that our platform opens up a new marketplace for businesses around the world to connect, network, and make an impact.

## What it does
Full conferencing and live communication in virtual reality from anywhere in the world! In our prototype, we've created a lobby, a webinar room, and a conference room where participants can talk and engage in real time.

# Significant Problems Addressed:
- Discriminatory barriers in terms of social, geographic, and financial conditions
- High costs with regard to travel time and conference cost
- Limited participant capacity

## What we learned
This was a very ambitious project for our team -- especially considering that none of us had worked with VR before. Each and every one of us had to overcome steep learning curves in order to work with multiple different technologies with which we were not familiar.

Main things we learned:
- Working with Unity
- Multiplayer communication and networking
- 3D design for virtual reality

# Problems we ran into:
We ran into a lot of incompatibility problems, as our project involves several different pieces of technology, so we had to think outside the box in order to make our devices compatible.
- First, our Alienware laptops had an embedded graphics card that is incompatible with the current version of the Oculus runtime. With that, we had to hack our way into downgrading the runtime on our computer, edit some of the registry logs, and disable error reporting in order for the Oculus to be compatible.
- Finding an ultimate back-end platform for our multiplayer system and enabling voice so that everyone inside our platform can talk to each other was also a challenge.
- When we tried to set up the speech-to-text and text-to-speech APIs, our backend platform did not accept sound, so we had to take each 0.5-second frame of the audio clip -> convert to float -> convert to bytes -> send it to the rest of the players on the network -> convert back to float -> convert back to sound (sketched in Python at the end of this post), so there was a little delay when it came to receiving the sound.
- Setting up the Oculus and getting it to display properly along with the position tracker was extremely challenging.

## What's next for ConferenceVR
Web interface for corporations, schools, etc. to customize their conference area. Live speech translation to allow for language-independent global communication. We hope to add more features such as Facebook authorization, profile pictures parsed from Facebook, population/attendance for data visualization, and much more. There is no end to the number of new features that can be integrated with our platform.
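As promised above, here is the float/byte round trip from the audio relay chain, sketched in Python with numpy purely for illustration (our actual implementation lives in Unity/C#; sample rate and clip length are placeholders):

```python
import numpy as np

def encode(samples: np.ndarray) -> bytes:
    # float samples -> raw bytes suitable for sending over the network
    return samples.astype(np.float32).tobytes()

def decode(payload: bytes) -> np.ndarray:
    # raw bytes received from the network -> float samples again
    return np.frombuffer(payload, dtype=np.float32)

# Fake 0.5 s clip at 44.1 kHz; the round trip is lossless
clip = np.random.uniform(-1, 1, 22050).astype(np.float32)
assert np.array_equal(decode(encode(clip)), clip)
```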
partial
## Inspiration
During modern times, the idea underlying a facemask is simple: if more people wear them, fewer people will get sick. And while this holds true, it is an oversimplification: the number of lives saved depends not only on the quantity, but also on the quality of the masks which people wear (as evidenced by recent research by the CDC). However, due to an insufficient supply of N95 masks, healthcare workers are forced to wear cloth or surgical masks which both leak from the sides, increasing the risk of infection, and are arduous to breathe through during extended physical exertion.

## What it does
Maskus is the first mask bracket and fitter in one - custom-fitted and printed using accessible technology. It is designed to improve the baseline quality of facemasks around the world, with its first and most pressing use being for healthcare workers. The user starts by taking a picture of their face through their computer/smartphone camera. We then generate an accurate 3D representation of the user's face and design a tight-fitting, 3D-printable mask bracket specifically tailored to the user's face contours. Within seconds, we can render the custom mask onto the user's face in augmented reality in real time. The user can then either download their custom mask in a format ready for 3D printing, or set up software to print the mask automatically. We also have an Arduino Nano that alerts the user whether the mask is secured properly, letting them know when it needs to be readjusted.

## How we built it
After the user visits the Maskus website, our React frontend sends a POST request to a Python Flask backend. The server receives the image, decodes it, and feeds it into a state-of-the-art machine learning 3D face reconstruction model (3DDFA). The resulting 3D face model then goes through some preprocessing, which compresses the 3D data to improve performance. Another script then extracts the user's face contour/outline from the 3D model and builds a custom mask bracket with programmable CAD software. On the web app, the user gets to see both their own 3D face mesh and an AR rendering of the custom-fitted mask on their face (using React and three.js). Lastly, this data is saved to a standard 3D printing file format (.obj) and returned to the user so they can print it wherever they like. In terms of our hardware, the mask's alert system comprises an Arduino Nano with a piezo buzzer and two push buttons (left and right sides of the face) wired in series. In order to get the push buttons to engage when the mask is worn, we created custom 3D parts that create a larger area for the buttons to be pushed.

## Challenges we ran into
This project touched many disciplines and posed many difficulties. We were determined to provide the user with the ability to see how their mask would fit them in real time using AR. In order to do this, we needed a way to visualize 3D models in the web. This proved difficult due to many misleading resources and weak documentation. Simple things (like figuring out how to get a 3D model to stop rotating) took much longer than they should have, simply because the frameworks were obfuscated. AR was also very difficult to implement, particularly because it is a new technology and the existing frameworks for it are not yet mature. Our project is one of the first we've seen placing 3D models (not images) onto user faces.

## Accomplishments that we're proud of
From the machine learning side of the project, 3D face reconstruction is a very difficult problem.
Luckily, our team was able to successfully implement and use the 3DDFA state-of-the-art machine learning model for face reconstruction. Installing and configuring the necessary Python packages and virtual environments posed a challenge at the start, but we were able to quickly overcome this and get a working machine learning pipeline. Solving this problem early on in the hackathon gave our team more time to focus on other problems, such as web 3D model visualization and constructing the facemask from our 3D face model.

## What we learned
Amusingly, during this project we found that things which were supposed to be difficult turned out to be easy to implement and, conversely, the easy parts turned out to be hard. Things like front-end design and integrating web frameworks turned out to be some of the most challenging parts of the project, whereas things like machine learning were easier than expected. A takeaway is that the feasibility of quickly building a project should be based not only on the difficulty of the task, but also on the quality of the existing resources available to build it. Good frameworks make implementing difficult projects much easier.

## What's next for Maskus
Aside from refactoring the code and improving the webpage design, we see several things for the project going forward. Perhaps the biggest point is developing a reliable algorithm to extract the facemask outline from a 3D face model. The one the group currently has works most of the time, but serves as the bottleneck of the system in terms of facial recognition accuracy. The UI design can be improved as well. Lastly, three.js was found to be a pain, especially when trying to integrate it with React; it would be worth exploring simpler JavaScript frameworks. We would also love to add more functionality to the Arduino in the future, making it a 'smarter' mask. We hope to add sensors like an AQS (Air Quality Sensor), create alerts if the mask has been worn too long and needs to be replaced, and add status LEDs in order to visually tell that your mask is secure. In terms of future growth, Maskus can comfortably be deployed as a web app and used by healthcare workers around the world in order to decrease the risk of COVID transmission. It is a low-cost solution designed to work with existing masks and improve upon them. Opening up the software to open-source contribution is a potential way to grow, and we hope it would lead to very fast progress.
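The server-side flow described above reduces to a skeleton like this (every helper name here is a hypothetical stand-in for the real 3DDFA, contour-extraction, and CAD steps):

```python
from flask import Flask, request, send_file

app = Flask(__name__)

# Hypothetical stand-ins for the real pipeline stages:
def reconstruct_3d(image_bytes):   # would wrap the 3DDFA model
    raise NotImplementedError

def extract_contour(mesh):         # face-outline extraction from the mesh
    raise NotImplementedError

def build_bracket(contour):        # programmatic CAD -> path to an .obj file
    raise NotImplementedError

@app.route("/mask", methods=["POST"])
def make_mask():
    # photo in -> reconstructed mesh -> contour -> printable bracket out
    mesh = reconstruct_3d(request.files["face"].read())
    obj_path = build_bracket(extract_contour(mesh))
    return send_file(obj_path, as_attachment=True)
```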
## Inspiration
Since its emergence in December of 2019, the COVID-19 pandemic continues to change the lives of the global population. In Canada, the daily case rate has increased eightfold since last October. Financial security and physical health have become the top priorities of people around the world. We aim to protect both and give solutions to the community during the challenging era of COVID-19.

## What it does
The Protect-21 application sends a friendly reminder notification to the user to wear their mask every time they leave/depart from key locations (such as their home and place of business). Our application will also encourage the proper wearing of their mask and verify it to minimize the risk of exposure. This app promotes the safe practice of wearing a mask during this pandemic and helps the user avoid preventable costs, such as buying an overpriced, single-use mask, or paying a fine for violating a mask mandate, such as those seen in British Columbia aboard transit vehicles. To further encourage hygiene, a key factor in preventing the contraction of the coronavirus, the user receives a friendly reminder notification once they return to a key location to wash their hands for 20 seconds.

## How we built it
Technologies used:
APIs used: Google Login, Firebase, Maps, Geolocation, Trusted Web Activity, Google Assistant, Teachable Machine
Tools: React, HTML, CSS, JavaScript, Ionic, TensorFlow.js
Libraries: p5 and ml5 (React)

## Challenges we ran into
Serving static HTML on React, and refining the UI to maintain the accessibility of the app, such as ensuring that the appearance is uniform across all platforms.

## Accomplishments that we're proud of
The usability, accessibility (web, Android, iOS), and cost-effectiveness (no expensive technologies or proprietary hardware used) of the app.

## What we learned
JavaScript, how to convert a React app into a Progressive Web App, and how to implement Google Assistant and Alexa.

## What's next for Protect-21
Implementing functionality for saving multiple key locations.
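A minimal sketch of the departure check behind the reminder notification (the radius and coordinates are placeholders; distance via the haversine formula):

```python
from math import asin, cos, radians, sin, sqrt

RADIUS_M = 100  # placeholder: how far from a key location counts as "leaving"

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in metres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def left_key_location(current, home):
    # Fire the "wear your mask" reminder once this flips to True
    return haversine_m(*current, *home) > RADIUS_M

print(left_key_location((49.2830, -123.1210), (49.2827, -123.1207)))
```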
## Inspiration
Grip strength has been shown to be a powerful biomarker for numerous physiological processes. Two particularly compelling examples are Central Nervous System (CNS) fatigue and overall propensity for Cardiovascular Disease (CVD). The core idea is not about building a hand grip strengthening tool, as this need is already largely satisfied in the market by traditional hand grip devices. Rather, it is about building a product that leverages the insights behind one's hand grip to help users make more informed decisions about their physical activities and overall well-being.

## What it does
Gripp is a physical device that users can squeeze to measure their hand grip strength in a low-cost, easy-to-use manner. The resulting measurements can be benchmarked against previous values taken by oneself, as well as against comparable peers. These are used to provide intelligent recommendations on optimal fitness/training protocols by giving deeper, quantifiable insights into recovery.

## How we built it
Gripp was built using a mixture of hardware and software. On the hardware front, the project began with a Computer-Aided Design (CAD) model of the device. With the requirement to build around the force sensors and accompanying electronics, the resulting model was customized exclusively for this product and subsequently 3D-printed. Other considerations included the ergonomics of holding the device and adaptability to the hand size of the user. Exerting force on the Wheatstone bridge sensor produces minute changes in resistance, which the bridge exposes as a small voltage difference. These changes are amplified by the HX711 amplifier and converted by an ESP32 into a force measurement. From there, the data flows into a MySQL database served via Apache for the corresponding user, before finally going to the front-end interface dashboard.

## Challenges we ran into
There were several challenges that we ran into. On the hardware side, getting the hardware to consistently output a force value was difficult. Further, listening on the COM port, interpreting the serial data flowing in from the ESP32, and getting it to interact with Python (where it needed to be to flow through the Flask endpoint to the front end) was challenging. On the software side, our team was challenged by the complexities of the operations required, most notably the front-end components, with minimal experience in React across the board.

## Accomplishments that we're proud of
Connecting the hardware to the back-end database to the front-end display, and facilitating communication both ways, is what we are most proud of, as it required navigating several complex issues to reach a sound connection.

## What we learned
The value of having another pair of eyes on code rather than trying to individually solve everything. While the latter is often possible, it is a far less efficient methodology (especially when around others).

## What's next for Gripp
Next for Gripp on the hardware side is continuing to test other prototypes of the hardware design, as well as materials (e.g., a silicone mould as opposed to plastic), and facilitating the hardware/software connection via Bluetooth. From a user-interface perspective, it would be optimal to move from a web-based application to a mobile one. On the front-end side, continuing to build out other pages will be critical (trends, community), as well as additional features (e.g., readiness score).
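The serial-ingest side described above looks roughly like this (a sketch assuming pyserial; the port, endpoint, and units are placeholders, and the firmware is assumed to print one calibrated reading per line):

```python
import requests
import serial  # pyserial

# Placeholder port; on Windows this would be e.g. "COM3"
port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue  # timeout with no data
    try:
        force = float(line)  # units depend on the HX711 calibration
    except ValueError:
        continue  # skip boot noise or partial lines
    # Forward the reading to the Flask backend (placeholder endpoint)
    requests.post("http://localhost:5000/readings",
                  json={"user": "demo", "force": force})
```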
partial
## Inspiration
Whether it's at a college party, a public outing, or during a simple stroll downtown, individuals often find themselves in uncomfortable, unwanted situations due to unwanted attention. In these moments, a discreet way to seek help is crucial. In the bartending world, this concept is known as an "**angel shot**"—a code word or drink order that discreetly signals to staff that a customer needs assistance. Phone calls are a powerful tool in these uncomfortable situations, as they not only deter unwanted attention by creating an external conversation but also offer a lifeline to contact emergency services or trusted individuals. Thus, they serve as **angel shots** outside of a bartending context.

**But what if no one is available to answer the call? How can individuals ensure they'll have someone to talk to when they need help the most?**

## What it does
**AngelShot** simulates a realistic phone call with a variety of user-customizable AI-based assistants. Users pre-define assistants that they can request a call from whenever they're placed in an unwanted situation. These assistants can take on roles; for instance, users can create an assistant meant to be "an uncle that they haven't seen in a while". Additionally, assistants can be given a conversation starter. This could range from sports to gardening: anything that the user will feel comfortable talking about in an uncomfortable situation.

When an individual is in an uncomfortable situation, they can request a call from any of their created assistants to start a normal conversation. **However, with each response, the assistant provides the user two discreet, context-based code words.** These code words trigger pre-configured safety actions of two levels. For example, in a gardening-themed conversation, the assistant may provide the words "monstera" and "weeding".

* If the user says the first keyword, "monstera", in their response, the assistant will know to share the user's live conversation with emergency contacts.
* If the user says the second keyword, "weeding", in their response, the assistant will know to forward the user to emergency services instantly.

## How we built it
We deployed a Next.js application on Vercel, written in TypeScript and styled using Tailwind + Shadcn and a variety of frontend libraries. For authentication, we used Clerk to allow users to quickly sign up using their phone numbers.

As a means of handling phone communication, we utilized **VAPI**'s API to efficiently create customizable AI assistants. VAPI streamlined the integration of voice communication in our application, allowing us to simulate realistic phone calls with AI-based assistants.

For speech-to-text functionality, we used **Deepgram's Nova 2 Phonecall** model, specifically tailored for low-bitrate phone calls. This ensures accurate transcription even if users are calling from remote areas or in noisy environments, guaranteeing that conversations and safety triggers are captured correctly.

Then, to simulate natural and context-aware dialogue, we used **OpenAI's GPT-4** model. Using VAPI's API, we passed a system prompt to ensure the AI assistant can generate relevant conversation, produce two context-specific code words, and react appropriately via function calls if either code word is spoken.

Lastly, for text-to-speech conversion, we chose **ElevenLabs'** models to create high-quality, natural-sounding voices for the AI assistants, enhancing the realism and comfort of the simulated calls.
## Challenges we ran into
Our entire team didn't have WiFi for essentially half the event, so we spent the first half ideating; the last half was when our application truly came to life. Another issue we ran into was correctly prompt-engineering the virtual assistant. Once we found the right prompts, it was smooth sailing.

## Accomplishments that we're proud of
We're proud of developing a discreet safety tool that could potentially save lives. Integrating customizable AI assistants and creating a reliable emergency response system were key milestones that we were able to accomplish.

## What's next for AngelShot
We plan to enhance *AngelShot* with more customization options and additional safety features such as location sharing and real-time stress-level analysis. We also aim to improve accessibility, perhaps turning the application into a mobile app using technologies like React Native.
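A minimal sketch of the two-level code-word trigger (the keywords and actions here are illustrative placeholders; in the real system the actions are dispatched as VAPI function calls):

```python
def check_transcript(user_text, keywords, actions):
    # Scan each transcribed user turn for the code words the assistant
    # handed out in its previous reply; level 2 outranks level 1.
    text = user_text.lower()
    if keywords["level2"] in text:
        actions["forward_to_911"]()        # hypothetical emergency handoff
    elif keywords["level1"] in text:
        actions["share_with_contacts"]()   # hypothetical live-share action

keywords = {"level1": "monstera", "level2": "weeding"}
actions = {
    "share_with_contacts": lambda: print("sharing live call with contacts"),
    "forward_to_911": lambda: print("forwarding to emergency services"),
}
check_transcript("I finally repotted my monstera last week", keywords, actions)
```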
## Inspiration
During the day, universities around the world feature the brightest minds, the most challenging ideas, and the greatest opportunities to grow as both a person and an intellectual. At night, however, such campuses can feel nearly foreign or frightening as the streets lack the normal bustle of students and the light grows dim. Universities attempt to alleviate students' anxieties and safety worries by offering a variety of services; at Cal, we have our system of Night Safety Shuttles and BearWalk staff who accompany students. Yet these services tend to have extraordinary wait times, sometimes exceeding two hours. Enter GetHome.

## What it does
GetHome connects verified students with one another so that no user has to walk home alone. By utilizing the pathing and geolocation of Google Maps, as well as the information gathering and communication offered by Cisco Meraki and Spark respectively, GetHome quickly pairs two users headed in a similar direction and provides a path such that they both get to their desired location with a minimal amount of safety risk. In addition, GetHome uses Cisco applications for data analysis and tracking to virtually accompany pairs as they make their way home: access points can ensure that users are on the correct path, and a chatbot can double-check that users have successfully gotten home.

## How we built it
For the web app, we used HTML5 and CSS3 to create a clean and precise landing page, with one redirect included for when a user lines up in the queue. The basis of our working code is JavaScript, which we used to interface with the various APIs allowing for accurate tracking and pathing of paired users. Using Google Graphs and Maps, we generated formulas to calculate and accurately map distances between all users as well as the distances between their respective destinations, while simultaneously displaying such information in easy-to-read graphical elements. A combined integration of Cisco Spark and Cisco Meraki, done via creation of Spark bots and information gathering with AWS Lambda, generates the heatmap for users to find their partner as well as opening an avenue of communication between the two. Hidden from the user, we also rely on Meraki's Real Time Location Services (RTLS) to track whether individuals are following the anticipated path home; deviations are viewed as hazardous and can be handled by our companion bot, which checks in with users to see if they're okay.

## Challenges we ran into
Utilizing the REST API efficiently was difficult, seeing as none of our team members had previous experience working with HTTP POST requests. In addition, working with the servers provided for accessing real-time data of our immediate location proved difficult, as connection errors and faulty permissions prevented us from dedicating our full effort towards completely understanding the use cases of the functions and services provided alongside the data.

## Accomplishments that we're proud of
As a team, we are proud of creating our first ever Spark bot and formulating conclusions based on real-time data via Meraki's Dashboard. Meshing together multiple APIs can become messy at times, so we are extremely proud of our clean implementations that serve to precisely and efficiently merge the varied applications we delved into.
## What we learned
Throughout this hacking process, we learned a great deal about how to work with Node-RED for Cisco Meraki and Spark queries, as well as how to integrate such applications with services like AWS Lambda to properly retrieve points of interest, such as MAC addresses for tracking. The less experienced of our team also learned how to utilize APIs in a very general sense within HTML and JavaScript; as a whole, we built upon past experiences to further increase our efficiency and teamwork in regard to formulating ideas and bringing them to manifestation through research and dedicated work.

## What's next for GetHome
Meraki's ability to precisely locate devices through triangulation by access points, as well as the flexibility of heatmaps, allows for the possibility of creating paths that avoid high-risk areas; this would be a further step in preventative safety, reaching beyond what our application currently utilizes Meraki's location services for. Combined, having Meraki anticipate the next access point the user should be pinged at, as well as formulating the path so it is not only the shortest but also the safest, would be extremely beneficial for GetHome and its users.
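A toy version of the pairing step described above (our illustration; coordinates are flattened to (x, y) for simplicity, whereas the real app scores distances via Google Maps):

```python
from itertools import combinations
from math import hypot

def pair_users(queue, max_gap=0.5):
    # Greedily match queued users whose current positions AND destinations
    # are both close, cheapest combined distance first.
    def cost(a, b):
        return hypot(*(x - y for x, y in zip(a["pos"], b["pos"]))) + \
               hypot(*(x - y for x, y in zip(a["dest"], b["dest"])))
    pairs, used = [], set()
    for a, b in sorted(combinations(queue, 2), key=lambda ab: cost(*ab)):
        if a["id"] not in used and b["id"] not in used and cost(a, b) <= max_gap:
            pairs.append((a["id"], b["id"]))
            used |= {a["id"], b["id"]}
    return pairs

queue = [
    {"id": "amy",  "pos": (0.0, 0.0), "dest": (1.0, 1.0)},
    {"id": "ben",  "pos": (0.1, 0.0), "dest": (1.1, 1.0)},
    {"id": "cara", "pos": (5.0, 5.0), "dest": (9.0, 9.0)},
]
print(pair_users(queue))  # [('amy', 'ben')]; cara waits for a closer match
```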
## DEMO WITHOUT PRESENTATION
## **This app would typically be running in a public space**
[demo without presentation (judges please watch the demo with the presentation)](https://youtu.be/qNmGr1GJNrE)

## Inspiration
We spent **hours** thinking about what to create for our hackathon submission. Every idea that we had already existed. These first hours went by quickly, and our hopes of finding an idea that we loved were dwindling. The idea that eventually became **CovidEye** started as an app that would run in the background of your phone and track the type and number of coughs throughout the day; however, we discovered a successful app that already does this. About an hour after this idea was pitched, **@Green-Robot-Dev-Studios (Nick)** pitched a variation that would run on a security camera or in the web and track the coughs of people in stores (anonymously). A light bulb immediately lit over all of our heads, as this would help prevent COVID-19 outbreaks, collect data, and is accessible to everyone (it can run on your laptop as opposed to a security camera).

## What it does
**CovidEye** tracks a live tally of coughs and face touches and graphs it for you. **CovidEye** allows you to pass in any video feed to monitor for COVID-19 symptoms within the area covered by the camera. The app monitors the feed for anyone that coughs or touches their face. ***For demoing purposes, we are using a webcam, but this could easily be replaced with a security camera. Our logic can even handle multiple events by different people simultaneously.***

## How we built it
We used a model called PoseNet built by TensorFlow. The data outputted by this A.I. is passed through some clever detection logic (sketched at the end of this post). This data can also be passed on to the government as an indicator of where symptomatic people are going. We used Firebase as the backend to persist the tally count, and created a simple A.P.I. to connect Firebase and our ReactJS frontend.

## Challenges we ran into
* We spent about 3 hours connecting the A.I. count to Firebase and patching it into the React state.
* Tweaking the pose detection logic took a lot of trial and error.
* Deploying a built React app (we had never done that before and had a lot of difficulty, resulting in the need to change code within our application).
* Optimizing the A.I. garbage collection (Chrome would freeze).
* Optimizing the graph (too much for Chrome to handle with the local A.I.).

## Accomplishments that we're proud of
* **All 3 of us**: We are very proud that we thought of and built something that could really make a difference in this time of COVID-19, directly and with statistics. We are also proud that this app is accessible to everyone, as many small businesses are not able to afford security cameras.
* **@Alex-Walsh (Alex)**: I've never touched any form of A.I./M.L. before, so this was a massive learning experience for me. I'm also proud to have competed in my first hackathon.
* **@Green-Robot-Dev-Studios (Nick)**: I'm very proud that we were able to create an A.I. as accurate as it is in the time frame.
* **@Khalid Filali (Khalid)**: I'm proud to have pushed my ReactJS skills to the next level and competed in my first hackathon.

## What we learned
* PoseNet
* ChartJS
* A.I. basics
* ReactJS Hooks

## What's next for CovidEye
* **Refining**: with a more enhanced dataset, our accuracy would greatly increase.
* Solace PubSub: we didn't have enough time, but we wanted to create live notifications that would go to multiple people when there is excessive coughing.
* Individual Tally's instead of 1 tally for each person (we didn't have enough time) * Accounts (we didn't have enough time)
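As an illustration of the kind of detection logic mentioned above (not the project's actual tuned code), here is a minimal Python sketch of a keypoint heuristic: flag a face touch when PoseNet places either wrist near the nose. The keypoint names follow PoseNet's conventions; the pixel threshold is an assumption.

```python
# A minimal sketch of a keypoint heuristic for face-touch detection.
# PoseNet returns named keypoints with (x, y) positions; the threshold
# and keypoint choices here are illustrative only.
import math

def distance(a, b):
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def is_face_touch(keypoints, threshold=40):
    """Flag a frame when either wrist comes close to the nose."""
    nose = keypoints["nose"]
    return any(distance(keypoints[w], nose) < threshold
               for w in ("leftWrist", "rightWrist"))

# Example frame (pixel coordinates):
frame = {"nose": {"x": 320, "y": 120},
         "leftWrist": {"x": 300, "y": 140},
         "rightWrist": {"x": 500, "y": 400}}
print(is_face_touch(frame))  # True: the left wrist is near the nose
```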
partial
We created this app in light of the recent wildfires that have raged across the West Coast. As California natives ourselves, we have witnessed the devastating effects of these fires first-hand. Not only do these wildfires pose a danger to those living around the evacuation area, but even for those residing tens to hundreds of miles away, the after-effects linger. For many with sensitive respiratory systems, the wildfire smoke has created difficulty breathing and dizziness as well. One of the reasons we like technology is its ability to impact our lives in novel and meaningful ways. Our app greatly simplifies the process of finding a location with healthier air quality amidst the wildfires and ensures that those who need essential exercise are able to get it. This is extremely helpful for people highly sensitive to airborne pollutants, such as some of our family members who suffer from asthma, and for pet owners looking for healthy outdoor spaces. We wanted to develop a web app that could help those who are particularly sensitive to smoke and ash find a temporary respite from the harmful air quality in their area. With our app air.ly, users can navigate across North America to identify areas where the air quality is substantially better. Each dot color indicates a different air quality level, ranging from healthy to hazardous. By clicking on a dot, users are shown a list of outdoor recreation areas, parks, and landmarks they can visit to take a breather at. We utilized a few different APIs to build our web app. The first step was to implement the Google Maps API using JavaScript. Next, we scraped location and air quality index data for each city within North America. After we were able to source real-time data from the World Air Quality Index API, we used the location information to connect to our Google Maps API implementation. Our code took in longitude and latitude data to place a dot on the location of each city within our map. Each dot was color-coded based on its city's AQI value (a sketch of this step follows below). At the same time, the longitude and latitude data were passed into our Yelp Fusion API implementation to find parks, hiking areas, and outdoor recreation local to the city. We processed the Yelp city and location data using Python and Flask integrations. The city-specific AQI value, as well as our local Yelp recommendations, were rendered in HTML and CSS to display an info box upon clicking a dot, helping a user act on the real-time data. As a final touch, we also included a legend matching AQI values with their corresponding dot colors for ease of use. We really embraced the hacker resilience mindset to create a user-focused product that values itself on providing safe and healthy exploration during the current wildfire season. Thank you :)
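For illustration, a minimal Python sketch of the AQI-to-dot-color step described above. The WAQI feed endpoint follows the public API's documented shape, but `YOUR_TOKEN` is a placeholder and the color breakpoints are the standard EPA-style categories rather than air.ly's exact values.

```python
# Sketch: fetch a city's AQI from the World Air Quality Index API and
# bucket it into a map dot color. Endpoint shape and breakpoints are
# assumptions for illustration.
import requests

def fetch_aqi(city, token="YOUR_TOKEN"):
    resp = requests.get(f"https://api.waqi.info/feed/{city}/",
                        params={"token": token}, timeout=10)
    return resp.json()["data"]["aqi"]

def dot_color(aqi):
    breakpoints = [(50, "green"), (100, "yellow"), (150, "orange"),
                   (200, "red"), (300, "purple")]
    for limit, color in breakpoints:
        if aqi <= limit:
            return color
    return "maroon"  # hazardous

print(dot_color(fetch_aqi("vancouver")))
```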
## Inspiration 2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them. But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today. ## What it does A camera records you as you go about your day-to-day life, storing "comic book strip" panels containing images and context of what you're doing. When you want to remember something, you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, time, and your action when you lost an item. This can help with knowing where you placed your keys, whether you locked your door or garage, and other day-to-day tasks. ## How we built it The React-based UI records using your webcam, screenshotting every second and stopping at the 9-second mark before creating a 3x3 comic image (sketched below). This was done because static images alone would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, we send it to OpenAI's turbo vision model, which gives contextualized info about the image. This info is then sent to our Express.JS service hosted on Vercel, which in turn parses the data and sends it to Cloud Firestore (a Firebase database). To access this data again, we use the browser's built-in speech recognition along with the SpeechSynthesis API to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which classifies it as either an action search or an object search. It then searches through the database and speaks out loud, giving the information in a naturalized response. ## Challenges we ran into We originally planned on using a VR headset, webcam, NEST camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to work around this, we decided to make use of macOS's Continuity feature, using our iPhone camera connected to our MacBook as our primary input. ## Accomplishments that we're proud of As a two-person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.JS and Firebase, so we're proud of how fast we were able to learn! ## What we learned We learned about OpenAI's turbo vision API capabilities, how to work together as a team, and how to sleep effectively on a couch and with very little sleep. ## What's next for ReCall: Memories done for you! We originally had a vision for people with amnesia and memory loss, where there would be a catalogue of the people they've met in the past to help them as they recover. We didn't have much context on these health problems, however, and had limited scope, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
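A minimal sketch of the 3x3 compositing step described above, using Pillow for illustration (the real app does this in the browser; the file names here are illustrative).

```python
# Tile nine per-second screenshots into one 3x3 "comic strip" image
# before sending it to the vision model.
from PIL import Image

def make_comic(paths, tile=(320, 240)):
    """Tile nine screenshots into a 3x3 grid."""
    comic = Image.new("RGB", (tile[0] * 3, tile[1] * 3))
    for i, path in enumerate(paths):
        shot = Image.open(path).resize(tile)
        comic.paste(shot, ((i % 3) * tile[0], (i // 3) * tile[1]))
    return comic

frames = [f"shot_{i}.png" for i in range(9)]  # hypothetical file names
make_comic(frames).save("comic.png")
```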
## Inspiration Due to the recent unanticipated spread of wildfires across Canada, the air quality has worsened, causing breathing difficulties in the most affected areas, such as Montréal, Ottawa, Toronto, and Yellowknife. Thus, we were inspired to create an application that lets users check the air quality in the various regions of their city and make informed decisions about precautions against poor air quality. ## What it does Airate is an application that provides users with the government-reported air quality rating and allows them to submit their own rating of the air in their region each day, building up a map of user-inputted ratings based on the Air Quality Health Index (a number from 1 to 10+, ranging from best to worst air quality). The app also includes a social aspect that allows users to connect with others in their region, displaying a leaderboard of the most active users and the points they have collected from logging their daily air quality rating. ## How we built it We split into two pairs. Two of us worked on the front end, creating the user interface in Figma before converting it to HTML and CSS. The other two worked on the back end, including creating databases using PHP and implementing APIs such as Google Cloud. Finally, we connected the front and back ends to create our final, usable application. ## Challenges we ran into One challenge on the front end was accurately converting the complex Figma models into HTML and CSS as beginner HTML and CSS users, especially given that some of our graphics were originally reformatted in Figma and did not all carry over with the same modifications. On the back end, one challenge was deciding between the various possible APIs and languages, given the different price points and features of each available framework. ## Accomplishments that we're proud of One accomplishment that we are really proud of is successfully wiring the back end to the front end, given that each was already relatively complex on its own. Additionally, the UI/UX design is an aspect of our project that we put a lot of effort into, and we believe it paid off, given the sleek and modern design of the application. ## What we learned For this project, all of the members decided to step out of their comfort zones: those more well-versed in the front end worked on the back end instead, and vice versa, to develop our well-roundedness and full-stack potential. Thus, each of us further developed the skill set of the role we took on, such as working with databases and APIs on the back end, or carrying a front-end design from idea visualization in Figma to the actual HTML and CSS. ## What's next for Airate Next for Airate, we plan to implement AI image recognition to analyze pictures of the air and determine the air quality rating, further improving the accuracy of the ratings provided by users.
To increase the information we provide users, we will also incorporate open-source data from weather APIs and Environment Canada air quality statistics. Overall, from AI image recognition to data analytics, we plan to bring many more features to the application to help users increase their health awareness and improve their air quality precautions.
winning
## Inspiration Coming into this hackathon, I wanted to create a project that I found interesting and one that I could see being used. Many of my past projects were interesting, but there were factors that would always keep them from being widely used; often this is a barrier to entry. They required extensive setup for a fairly minimal reward. For Hack Western, one of my goals was to create something with a low barrier to entry. However, a low barrier to entry means nothing if the usefulness is not up to par. I wanted to create a project that many people could use. Based on these two concepts, ease of use and wide-ranging use, I decided to create a chat bot that automates answering science questions. ## What it does What my project does is very simple. On the messaging platform Discord, you message my bot a question. It will attempt to search for a relevant answer; if one is found, it will give you a bit of context and the answer. ## How I built it I built this project in three main sections. The first section is my sleuther. This searches the web for various science-based questions as well as their answers. When a question is found, I use IBM's natural language processing API to determine tags that represent the topic of the question. All of this data is then stored in an Algolia database where it can be quickly accessed later. The second section of my project is the server. To implement this server I used StdLib to easily create a web API. When this API is accessed, it queries the Algolia database to retrieve any relevant questions and returns the best entry. The third and final part of my project is the front-end Discord bot. When you send the bot a message, it generates tags to determine the general topic of the question and uses them to query the Algolia index, calling the StdLib endpoint that was set up as the server. Overall, these three sections combine to create my final project. ## Challenges I ran into The first challenge that I ran into was unfamiliar technology. I had never used StdLib before, and getting it to work was a struggle at times. Thankfully the mentors from StdLib were very helpful and allowed me to get my service up and running without too much stress. The main challenge for this project, though, was trying to match questions. As a user's questions and my database's questions will not be worded in exactly the same way, a simple string match would not suffice. In order to match the questions, the general meaning of each question needs to be found, and then those meanings can be roughly matched. I tried various methods before settling on a sort of tag-based system (sketched below). ## Accomplishments that I'm proud of To be honest, this project was not my initial hackathon project. When I first started, the idea was very different: the same base theme of low barrier to entry and wide usability, but a quite different subject. Unfortunately, partway through that project there was an insurmountable issue that did not allow it to progress. This forced me to pivot and find a new idea. The pivot came many hours into the hackathon, which severely reduced the time for my current hack. Because of this, the accomplishment I am most proud of is the fact that I am able to submit this hack at all. ## What I learned Being forced to pivot on my idea partway through the hackathon created a fair amount of stress.
This really taught me the value of planning, most particularly determining the scope of the project. If I had planned ahead and written out everything that my first hack would have entailed, many of the issues encountered in it could have been avoided. ## What's next for Science-Bot I want to continue to improve my question-matching algorithm so that more questions can be matched. But beyond improving current functionality, I would like to add new scope to the questions my bot handles, either broadening the range of topics or increasing the depth of the questions. An interesting new direction would be questions about a very specific scientific area. I would likely need to be much more selective in the questions I match, and this is something I think would pose a difficult and interesting challenge.
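For illustration, a minimal sketch of the tag-based matching idea described above: score stored questions by tag overlap with the query's tags and return the best match above a cutoff. The sample data and the 0.3 cutoff are illustrative, not the bot's actual values.

```python
# Rank stored questions by Jaccard overlap between their tags and the
# query's tags; return the best match above a minimum similarity.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_match(query_tags, entries, cutoff=0.3):
    scored = [(jaccard(query_tags, e["tags"]), e) for e in entries]
    score, entry = max(scored, key=lambda pair: pair[0])
    return entry if score >= cutoff else None

db = [{"question": "Why is the sky blue?",
       "tags": ["sky", "light", "scattering"]},
      {"question": "How do magnets work?",
       "tags": ["magnetism", "physics", "fields"]}]
print(best_match(["light", "sky", "color"], db))
```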
## Inspiration Memes have become a cultural phenomenon and a huge source of recreation for many young adults, including ourselves. For this hackathon, we decided to combine the sociability of the popular site Twitter with a way of visualizing meme activity across different cities. We hope that through this application we can create a multicultural collection of memes and expose memes trending in popular cities to a widespread community of memers. ## What it does NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich map with Pepe the Frog markers on cities that have dank memes. Pepe markers are sized by their popularity score, which is composed of retweets, likes, and replies (see the sketch below). Clicking on a Pepe marker brings up an accordion displaying the top 5 memes in that city, pictures of each meme, and information about each one. We also have a chatbot that can reply to simple queries about memes like "memes in Vancouver." ## How we built it We wanted to base our tech stack on the tools that the sponsors provided. This started from the bottom with CockroachDB as the database that stores all the meme data our Twitter web crawler scrapes. Our web crawler was written in Python, which Google gave an advanced-level talk about. Our backend server was in Node.js, using the wrapper CockroachDB provides, hosted on Azure. Calling the backend APIs was a vanilla JavaScript application which uses Mapbox for the maps API. Alongside the data visualization on the map, we also have a chatbot application using Microsoft's Bot Framework. ## Challenges we ran into We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, finding how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing Twitter as our resource and tried to come up with the hypest hashtags for the project. A big problem we ran into was that our database completely crashed an hour before the project was due; we had to redeploy our Azure VM and database from scratch. ## Accomplishments that we're proud of We are proud that we used as many of the sponsor tools as possible instead of the tools we were already comfortable with. We really enjoyed the learning experience, and that is the biggest accomplishment. Bringing all the pieces together into a cohesive working application was another: it required lots of technical skill, communication, and teamwork, and we are proud of what we came up with. ## What we learned We learned a lot about the different tools and APIs available from the sponsors, and got first-hand mentoring while working with them. It's been a great technical learning experience. Aside from technical learning, we also practiced communication and timeboxing. The largest part of our success was that we all worked on parallel tasks that did not block one another, then came together for integration. ## What's next for NWMemes2017Web We really want to improve interactivity for our users. For example, we could add chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics.
It would also be great to grab memes from different websites to make sure we cover as much of the online meme ecosystem as possible.
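For illustration, one way the popularity-to-marker-size mapping described above could look. The weights and square-root scaling are assumptions, not the project's exact formula.

```python
# Sketch: combine engagement counts into a popularity score, then map
# that score to a marker radius. Square root keeps viral memes from
# drowning out the rest of the map.
import math

def popularity(retweets, likes, replies):
    return 2 * retweets + likes + 3 * replies  # illustrative weights

def marker_radius(score, base=8, scale=1.5):
    return base + scale * math.sqrt(score)

print(marker_radius(popularity(retweets=120, likes=800, replies=45)))
```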
## Inspiration When a new semester begins, we often ask our friends for their new schedules and manually match course names, course codes, and sections to see which courses we have together (which is time-consuming, mentally exhausting, and laborious). Sometimes we find ourselves in need of help with our homework or projects while our friends are unavailable, or we simply want to meet new friends in our classes. ## What it does By exporting your course list on SSC via iCal and logging in with your Facebook account, we match you to your classmates so you don't have to do the manual work, and we help keep track of your classmates in your profile. ## How we built it We built this using React and AWS. .tech domain: coursecompanion.tech .com domain: coursecompanion.com ## Challenges we ran into -Converting the .ics (iCal) file type to a usable format, and matching the course codes with Facebook friends (sketched below) -Facebook authentication -Configuring UI frameworks ## Accomplishments that we're proud of -Utilized modern material design for an efficient user experience -Built an app that we will use almost daily -Configured Facebook authentication ## What we learned -We feel more confident about innovating a useful tool that solves an everyday problem -Most of our team had little experience with React and JavaScript, so it was a significant learning curve to learn the ropes in less than 24 hours ## What's next for Course Companion -We hope to release this app to students at UBC and expand to other universities, implementing new features according to student feedback -Integrate CWL login for seamless importing of iCal files
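A minimal sketch of the .ics parsing step mentioned in the challenges above. SSC exports put the course in each event's SUMMARY line; the regex for a code like "CPSC 110 101" is an assumption about that format.

```python
# Pull course codes out of an exported .ics file by scanning SUMMARY
# lines. Matching two users is then just a set intersection of their
# extracted course sets.
import re

def extract_courses(ics_text):
    courses = set()
    for line in ics_text.splitlines():
        if line.startswith("SUMMARY"):
            match = re.search(r"([A-Z]{2,4})\s*(\d{3})\s*(\w*)", line)
            if match:
                courses.add(" ".join(p for p in match.groups() if p))
    return courses

sample = "BEGIN:VEVENT\nSUMMARY:CPSC 110 101\nEND:VEVENT"
print(extract_courses(sample))  # {'CPSC 110 101'}
```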
partial
## Inspiration As we accelerate further into the digital age, the amount of information we are confronted with increases, and it becomes harder to sort through the noise. As we advance as a civilization, it's crucial that we get the information issue right: we need to know how to talk to each other honestly, and we need a system where valuable information rises to the top. *AntiSpin aims to fully crowdsource the way individuals receive news and improve the signal-to-noise ratio of internet information flow.* Having participated in two hackathons between the time we registered for nwHacks and today, we spent the time brainstorming and refining our idea. During LHD, we wanted to prove to ourselves that we were capable of working with HTML, CSS, and JavaScript to design a rough static website. At Litehacks, we spent our time away from coding, instead refining the structure of the website (how everything would function in theory) as well as the aesthetics. It was an excellent brainstorming opportunity and guided our design. For nwHacks, being confident in our HTML/CSS/JS and design abilities, we decided to a) redesign everything from scratch based on our brainstorming at Litehacks and b) host the website publicly and make it interactive, meaning that users could change data and the public website would then retrieve that data. ## What it does We were able to host all of our website files on a public domain, nwhacks.savsidorov.com, with interactive functionality: a user can go to the website and create new items. ## How we built it Originally, the desired functionality was to have users access the domain, log in through some kind of authentication system, and be able to create topic pages and source pages, and rate a source's coverage of a given topic. In this way, we wanted to set up the bare minimum functionality needed for a bottom-up, crowdsourced sense-making system. We tried several different approaches over the 24 hours. The front end stayed the most consistent, using HTML, CSS, and JavaScript with React.js and Node.js. The back end was a lot more varied: we started off trying to use Standard Library, switched to a Django + MySQL combination, and finally settled on a quick Node.js implementation. ## Challenges we ran into The overarching challenge was communicating between the back end and the front end. We couldn't figure out a way to exchange data between a client and the server, due to limited knowledge and rapidly churning through different solutions. For the first eight hours we worked with Standard Library, before deciding that Django would be a more long-term solution, one with significantly more documentation and finer control. Most of the rest of the time was spent learning and implementing the Django framework. However, we were not able to implement everything on time, so we ended up switching to Node.js for a quick implementation. ## Accomplishments that we're proud of Although we haven't accomplished all that we had hoped for (not even close), being forced to learn things on the fly in this high-stakes environment was certainly a benefit. One of us familiarized himself further with the inner workings of React.js and Node.js for front-end development. The other went from zero knowledge of the Django framework to *some* knowledge of it, enough to get a basic website back end up and running (although not hosted publicly).
We feel that this newly gained knowledge will help us flesh out both the front and back ends of our project going forward, as well as generally benefit us in the future. ## What we learned As mentioned above, we learned (and further honed) important technical skills, namely Node.js, React.js, and Django. It's also fair to say that we got to practice communicating problems to each other. Over the many hours that we worked, keeping a clear mind and communicating exactly what works, what doesn't, and what the solution could be was crucial. Additionally, I personally learned that *way* more advance prep should have been done. We struggled a lot with the back end: the ability to store and retrieve data that users send. I'm confident that if I had learned the Django framework far in advance, we would have been able to get the back end set up much more easily and been more successful. I plan on thoroughly researching it in the weeks to come in order to finally achieve our milestones for core website functionality. ## What's next for Interactive AntiSpin More learning, and more doing. We're more aware of the scope of the task before us than we have ever been, and it is much greater than we had anticipated. However, we're more confident than ever that this idea for a crowdsourced news website shows promise. Participating in hackathons gave us a stellar start and taught us valuable skills, but most of the road still lies ahead. Despite the setbacks, we plan to keep working on the project in order to reach core functionality in the coming weeks and months, and introduce something that we are both proud of to the UBC community.
## Inspiration With multiple members of our team having been a part of environmental conservation initiatives, and even running some of our own, an issue we have continually recognized is the difficulty of reaching community members who share the same vision. Outside of a school setting, it's difficult to connect with initiatives and to find others interested in them, so we wanted to solve that issue by centralizing a space for these communities. ## What it does The demographic here is two-fold. Users interested in volunteering can log in and use their provided location to narrow down nearby events to a radius of their choosing (see the sketch below). This makes sorting through hundreds of events quick and easy, and provides a clear pathway to convert the desire to help into tangible change. Users interested in organizing their own events can create accounts and use a simple process to create an event with all its information and post it both to their own page's feed and to the main initiatives list that volunteers browse. With just a few clicks, an event can be made available to the many volunteers eager to make a difference. ## How we built it As this project is a website, and many of our team are beginners, we worked mostly with HTML, CSS, and JS. We also integrated Bootstrap to help with styling and formatting the pages to improve the user experience. ## Challenges we ran into As relative beginners, one challenge we ran into was working with JavaScript files across multiple HTML pages, and finding that parts of our functionality were only accessible using Node.js. To work around this, we focused on restructuring our website pages to ensure easier connections and on finding ways to make our code simpler and more comprehensible. ## Accomplishments that we're proud of We're proud of the community that we built with each other during this hackathon. We truly had so much passion for making this a working product, and loved our logo so much we even made stickers! On a technical level, as first-time users of JavaScript, we're particularly proud of our work connecting HTML input, using JavaScript for string handling, and creating new elements on the website. Being able to collect submitted initiatives into our database and display them with live updates was, for us, the most difficult technical work, but also by far the most rewarding. ## What we learned For our team as a whole, the biggest takeaway has been a strongly renewed interest in web development and the intricacies of connecting so many different aspects of functionality using JavaScript. ## What's next for BranchOut Moving forward, we're looking to integrate Node.js to supplement our implementation, and to increase connectivity between the different inputs available. We truly believe in our mission to promote nature conservation initiatives, and hope to expand this into an app to increase accessibility and improve the user experience.
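For illustration, a minimal Python sketch of the radius filter described above: keep only events whose haversine distance from the user falls within the chosen radius. The event data is illustrative.

```python
# Filter events to those within a user-chosen radius using the
# haversine great-circle distance.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby(events, user_lat, user_lon, radius_km):
    return [e for e in events
            if haversine_km(user_lat, user_lon, e["lat"], e["lon"]) <= radius_km]

events = [{"name": "Beach cleanup", "lat": 49.28, "lon": -123.12},
          {"name": "Tree planting", "lat": 49.94, "lon": -122.80}]
print(nearby(events, 49.26, -123.25, radius_km=15))
```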
## Inspiration Want to see how a product, service, person, or idea is doing in the court of public opinion? Market analysts are experts at collecting data from a large array of sources, but monitoring public happiness or approval is notoriously difficult. Usually, focus groups and extensive data collection are required before any estimates can be made, wasting both time and money. Why bother with all of this when the data you need can be easily mined from social media websites such as Twitter? By aggregating tweets, performing sentiment analysis, and visualizing the data, it is possible to observe trends in how happy the public is about any topic, providing a valuable tool for anybody who needs to monitor customer satisfaction or public perception. ## What it does Queries the Twitter Search API to return relevant tweets, sorted into time buckets (sketched below). Sentiment analysis is then used to categorize whether each tweet is positive or negative with regard to the search term. The collected data is visualized with graphs such as average sentiment over time, the ratio of positive to negative tweets, and other in-depth trend analyses. An NLP algorithm that clusters similar tweets was developed to return a representative summary of good and bad tweets. This can show what most people are happy or angry about and can provide insight on how to improve public reception. ## How we built it The application is split into a **Flask** back end and a **ReactJS** front end. The back end queries the Twitter API, parses and stores relevant information from the received tweets, and calculates any extra statistics that the front end requires. The back end then provides this information in a JSON object that the front end can access through a `get` request. The React front end presents all UI elements in components styled by [Material-UI](https://material-ui.com/). [React-Vis](https://uber.github.io/react-vis/) was utilized to compose charts and graphs that present our queried data in an efficient and visually appealing way. ## Challenges we ran into The Twitter API throttles querying to 1000 tweets per minute, far fewer than this project needs to provide meaningful analysis: after returning 1000 tweets, we would have to wait another minute before continuing to request more. With some keywords returning hundreds of thousands of tweets, this was a huge problem. In addition, extracting a representative summary of good and bad tweet topics was challenging, as features that represent contextual similarity between words are not very well defined. Finally, we found it difficult to design a user interface that displays the vast amount of data we collect in a clear, organized, and aesthetically pleasing manner. ## Accomplishments that we're proud of We're proud of how well we visualized our data: in the course of a weekend, we managed to collect and visualize a large sum of data in six different ways. We're also proud that we managed to implement the clustering algorithm. In addition, the application is fully functional with nothing manually mocked! ## What we learned We learnt about several different natural language processing techniques. We also learnt about the Flask REST framework and best practices for building a React web application.
## What's next for Twitalytics We plan on cleaning up some of the code that we rushed this weekend, implementing geolocation filtering and analysis, and investigating better clustering algorithms and big data techniques.
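For illustration, a minimal sketch of the time-bucketing step mentioned above: group scored tweets into hour-wide windows and average their sentiment per window. The sentiment scores stand in for the classifier's output.

```python
# Average sentiment per hour-wide window; the front end can then plot
# these (timestamp, average) pairs as sentiment over time.
from collections import defaultdict
from datetime import datetime

def bucket_sentiment(tweets):
    buckets = defaultdict(list)
    for t in tweets:
        ts = datetime.fromisoformat(t["time"])
        key = ts.replace(minute=0, second=0, microsecond=0)
        buckets[key].append(t["sentiment"])
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

tweets = [{"time": "2019-01-26T10:15:00", "sentiment": 0.8},
          {"time": "2019-01-26T10:45:00", "sentiment": -0.2},
          {"time": "2019-01-26T11:05:00", "sentiment": 0.4}]
print(bucket_sentiment(tweets))
```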
losing
TCP Network chat server. Server installation instructions: The components for the server are all in the chat.go file. Download Go version 1.9 from <https://golang.org/dl/>, set up the development environment, and compile the files with `go build`. Then run the resulting executable. Client installation instructions: cd into the electron-client directory and run `npm start` to start the client. --- This application is a network chat server designed for peer-to-peer communication over a wired or wireless network. The backend is developed in Go; the frontend application is built with Electron.js. A rendition of the server's broadcast pattern is sketched below.
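The actual server is written in Go (chat.go); for consistency with the other examples here, this is an illustrative Python rendition of the same broadcast pattern: one thread per client, every received line echoed to all other clients.

```python
# Minimal TCP broadcast chat server sketch: each client gets a thread;
# incoming lines are relayed to every other connected client.
import socket
import threading

clients = []
lock = threading.Lock()

def handle(conn):
    with conn:
        for line in conn.makefile():
            with lock:
                for other in clients:
                    if other is not conn:
                        other.sendall(line.encode())
    with lock:
        clients.remove(conn)

server = socket.create_server(("0.0.0.0", 9000))
while True:
    conn, _ = server.accept()
    with lock:
        clients.append(conn)
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```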
## Inspiration As fellow students currently attending UC Berkeley, we are often put into situations of potential danger when walking back from class late at night or attending a party. Students receive over 200 "WarnMe" emails each semester, signaling prevalent safety concerns. Considering that UC Berkeley ranks in the top 10% of universities nationally for safety concerns, it's no wonder that over 40% of students resort to carrying pepper spray, sirens, and other self-defense tools for peace of mind. Meanwhile, 60% of students hesitate to call friends late at night for company, fearing they might disturb them. ## What it does We've crafted a solution tailored to the unique needs of our UC Berkeley community: a platform where real-time matching connects students instantly, based not just on proximity or destination, but also on shared interests and mutual friends. This isn't just an app; it's a verified community. Every user undergoes mandatory student ID verification, ensuring that the connections made are genuine and exclusive to our campus. ## How we built it As a team of beginner-level hackers, we started off with the object-oriented knowledge we learned from UC Berkeley courses such as CS 61A. We wrote some skeleton Python code in Visual Studio Code and drew diagrams to find the relationships between the different objects in our backend. Using Convex's built-in database, we planned our design by constructing our database schema in TypeScript and our server functions in JavaScript. Thus, our server-side database queries would automatically cache and subscribe to data, powering a real-time useQuery hook in Convex's Python client. ## Challenges we ran into The complexity of our idea made it hard to incorporate full-stack development technologies in a short timeframe. Additionally, we had to learn new programming languages and techniques in order to implement our solution. ## Accomplishments that we're proud of We are most proud of our idea, which solves a problem that UC Berkeley students face every day. We are also proud of our proof of concept and our structure for future plans. Lastly, we are proud of the memories, connections, friends, and learning experiences we gained here at Cal Hacks. ## What we learned Within the realms of our own project, we learned how to integrate full-stack development technologies with backend database servers. We learned new programming languages and tools, including Convex, JavaScript, CockroachDB, Flutter, and Figma. Additionally, we learned more about the ideation process and how to collectively create a solution that aligns with the vision of a team. From Cal Hacks itself, we learned more about the sponsors and their company visions, while also gaining extensive knowledge through the seminars and workshops. ## What's next for WalkMe Integrating UC Berkeley's BearWalk with the "Warn Me" email system offers a promising avenue to significantly enhance student safety. Imagine: each time a "Warn Me" email is dispatched, the BearWalk app immediately extracts the location and nature of the warning. This data then color-codes the app's map, with areas of recent incidents highlighted in red, potential concerns in yellow, and safe or patrolled zones in green. This dynamic system not only visually informs students of high-risk areas but also suggests alternative, safer routes for their journey.
Furthermore, if a student happens to be near a high-risk zone and requests a BearWalk companion, the system can give their request priority, ensuring they receive assistance promptly. This integration ensures that the BearWalk companion is also well-informed of risk zones, guiding students through the safest possible routes.
## Inspiration We were working with 1and1's servers as a team for another app, and we realized it was a pain to collaborate on editing the server. ## What it does Hence, we decided to make a chatbot that lets us edit servers from messaging apps. ## How we built it We used Facebook's Messenger services to run our bot, and Python with 1and1's API for all the backend server work. ## Challenges we ran into Our challenges revolved around maneuvering Python 3, which many of our team members had little experience with. ## Accomplishments that we're proud of We are proud to have a completed chat bot. ## What we learned We learned how to use Python, APIs, and servers. ## What's next for ChatterWorks We plan to add more functionality to this wonderful product and increase our NLP abilities.
losing
## Inspiration During quarantine, a new show called Squid Game appeared on my Netflix screen. I clicked on it and was soon hooked. One of the games I remember was Red Light, Green Light. Because of this experience, I wanted to recreate this game somehow, but how? ## What it does My game, Red Light Green Light, is a Roblox game that lets users experience the famous South Korean game from Squid Game. Users have to stop on Red Light; if they move during Red Light, they die. On Green Light, users are free to move or stand still. ## How we built it I built this using Roblox Studio, whose design tool I used to create all the design elements in the game. To make the game come to life, I used Lua, which let me code all the game logic, like the Red Light and Green Light functions. ## Challenges we ran into Throughout coding Red Light Green Light, I faced several bugs. Many of them came from small errors, like miscapitalized and misspelled variables. However, I struggled to get the invisible wall to disappear when the game started, to make players die on Red Light, and to keep track of users' wins. ## Accomplishments that we're proud of I am most proud that I was able to finish this project, as it took a very long time to do. ## What we learned I learned a lot more about Lua and Roblox Studio's design tool, as I was unfamiliar with both. ## What's next for Red Light Green Light I want to add updates to this game, like item shops and more. I hope people around the world will experience my game and enjoy playing Red Light Green Light.
immerse yourself in a platformer with eye-gaze tracking and eye blinking detection with cute graphics ### inspiration celeste, sky, spiritfarer, and the constancy of the feeling that anywhere would be better than now ### what it does it's a game with a unique gimmick and a story and theme we really like! ### how we built it we used pygame, 2 eye-tracking apis we found online, and 24 hours of blood, sweat, and tears :) (a sketch of the gaze wiring follows below) ### challenges we ran into we ran into a lot of issues head-on just trying to install dependencies and find software that would both work and work together. afterwards, we all had to learn how to work in pygame, and it was challenging to build a solid framework that could support both game functions and our special eye-tracking feature. ### accomplishments that we're proud of we're proud of having the solid foundations for a functional game, as well as successful creative elements such as cutscene art, cute animations, and nostalgic (original!) acoustic music. we feel that we really conveyed the vision we had in mind! ### what we learned we learned that there's a lot that goes into designing a fully functioning game as a product. for instance, it took us quite a bit of time to figure out how to integrate the third-party software, and it was also difficult just to standardize the game across 4 different laptops on top of trying to make all our aesthetics fit together. there was much more time and thought put into making our game than simply having an enticing vision. ### what's next the first order of business is to finish a working prototype for the entire game we have planned, refactor all the code, then make the interface and game mechanics more and more polished. we would also like to transition from pygame to a more versatile game engine such as godot or unity. we'd also love to finish our story!
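a minimal sketch of how gaze input can be wired into a pygame loop, as mentioned above: map the gaze's horizontal position to player movement and treat a blink as a jump. get_gaze_x() and is_blinking() stand in for the eye-tracking apis (mouse input here) and are assumptions, not the project's real calls.

```python
# steer the player toward wherever they are looking; "jump" on blink.
import pygame

def get_gaze_x():       # placeholder for the eye tracker
    return pygame.mouse.get_pos()[0]

def is_blinking():      # placeholder for blink detection
    return pygame.mouse.get_pressed()[0]

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
player = pygame.Rect(320, 400, 24, 24)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # move a fraction of the way toward the gaze point each frame
    player.x += (get_gaze_x() - player.centerx) // 10
    if is_blinking():
        player.y -= 8   # no gravity in this sketch
    screen.fill((30, 30, 40))
    pygame.draw.rect(screen, (255, 200, 220), player)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```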
## Inspiration * COVID-19 is impacting all musicians, from students to educators to professionals everywhere * Performing physically together is not viable due to health risks * School orchestras still need a way to perform to keep students interested and motivated * A lot of effort is required to put together separate recordings virtually. Some ensembles don't have the time or resources to do it ## What it does Ludwig is a direct response to the rise of remote learning. Our online platform optimizes virtual interaction between music educators and students by streamlining the process of creating music virtually. Educators can create new assignments and send them out with a description and sheet music to students. Students can then access the assignment, download the sheet music, and upload their recordings of the piece separately. Given the tempo and sampling specificity, Ludwig detects the musical start point of each recording, then syncs and merges them into one combined WAV file. ## Challenges we ran into Our synchronization software is not perfect. On the software side, we have to balance the tradeoff between sampling specificity and delivery speed, so we sacrifice pinpoint synchronization to make sure our users don't get bored while using the platform. On the human side, without the presence of the rest of the ensemble, it is easy for students to play at an inconsistent tempo, play out of tune, or make other mistakes. These sorts of mistakes are hard for any software to adapt to. ## What's next for Ludwig We aim to improve Ludwig's syncing algorithm by adjusting tuning and paying more attention to tempo. We will also refine and expand Ludwig's platform to allow teachers to have different classes with different sets of students.
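For illustration, a minimal Python sketch of the sync-and-merge idea described in the Ludwig entry above: find each recording's musical start as the first sample above a loudness threshold, trim to that point, and average the aligned tracks. Real WAV handling (stereo, sample widths) is simplified away, and the file names are illustrative.

```python
# Detect each track's onset by amplitude threshold, align, and mix.
import numpy as np
from scipy.io import wavfile

def onset(samples, threshold=0.05):
    level = np.abs(samples.astype(np.float64))
    level /= level.max() or 1.0
    return int(np.argmax(level > threshold))  # first loud sample

def merge(paths, out="combined.wav"):
    rate, tracks = None, []
    for p in paths:
        rate, data = wavfile.read(p)       # assumes mono int16 WAVs
        data = data.astype(np.float64)
        tracks.append(data[onset(data):])  # trim silence before onset
    n = min(len(t) for t in tracks)
    mix = sum(t[:n] for t in tracks) / len(tracks)
    wavfile.write(out, rate, mix.astype(np.int16))

merge(["violin1.wav", "violin2.wav", "cello.wav"])
```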
losing
## Inspiration As high school seniors on the cusp of a new academic journey, we recognized the looming academic expenses associated with attending college. To simplify navigating the maze of college finances, we created **'brokemenot.us'**, a website for college students to explore the financial space. We envisioned this tool to alleviate financial pressure by providing budgeting tools, financial literacy courses and blogs, bank account management, and a student loan management and acquisition system. ## What it does **'brokemenot.us'** is a web application that gives college students a simple way to manage their finances. The application allows students to connect their Capital One account, or create a new one entirely, through the Nessie API (a sketch of one such call follows below). Through the account information available, students can view their balance, transactions, and bills. This, along with our budgeting and financial management system, lets students easily track their expenditures and see whether they stay within their determined budget. Additionally, the app includes coursework in the form of articles and blogs to increase the student's financial literacy. Finally, there is a student loan finder based on the student's financial information; while Capital One accounts have a built-in loan option, students also get access to other financial aid methods. ## How we built it For the frontend, we used Taipy, a framework which allowed us to build the website in Python. The framework allowed us to build a very elegant and user-friendly interface, and thanks to it, we built the backend in Python too. This let us incorporate the powerful APIs of Twilio and Capital One! We embedded products from the Google Suite to enhance the user experience, and used a GoDaddy domain to make our site easier to access across the globe. ## Challenges we ran into We had a great deal of difficulty getting accustomed to the Taipy web framework. Taipy allows for robust full-stack web development solely in Python, simplifying the development process; however, being used to traditional development with HTML, CSS, JS, and frameworks such as React, we found it difficult to adopt this new style of working. First, we had to decide between Taipy's own Markdown and HTML for the UI. Being experienced with HTML, we went with that, but we quickly found that the documentation didn't cover everything we needed. Furthermore, with Taipy being a framework on the rise, there wasn't a large community of developers to turn to with questions. After much effort, though, we were able to use the framework effectively, and we thought it was a great way to have all of our code in one organized place. We are proud of ourselves for learning a new skill, and we see ourselves using Taipy for future endeavors. ## Accomplishments that we're proud of We are really proud of what we accomplished during this hackathon. Firstly, we managed to utilize Taipy, a new and unique framework that none of us had ever heard of before. We also incorporated many additional features, such as the Capital One and Twilio APIs! While we have a lot of work to do to perfect our project, we are proud of what we completed within the given timeframe. And most importantly, we enjoyed every minute of our time on the UPenn campus.
## What we learned Over the past 36 hours, we used several technologies that we had not used before: Twilio, the Capital One Nessie API, Taipy, and GoDaddy. To use these new technologies, we had to learn new skills, navigating each one's uses and features solely through its documentation to produce a complex application. ## What's next for BrokeMeNot Our goal is to make the site even more user-friendly for future college-going students. We would like to move away from the embedded Google Suite products to make the site more convenient and interactive. One of our goals going in was to incorporate an AI chatbot on the website to help guide students searching for resources or options. Additionally, we wish to include AI-generated personal recommendations based on the user's financial situation. While we did not have enough time to implement these, we would like to in the future.
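A hedged sketch of pulling account balances through the Nessie API, as referenced above. The endpoint shape follows Nessie's public docs, but CUSTOMER_ID and API_KEY are placeholders, and the field names should be checked against the live responses.

```python
# Fetch a customer's accounts from Capital One's Nessie sandbox API
# and map each account nickname to its balance.
import requests

BASE = "http://api.nessieisreal.com"

def account_balances(customer_id, api_key):
    url = f"{BASE}/customers/{customer_id}/accounts"
    accounts = requests.get(url, params={"key": api_key}, timeout=10).json()
    return {a["nickname"]: a["balance"] for a in accounts}

print(account_balances("CUSTOMER_ID", "API_KEY"))
```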
## Inspiration Students are often put into a position where they have neither the time nor the experience to effectively budget their finances. This unfortunately leads to many students falling into debt and having a difficult time keeping up with their finances. That's where wiSpend comes to the rescue! Our objective is to help students make healthy financial choices and be aware of their spending behaviours. ## What it does wiSpend is an Android application that analyses students' financial transactions and creates a predictive model of their spending patterns. Our application requires no effort from the user to input their own information, as all bank transaction data is synced in real-time to the application. Our financial analytics allow us to create effective budget plans tailored to each user and to provide financial advice that helps students stay on budget. ## How I built it wiSpend is built as an Android application that makes REST requests to our hosted Flask server. This server periodically makes requests to the Plaid API to obtain financial information and processes the data. The Plaid API gives us access to major financial institutions' users' banking data, including transactions, balances, assets and liabilities, and much more. We focused on analysing credit and debit transaction data, and applied statistical techniques to identify trends in it (one simple example is sketched below). Based on the analysed results, the server determines what financial advice to send the user as a notification at any given point in time. ## Challenges I ran into Integration, and creating our data processing algorithm. ## Accomplishments that I'm proud of This was the first time we as a group successfully brought together all our individual work on a project and integrated it! This is a huge accomplishment for us, as integration is usually what blocks a hackathon project from succeeding. ## What I learned Interfacing the Android app and the web server was a huge challenge, but it allowed us as developers to find clever solutions to the roadblocks we encountered and thereby develop our own skills. ## What's next for wiSpend Our first new feature would be a sophisticated budgeting system to assist users in their budgeting needs. We also plan on creating a mobile UI that provides even more insights to users in the form of charts, graphs, and infographics, as well as further developing our web platform to create a seamless experience across devices.
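For illustration, one simple "spending pattern" statistic a server like this could compute from synced transactions: a rolling average of daily spend, used to flag days that run well over trend. The 1.5x trigger is an assumption for illustration, not wiSpend's actual rule.

```python
# Flag days whose spend exceeds 1.5x the trailing 7-day average.
def rolling_average(values, window=7):
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def overspend_days(daily_spend, factor=1.5):
    trend = rolling_average(daily_spend)
    return [i for i, (spend, avg) in enumerate(zip(daily_spend, trend))
            if spend > factor * avg]

spend = [12.0, 9.5, 40.0, 11.0, 8.0, 95.0, 10.0]
print(overspend_days(spend))  # [2, 5]: days well over their trend
```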
## Inspiration We wanted to get home safe ## What it does Stride pairs you with walkers just like UBC SafeWalk, but outside of campus grounds, to get you home safe! ## How we built it React Native, Express JS, MongoDB ## Challenges we ran into Getting environment setups working ## Accomplishments that we're proud of Finishing the app ## What we learned Mobile development ## What's next for Stride Improve the app
partial
## Inspiration Retinal degeneration affects 1 in 3000 people, slowly robbing them of vision over the course of their mid-life. The need to adjust to life without vision, often after decades of relying on it for daily life, presents a unique challenge to individuals facing genetic disease or ocular injury, one which our teammate saw firsthand in his family, and which inspired our group to work on a modular, affordable solution. Current technologies which provide similar proximity awareness often cost many thousands of dollars and require a niche replacement in the user's environment (shoes with active proximity sensing similar to our system often cost $3-4k for a single pair). Instead, our group has worked to create a versatile module which can be attached to any shoe, walker, or wheelchair, to provide situational awareness to the thousands of people adjusting to their loss of vision. ## What it does (Higher-quality demo on Google Drive: <https://drive.google.com/file/d/1o2mxJXDgxnnhsT8eL4pCnbk_yFVVWiNM/view?usp=share_link>) The module is constantly pinging its surroundings through a combination of IR and ultrasonic sensors. These are readily visible on the prototype, with the ultrasound device looking forward and the IR sensor looking to the outward flank. These readings are referenced, alongside measurements from an Inertial Measurement Unit (IMU), to tell when the user is nearing an obstacle. The combination of sensors allows detection of a wide gamut of materials, including those of room walls, furniture, and people. The device is powered by a 7.4V LiPo pack, which exposes a charging port on the front of the module. The device has a three-hour battery life, but with more compact PCB-based electronics it could easily be doubled. While the primary use case is envisioned to be clipped onto the top surface of a shoe, the device, roughly the size of a wallet, can be attached to a wide range of mobility devices. The internal logic uses IMU data to determine when the shoe is at the bottom of a step 'cycle' and touching the ground (a sketch of this logic follows below). The Arduino Nano MCU polls the IMU's gyroscope to check that the shoe's angular speed is close to zero and that the module is not accelerating significantly. After the MCU has established that the shoe is on the ground, it compares ultrasonic and IR proximity sensor readings to see if an obstacle is within a configurable range (in our case, 75cm front, 10cm side). If the shoe detects an obstacle, it activates a pager motor which vibrates the wearer's shoe (or other device). The pager motor will continue vibrating until the wearer takes a step which encounters no obstacles, thus acting as a toggle flip-flop. An RGB LED is added for our debugging of the prototype: RED - Shoe is moving - In the middle of a step GREEN - Shoe is at bottom of step and sees an obstacle BLUE - Shoe is at bottom of step and sees no obstacles While our group's concept is to package these electronics into a sleek, clip-on plastic case, for now the electronics have simply been folded into a wearable form factor for demonstration. ## How we built it Our group used an Arduino Nano, batteries, voltage regulators, and proximity sensors from the venue, and supplied our own IMU, Kapton tape, and zip ties. (yay zip ties!) I2C code for basic communication and calibration was taken from the IMU sensor's user guide. Code used for logic, sensor polling, and all other functions of the shoe was custom. All electronics were custom.
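The promised sketch of the detection logic, written in Python for illustration (the real firmware is Arduino C++); sensor values are passed in rather than read from hardware, and the exact epsilon values are assumptions.

```python
# Mirror of the decision flow above: only evaluate the proximity
# sensors when the IMU says the shoe is planted, and latch the pager
# motor until a clear step resets it (toggle flip-flop behavior).
GYRO_EPS = 0.1      # rad/s: "angular speed close to zero"
ACCEL_EPS = 0.3     # g: "not accelerating significantly" (1 g at rest)
FRONT_CM = 75
SIDE_CM = 10

def shoe_is_planted(gyro, accel):
    return abs(gyro) < GYRO_EPS and abs(accel - 1.0) < ACCEL_EPS

def update(gyro, accel, ultrasonic_cm, ir_cm, motor_on):
    """Return the new pager-motor state."""
    if not shoe_is_planted(gyro, accel):
        return motor_on                      # mid-step: hold state
    obstacle = ultrasonic_cm < FRONT_CM or ir_cm < SIDE_CM
    return obstacle                          # vibrate until a clear step

# One planted step with a wall 50 cm ahead turns the motor on:
print(update(gyro=0.02, accel=1.05, ultrasonic_cm=50, ir_cm=30, motor_on=False))
```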
Testing was done on the circuits by first assembling the Arduino Microcontroller Unit (MCU) and sensors on a breadboard, powered by a laptop. We used this setup to test our code and fine-tune our sensors, so that the module would behave how we wanted. We tested and wrote the code for the ultrasonic sensor, the IR sensor, and the gyro separately, before integrating them as a system. Next, we assembled a second breadboard with LiPo cells and a 5V regulator. The two 3.7V cells are wired in series to produce a single 7.4V 2S battery, which is then regulated back down to 5V by an LM7805 regulator chip. One by one, we switched all the MCU/sensor components off of laptop power and onto our power supply unit. Unfortunately, this took a few tries and resulted in a lot of debugging. After a circuit was finalized, we moved all of the breadboard circuitry to harnessing only, then folded the harnessing and PCB components into a wearable shape for the user. ## Challenges we ran into The largest challenge we ran into was designing the power supply circuitry, as the combined load of the sensor DAQ package exceeds the amp limits on the MCU. This took a few tries (and smoked components) to get right. The rest of the build went fairly smoothly, with the other main pain points being the calibration and stabilization of the IMU readings (this simply necessitated more trials) and the complex folding of the harnessing, which took many hours to arrange into its final shape. ## Accomplishments that we're proud of We're proud to have found a good solution that balances the sensitivity of the sensors. We're also proud of integrating all the parts together, supplying them with appropriate power, and assembling the final product as small as possible, all in one day. ## What we learned Power was the largest challenge, both in terms of the electrical engineering and the product design: ensuring that enough power can be supplied for long enough while not compromising the wearability of the product, as it is designed to be a versatile solution for many different shoes. Currently the design has a 3-hour battery life and is easily rechargeable through a pair of front ports. The challenges with the power system really taught us firsthand how picking the right power source for a product can determine its usability. We were also forced to consider hard questions about our product, such as whether there was really a need for such a solution, and what kind of form factor would be needed for a real impact to be made. Likely the biggest thing we learned from our hackathon project was the importance of the end user, and of the impact that engineering decisions have on the daily life of the people who use your solution. For example, one of our primary goals was making our solution modular and affordable. Solutions in this space already exist, but their high price and uni-functional design mean that they are unable to have the impact they could. Our modular design hopes to allow for greater flexibility, acting as a more general tool for situational awareness. ## What's next for Smart Shoe Module Our original idea was to use a combination of miniaturized LiDAR and ultrasound, so our next steps would likely involve the integration of these higher-quality sensors, as well as a switch to custom PCBs, allowing for a much more compact sensing package which could better fit into the sleek, usable clip-on design our group envisions.
Additional features might include the use of different vibration modes to signal directional obstacles and paths, and indeed expanding our group's concept of modular assistive devices to other solution types. We would also look forward to making a more professional demo video. A current example clip of the prototype module taking measurements: (<https://youtube.com/shorts/ECUF5daD5pU?feature=share>)
## Inspiration 1.3 billion People have some sort of vision impairment. They face difficulties in simple day to day task like reading, recognizing faces, objects, etc. Despite the huge number surprisingly there are only a few devices in the market to aid them, which can be hard on the pocket (5000$ - 10000$!!). These devices essentially just magnify the images and only help those with mild to moderate impairment. There is no product in circulation for those who are completely blind. ## What it does The Third Eye brings a plethora of features at just 5% the cost. We set our minds to come up with a device that provides much more than just a sense of sight and most importantly is affordable to all. We see this product as an edge cutting technology for futuristic development of Assistive technologies. ## Feature List **Guidance** - ***Uses haptic feedback to navigate the user to the room they choose avoiding all obstacles***. In fact, it's a soothing robotic head massage guiding you through the obstacles around you. Believe me, you're going to love it. **Home Automation** - ***Provides full control over all house appliances***. With our device, you can just call out **Alexa** or even using the mobile app and tell it to switch off those appliances directly from the bed now. **Face Recognition** - Recognize friends (even their emotions (P.S: Thanks to **Cloud Vision's** accurate facial recognitions!). Found someone new? Don't worry, on your command, we register their face in our database to ensure the next meeting is no more anonymous and awkward! **Event Description** - ***Describes the activity taking place***. A group of people waving somewhere and you still not sure what's going on next to you? Fear not, we have made this device very much alive as this specific feature gives speech feedback describing the scenic beauty around you. [Thanks to **Microsoft Azure API**] **Read Up** - You don't need to spend some extra bucks for blind based products like braille devices. Whether it be general printed text or a handwritten note. With the help of **Google Cloud Vision**, we got you covered from both ends. **Read up** not only decodes the text from the image but using **Google text to speech**, we also convert the decoded data into a speech so that the blind person won't face any difficulty reading any kind of books or notes they want. **Object Locator** - Okay, so whether we are blind or not, we all have this bad habit of misplacing things. Even with the two eyes, sometimes it's too much pain to find the misplaced things in our rooms. And so, we have added the feature of locating most generic objects within the camera frame with its approximate location. You can either ask for a specific object which you're looking for or just get the feedback of all the objects **Google Cloud Vision** has found for you. **Text-a-Friend** - In the world full of virtuality and social media, we can be pushed back if we don't have access to the fully connected online world. Typing could be difficult at times if you have vision issues and so using **Twilio API** now you can easily send text messages to saved contacts. **SOS** - Okay, so I am in an emergency, but I can't find and trigger the SOS feature!? Again, thanks to the **Twilio** messaging and phone call services, with the help of our image and sensor data, now any blind person can ***Quickly intimate the authorities of the emergency along with their GPS location***. 
(This includes auto-detection of hazards too.)

**EZ Shoppe** - It's not an easy job for a blind person to access ATMs or perform monetary transactions independently. Taking this into consideration, with the help of the superbly designed **Capital One Hackathon API**, we created a **server-based blockchain** transaction system that adds ease to your shopping without you having to worry about anything. Currently, the server-integrated module supports **customer addition, account addition, person-to-person transactions, merchant transactions, balance check and info, withdrawals, and secure payment to vendors**. No need to worry about individual items: with one QR scan, your entire shopping list is generated along with the vendor information and the total billing amount.

**What's up Doc** - Monitoring heart pulse rate and using online datasets, we devised a machine learning algorithm that classifies the person's health into labels. These labels include: "Athletic", "Excellent", "Good", "Above Average", "Average", "Below Average" and "Poor". The function takes age, heart rate, and gender as arguments and performs the computation to give you the best estimate of your current heart-rate condition.

\*All features above can be triggered from the phone via voice, an Alexa Echo Dot, or the wearable itself.

\*\*Output information is relayed via headphones and Alexa.

## How we built it

Retrofit devices (NodeMCU) fit behind switchboards and allow them to be controlled remotely. The **RSSI guidance uses Wi-Fi signal intensity** to triangulate the device's position. An ultrasonic sensor and camera detect obstacles (**OpenCV**) and drive the left and right haptic motors according to the obstacle's closeness and position. We used the **dlib computer vision library** to record and extract features for **facial recognition**. **Microsoft Azure Cloud services** take a series of images to describe the activity taking place. We used **Optical Character Recognition (Google Cloud)** for text-to-speech output. We used **Google Cloud Vision**, which classifies and locates objects. The **Twilio API** sends an alert, using GPS from the phone, when a hazard is detected by the **Google Cloud Vision API**. A QR scanner scans the QR code and uses the **Capital One API** to make secure and fast transactions on a **blockchain network**. Pulse sensor data is sent to the server, where it is analysed using ML models from **AWS SageMaker** to make the health predictions.

## Challenges we ran into

Making the individual modules was relatively easy, but integrating them all into one piece of hardware (a Raspberry Pi) and getting them to work together was a real challenge.

## Accomplishments that we're proud of

The number of features we successfully integrated at the prototype level.

## What we learned

We learned to trust ourselves and our teammates, and that when we do, there's nothing we can't accomplish.

## What's next for The Third Eye

Adding a personal assistant to up the game, and so much more. Every person has potential they deserve to unleash; we pledge to level the playing field by taking this initiative forward and strongly urge you to help us in this undertaking.
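As an illustration of the Read Up pipeline described above, here is a minimal sketch, assuming Google Cloud credentials are configured and using the gTTS package as a stand-in for whichever text-to-speech service is wired up on the device; file names are placeholders:

```
# Sketch of the "Read Up" flow: OCR an image with Google Cloud Vision,
# then speak the decoded text aloud. File names are placeholders.
from google.cloud import vision
from gtts import gTTS

def read_up(image_path: str, audio_path: str = "readout.mp3") -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # text_detection handles printed text; handwriting is best-effort
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)

    text = response.full_text_annotation.text.strip()
    if text:
        gTTS(text=text).save(audio_path)  # play this file through the headphones
    return text

if __name__ == "__main__":
    print(read_up("note.jpg"))
```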
## Inspiration

We realized how visually impaired people find it difficult to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!

## What it does

This is an IoT device designed to be wearable, or attachable to any visual aid already in use. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" the Assistant can do. The Assistant provides voice directions (which pair easily with Bluetooth devices), while the sensors help in avoiding obstacles, increasing self-awareness. Another beta feature identifies moving obstacles and plays matching sounds so the person can recognize them (e.g. barking sounds for a dog).

## How we built it

It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone.

## Challenges we ran into

It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially coming from outside engineering, with two members being high school students. Multi-threading in an embedded architecture was also a challenge for us.

## Accomplishments that we're proud of

After hours of grinding, we were able to get the Raspberry Pi working, as well as implement depth perception, location tracking using Google Assistant, and object recognition.

## What we learned

Working with hardware is tough: even though you can see what is happening, it is hard to interface software and hardware.

## What's next for i4Noi

We want to explore more ways i4Noi can make things accessible for blind people. Since we already have Google Cloud integration, we could add another feature where we play sounds for living obstacles so special care can be taken; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make the device as wearable as possible, so it can make a difference in people's lives.
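The sensor side of a device like this is simple to prototype. A minimal sketch, assuming an HC-SR04 ultrasonic sensor and an active buzzer; the GPIO pin numbers are hypothetical:

```
# Proximity alert loop: measure distance with an HC-SR04 and sound a buzzer
# when an obstacle is close. Pin assignments are hypothetical.
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm() -> float:
    GPIO.output(TRIG, True)
    time.sleep(0.00001)  # 10 us trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2  # speed of sound, round trip

try:
    while True:
        GPIO.output(BUZZER, distance_cm() < 100)  # beep inside 1 m
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```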
partial
## Inspiration

“**Social media sucks these days.**” These were the first few words we heard from one of the speakers at the opening ceremony, and they struck a chord with us. I’ve never genuinely felt good while being on my phone, and like many others I started viewing social media as nothing more than a source of distraction from my real life and the things I really cared about. In December 2019, I deleted my accounts on Facebook, Instagram, Snapchat, and WhatsApp. For the first few months, I honestly felt great. I got work done, focused on my small but valuable social circle, and didn’t spend hours on my phone.

But one year into my social media detox, I realized that **something substantial was still missing.** I had personal goals, routines, and daily checklists of what I did and what I needed to do, but I wasn’t talking about them. By not having social media I bypassed superficial and addictive content, but I was also entirely disconnected from my network of friends and acquaintances. Almost no one knew what I was up to, and I didn’t know what anyone was up to either. A part of me longed for a level of social interaction more sophisticated than Gmail, but I didn’t want to go back to the forms of social media I had escaped from.

One of the key aspects of being human is **personal growth and development**: having a set of values and living them out consistently. Especially in the age of excess content and the disorder of its partly consumed debris, more people are craving a sense of **routine, orientation, and purpose** in their lives. But it’s undeniable that **humans are social animals**; we also crave **social interaction, entertainment, and being up-to-date with new trends.**

Our team’s problem with current social media is its attention-based reward system. Most platforms reward users based on numeric measures of attention, such as likes, comments and followers. Because of this reward system, people are inclined to create more appealing, artificial, and addictive content. This has led to some of the things we hate about social media today: **addictive and superficial content, and the scarcity of genuine interactions with people in the network.**

This leads to a **backward-looking user experience** in social media. The person in the 1080x1080 square post is an ephemeral and limited representation of who the person really is. Once the ‘post’ button has been pressed, the post immediately becomes an invitation for users to trap themselves in the past, feeling dopamine boosts from likes and comments that have been designed to make them addicted to the platform and waste more time, ultimately **distorting users’ perception of themselves, and discouraging their personal growth outside of social media.**

In essence, we define the question of reinventing social media as the following:

*“How can social media align personal growth and development with meaningful content and genuine interaction among users?”*

**Our answer is High Resolution: a social media platform that orients people’s lives toward an overarching purpose and connects them with like-minded, goal-oriented people.** The platform seeks to do the following:

**1. Motivate users to visualize and consistently achieve healthy resolutions for personal growth**

**2. Promote genuine social interaction through the pursuit of shared interests and values**
**3. Allow users to see themselves and others for who they really are and want to be, through natural, progress-inspired content**

## What it does

The following are the functionalities of High Resolution (so far!), after log-in or sign-up:

**1. Create Resolution**

* Name your resolution, whether it be Learning Advanced Korean or Spending More Time with Family.
* Set an end date for the resolution (e.g. December 31, 2022).
* Set the intervals you want to commit to this goal for (Daily / Weekly / Monthly).

**2. Profile Page**

* Ongoing Resolutions
  + Ongoing resolutions and level of progress
  + Clicking on a resolution opens up the timeline of that resolution, containing all relevant posts and intervals
  + Option to create a new resolution, or ‘Discover’ resolutions
* ‘Discover’ Page
  + Explore other users’ resolutions that you may be interested in
  + Clicking on a resolution opens up the timeline of that resolution, allowing you to view the user’s past posts and progress for that particular resolution and be inspired and motivated!
  + Clicking on a user takes you to that person’s profile
* Past Resolutions
  + Past resolutions and level of completion
  + Resolutions can either be fully completed or partly completed
  + Clicking on a past resolution opens up the timeline of that resolution, containing all relevant posts and intervals

**3. Search Bar**

* Search for and navigate to other users’ profiles!

**4. Sentiment Analysis based on IBM Watson, to warn against highly negative or destructive content**

* There are two functions for analyzing textual data on the platform: one analyzes the overall positivity/negativity of the text, and the other analyzes the amount of joy, sadness, anger and disgust it expresses.
* When the user tries to create a resolution that seems to be triggered by negativity, sadness, fear or anger, we show them a gentle alert that this may not be best for them, and ask if they would like to receive some support.
* In the future, we can further implement this feature to do the same for comments on posts.
* This particular functionality has been demoed in the video, during the new resolution creation.
* **There are two purposes for this functionality:**
  + a) We want all our members to feel that they are in a safe space, and while they are free to express themselves freely, we also want to make sure that their verbal actions do not pose a threat to themselves or to others.
  + b) Current social media has shown itself to be a propagator of hate speech leading to violent attacks in real life. One prime example is the Easter attacks that took place in Sri Lanka exactly a year ago: <https://www.bbc.com/news/technology-48022530>. If social media had a mechanism to prevent such speech from becoming rampant, the possibility of such incidents occurring could have been reduced.
  + Our aim is not to police speech, but rather to make people more aware of the impact of their words, and in doing so also try to provide resources or guidance to help people with the emotional stress they might be feeling on a day-to-day basis. We believe that education at the grassroots level through social media will have an impact on elevating the overall wellbeing of society.

## How we built it

Our tech stack primarily consisted of React (with Material UI), Firebase and IBM Watson APIs. For the purpose of this project, we opted to use the full functionality of Firebase to handle the vast majority of functionality that would typically be done on a classic backend service built with Node.js, etc.
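The Watson-based screening described in feature 4 boils down to one API call. A minimal sketch, assuming the ibm-watson Python SDK; the credentials, service URL, and alert thresholds are placeholders:

```
# Sketch of the resolution-screening check: flag text that reads as strongly
# negative, sad, angry, or fearful before it is posted. Credentials are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, SentimentOptions, EmotionOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2020-08-01",
    authenticator=IAMAuthenticator("YOUR_API_KEY"))
nlu.set_service_url("YOUR_SERVICE_URL")

def needs_support_prompt(text: str) -> bool:
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions(),
                          emotion=EmotionOptions()),
        language="en").get_result()
    sentiment = result["sentiment"]["document"]["score"]  # -1 .. 1
    emotion = result["emotion"]["document"]["emotion"]    # joy, sadness, ...
    return sentiment < -0.5 or max(emotion["sadness"],
                                   emotion["anger"],
                                   emotion["fear"]) > 0.6

print(needs_support_prompt("I hate myself for failing again"))
```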
We also used Figma to prototype the platform, while IBM Watson was used for its natural language toolkits, in order to evaluate sentiment and emotion.

## Challenges we ran into

A bulk of the challenges we encountered had to do with React Hooks. A lot of us were only familiar with an older version of React that opted for class components instead of functional components, so getting used to Hooks took a bit of time. Another issue that arose was pulling data from our Firebase datastore. Again, this was a result of a lack of experience with serverless architecture, but we were able to pull through in the end.

## Accomplishments that we're proud of

We’re really happy that we were able to implement most of the functionality that we set out to when we first envisioned this idea. We admit that we might have bitten off more than we could chew in setting out to recreate an entire social platform in a short amount of time, but we believe that the proof of concept is demonstrated through our demo.

## What we learned

Through research and long contemplation on social media, we learned a lot about the shortcomings of modern social media platforms: for instance, how they facilitate unhealthy addictive mechanisms that limit personal growth and genuine social connection, and how they have failed in various cases of social tragedy and hate speech. With that in mind, we set out to build a platform that could be at the forefront of a new form of social media. From a technical standpoint, we learned a ton about how Firebase works, and we were quite amazed at how well we were able to work with it without a traditional backend.

## What's next for High Resolution

One of the first things we’d like to implement next is the ‘Group Resolution’ functionality. As of now, users browse through the platform and connect with like-minded people pursuing similarly themed interests. We think it would be interesting to allow users to create and pursue group resolutions with other users, to form more closely knitted and supportive communities of people who are actively communicating and working towards the same resolution.

We would also like to develop a sophisticated algorithm to tailor each user’s ‘Discover’ page, so that the shown content is relevant to their past resolutions. For instance, if a user has completed goals such as ‘Wake Up at 5:00AM’ and ‘Eat Breakfast Every Day’, we would recommend resolutions like ‘Morning Jog’ on the Discover page. By recommending content and resolutions based on past successful resolutions, we would motivate users to move on to the next step. In the case that a certain resolution was recommended because a user failed to complete a past resolution, we would be able to motivate them to pursue similar resolutions based on the direction we think the user wants to head in.

We also think that High Resolution could potentially become a platform for recruiters to spot dedicated and hardworking talent, through the visualization of users’ motivation, consistency, and progress. Recruiters may also be able to use the platform to communicate with users and host online workshops or events.
With more classes and educational content transitioning online, we think the platform could serve as a host for online lessons and bootcamps for users interested in various topics such as coding, music, gaming, art, and languages, as we envision our platform being highly compatible with existing online educational platforms such as Udemy, Leetcode, Khan Academy, Duolingo, etc.

The overarching theme of High Resolution is **motivation, consistency, and growth.** We believe that having a user base that adheres passionately to these themes will open doors to new opportunities and to both individual and collective growth.
OUR VIDEO IS IN THE COMMENTS!! THANKS FOR UNDERSTANDING (WIFI ISSUES)

## Inspiration

As a group of four students who had completed 4 months of online school and were heading into our second internship, and our first fully remote one, we were all nervous about how our internships would transition to remote work. When reminiscing about the pain points we faced in the transition to an online work term this past March, the one we all agreed on was loneliness and a lack of connectivity. Trying to work alone in one's bedroom, after experiencing life in the office where colleagues were a shoulder's tap away for questions and you could hear the keyboards clacking around people zoned into their work, is extremely challenging and demotivating. This decreases happiness and energy, and thus productivity (which decreases energy, and so on...). Having a mentor and steady communication with our teams is something we all valued immensely during our first co-ops.

In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-coop lunches or walk breaks with company walking groups. These activities and rituals bring structure to a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we found that some days we'd work through lunch without even realizing it, and some days we would be endlessly scrolling through Reddit, as there was no one there to check in on us and make sure we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently enforce proper breaks. We took these gestures for granted in person, but now they seemed like a luxury, almost impossible to attain.

After doing research, we noticed that we were not alone: a 2019 Buffer survey asked users to rank their biggest struggles working remotely, and unplugging after work and loneliness were the most common (22% and 19% respectively): <https://buffer.com/state-of-remote-work-2019>

We set out to create an application that would facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolating. We were also inspired by our own recent online term, finding that we had been motivated when our friends held us accountable through tools like shared Google Calendars and Notion workspaces.

Among the challenges we wanted to enter for the hackathon, the 'RBC: Most Innovative Solution' challenge, which asks teams to address a pain point of working remotely in an innovative way, captured the issue we were trying to solve perfectly. We therefore decided to develop aibo, a centralized application that helps those working remotely stay connected and accountable and maintain relationships with their co-workers, all of which improves a worker's mental health (which in turn has a direct positive effect on their productivity).

## What it does

Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that allow genuine connections in the workplace and help to motivate employees.
First and foremost, aibo uses a matching algorithm to match compatible employees together, focusing on career goals, interests, roles, and time spent at the company, following the completion of a quick survey. These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team), to ensure that employees receive a wide range of experiences in this process.

Once you have been matched with a partner, you are assigned weekly meet-ups with your partner to build that connection. Using aibo, you can video call with your partner and create a shared to-do list; by developing this list together, you can bond over common tasks despite having seemingly very different roles. Partners would have two meetings a day: one in the morning, where they go over to-do lists and goals for the day, and one in the evening, to track progress over the course of that day and flag tasks that need to be carried over to the following day.

## How We built it

This application was built with React, JavaScript and HTML/CSS on the front-end, along with Node.js and Express on the back-end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room. In total, we used **4 APIs/tools** for our project:

* Twilio chat room API
* Autocode API
* Slack API for the Slack bots
* Microsoft Azure to work on the machine learning algorithm

When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns, looking for a set number of clusters within the data set. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline. The dataset marked each user's role and preferences and created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back-end, that is something we are looking forward to implementing in the future.

Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes. After working with Azure for a couple of hours, we decided to pivot towards another clustering algorithm to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we chose it for clustering the form responses. Some pros of hierarchical clustering include:

1. We do not need to specify the number of clusters required for the algorithm; the algorithm determines this for us, which is useful as it automates sorting through the data to find similarities in the answers.
2. Hierarchical clustering was quite easy to implement in a Spyder notebook.
3. The dendrogram produced was very intuitive and helped me understand the data in a holistic way.

The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm, as it starts from singleton clusters and successively merges pairs of clusters until everything has been merged into one big cluster containing all objects. In order to decide which clusters should be combined and which should be divided, we need a method for measuring the similarity between objects; I used Euclidean distance to calculate this (dis)similarity information. (A short sketch of this clustering step appears at the end of this write-up.)

This project was designed solely using Figma, with the illustrations and the product itself designed there. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, in order to develop a product that is accessible and could be used by people in all industries. In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete in and decided to create an application that satisfied all of them. After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC and, having not yet completed an online internship ourselves, about the pain points faced by online workers, such as:

1. Isolation
2. Lack of feedback

From there, we were able to select the features to integrate, including the Task Tracker, Video Chat, Dashboard, and Matching Algorithm, which are explained in further detail in this post.

Technical implementation for Autocode: using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. The Autocode source code is here: <https://autocode.com/src/mathurahravigulan/remotework/>

**Creating the slackbot**

```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request events
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params);
  // When Twilio reports that the video room was created, notify the buddy channel
  if (context.params.StatusCallbackEvent === 'room-created') {
    await lib.slack.channels['@0.7.2'].messages.create({
      channel: `#buddychannel`,
      text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
    });
  }
  let result = {};
  result.message = `Welcome to Autocode! 😊`;
  return result;
};
```
**Connecting Twilio to Autocode**

```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');

const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;

// Build a Twilio access token from the account credentials in env vars
const generateToken = () => {
  return new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET
  );
};

// Grant access to a specific video room (or any room if none is given)
const videoToken = (identity, room) => {
  let videoGrant;
  if (typeof room !== 'undefined') {
    videoGrant = new VideoGrant({ room });
  } else {
    videoGrant = new VideoGrant();
  }
  const token = generateToken();
  token.addGrant(videoGrant);
  token.identity = identity;
  return token;
};

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request events
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params);
  const identity = context.params.identity;
  const room = context.params.room;
  const token = videoToken(identity, room);
  return {
    token: token.toJwt()
  };
};
```

From a product design perspective, a few choices are worth explaining (<https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1>):

1. As shown in the prototype, the user has full independence to move through the designs as one would on a typical website; this supports the non-sequential flow of the upper navigation bar, as the features do not need to be viewed in a specific order.
2. As Slack is a common productivity tool in remote work and we were participating in the Autocode challenge, we chose Slack as the alerting channel. Sending text messages to a phone could be expensive and could distract the user and break their workflow, which is why Slack is integrated throughout the site.
3. The to-do list shared between the pairing is designed in a simple and dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and to duplicate this list to their individual workspace to add tasks that cannot be shared with the other person (such as confidential information within the company).

In terms of the overall design decisions, I made an effort to create each illustration by hand using only Figma and the trackpad on my laptop! A non-optimal way of doing so, perhaps, but it allowed us to be very creative and bring individuality and innovation to the designs. The website itself relies on consistency in terms of colours, layouts, buttons, and more; by developing these components for use throughout the site, we've built a modern and coherent website.

## Challenges We ran into

Some challenges that we ran into were:

* Using data science and machine learning for the very first time ever! We were definitely overwhelmed by the different types of algorithms out there, but we were able to persevere and create something amazing.
* React was difficult for most of us to use at the beginning, as only one of our team members had experience with it. But by the end, we all felt a little more confident with this tech stack and front-end development.
* Lack of time: there were a ton of features that we were interested in (like user authentication and a Google Calendar integration), but for the sake of time we had to abandon those and focus on the more pressing features that were integral to our vision for this hack. These, however, are features we hope to complete in the future.
We learned how to successfully scope a project and deliver on the technical implementation.

## Accomplishments that We're proud of

* We created a fully functional end-to-end full-stack application, incorporating both the front-end and the back-end, to enable the shared to-do lists and the interactive video chat between the two participants. I'm glad I discovered Autocode, which made this process simpler (shoutout to Jacob Lee, a mentor from Autocode, for the guidance).
* We are solving an important problem that affects an extremely large number of individuals. According to investmentexecutive.com, StatsCan reported that five million workers shifted to home working arrangements in late March; alongside the 1.8 million employees who already work from home, the combined home-bound employee population represents 39.1% of workers: <https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/>
* From doing user research, we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one: tackling social isolation and improving workers' mental health, while also increasing productivity, since a buddy keeps you accountable!
* We created a working matching algorithm for the first time under a time crunch and learned more about Microsoft Azure's machine learning capabilities.
* We created all of our icons/illustrations from scratch using Figma!

## What We learned

* How to create and trigger Slack bots from React
* How to run a live video chat on a web application using Twilio and React hooks
* How to use a hierarchical clustering algorithm (agglomerative clustering) to create matches based on inputted criteria
* How to work remotely in a virtual hackathon, and which tools help us work remotely!

## What's next for aibo

* We're looking to improve our pairing algorithm. We learned that 36 hours is not enough time to create a new Tinder algorithm, and that over time these pairings can be improved and perfected.
* We're looking to code more screens, add user authentication, and integrate more test cases into the designs, rather than using Figma prototyping to prompt the user.
* It is important to consider the security of the data as well; not all teams can discuss tasks at length due to confidentiality. That is why we encourage users to create a simple to-do list with their partner during their meeting and use their best judgement to keep it vague. In the future, we hope to incorporate machine learning that takes in whether a user's project is under NDA and, if so, provides warnings for sensitive information as the user types.
* Add a dashboard! As can be seen in the designs, we'd like to integrate a per-user dashboard that pulls data from different components of the website, such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective for optimizing productivity, as the user simply has to open one page to get a high-level summary of both.
* Create our own Slack bot to deliver individualized kudos to co-workers, and pull this data onto a kudos board on the website, so all employees can see how their coworkers are being recognized for their hard work, which can act as a motivator to all employees.
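As referenced above, here is a minimal sketch of the agglomerative clustering step, using scikit-learn rather than the original Spyder notebook; the survey encoding and data are illustrative, and the cluster count is fixed here for simplicity:

```
# Sketch of the buddy-matching step: cluster numerically encoded survey
# answers with agglomerative (AGNES) clustering. Data here is illustrative.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Each row: one employee's encoded survey answers
# (e.g. role, career goals, interests, tenure), scaled to comparable ranges.
answers = np.array([
    [0, 2, 1, 3],
    [0, 2, 0, 3],
    [1, 0, 2, 1],
    [1, 1, 2, 0],
])

n_pairs = len(answers) // 2  # aim for roughly one cluster per pair
# Euclidean distance is the default (dis)similarity measure
model = AgglomerativeClustering(n_clusters=n_pairs, linkage="average")
labels = model.fit_predict(answers)
print(labels)  # employees sharing a label are candidate buddies
```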
## Inspiration

As students who live in student housing, we realized there are a lot of things that need to be organized around the house. Assigning chores, creating reminder notes, and drafting an availability calendar can be difficult to balance with the busy lives of university students. We decided to put all these processes together into a central system that is easy to access at any time!

## What it does

You simply start by creating an account and a household. The rest of your housemates can then register and join that household using a unique invite code. From there you can create a list of tasks to be done around the house and assign housemates to each. You can also write to a message board that is shared with the entire household.

## Challenges we ran into

We ran into a lot of trouble with our database structure and accessing its related data, but once we spent enough time understanding how it interacted with our application, it ended up becoming our most notable feature.

## Accomplishments that we're proud of

One of our biggest accomplishments as a team was overcoming the difficulties of the database. Once the structure was created in a way that the application could use effectively, it simplified the process of creating messages, tasks, and houses. This allowed us to add more features than we were expecting. We are also proud of our ability to work as a team and motivate each other. We divided the work among our team to play off each other's strengths and remained focused from beginning to end.

## What we learned

We learned about software development using Django and working with a PostgreSQL database. None of us had used these technologies in the past, and using them at DeltaHacks gave us a good understanding of how the framework works.

## What's next for Student House Manager

We want to add a more efficient way to insert calendars. We have a few ideas on how to merge schedules and display each housemate's class times using Google Calendars. Other ideas include automatic email notifications for chore deadlines, chore rotation for specified weekly tasks, and a way to keep track of total house productivity. This would develop the productivity side of our application to an even higher level.
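For a sense of what a schema like this can look like, here is a minimal Django models sketch; the model and field names are hypothetical, not the project's exact schema:

```
# Hypothetical Django models for households, invite codes, tasks, and messages.
import secrets
from django.contrib.auth.models import User
from django.db import models

def make_invite_code():
    return secrets.token_urlsafe(6)  # short shareable code

class Household(models.Model):
    name = models.CharField(max_length=100)
    invite_code = models.CharField(max_length=16, unique=True,
                                   default=make_invite_code)
    members = models.ManyToManyField(User, related_name="households")

class Task(models.Model):
    household = models.ForeignKey(Household, on_delete=models.CASCADE,
                                  related_name="tasks")
    title = models.CharField(max_length=200)
    assignee = models.ForeignKey(User, null=True, blank=True,
                                 on_delete=models.SET_NULL)
    done = models.BooleanField(default=False)

class Message(models.Model):
    household = models.ForeignKey(Household, on_delete=models.CASCADE,
                                  related_name="messages")
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    body = models.TextField()
    posted_at = models.DateTimeField(auto_now_add=True)
```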
partial
## 💫 Inspiration

Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or may just want to help out their fellow neighbours. We present to you.... **Locall!**

## 🏘 What it does

Locall helps members of a neighbourhood get in contact and share any tasks they may need help with. Users can browse through these tasks and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours!

For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow-plowing company, she can post a service request on Locall, and someone in her local community can reach out and help! By using Locall, she's saving money on the fees that big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours.

## 🛠 How we built it

We first prototyped our app design using Figma, and then moved on to Flutter for the actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase.

## 🦒 What we learned

Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features in our app!

## 📱 What's next for Locall

* We want to train a TensorFlow model to better recommend services to users, as well as improve the user experience.
* Implementing chat and payment directly in the app would help improve requests and offers of services.
**In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**

## Inspiration

Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and contemplated how we could create a phone application that would directly help with disaster relief.

## What it does

**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include, but are not limited to, shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter, so that this need is discoverable by any volunteer able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer's home to receive shelter.

## How we built it

We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community.

## Challenges we ran into

Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including blood donation as one of the resources would require correspondence with an organization such as the Red Cross, to ensure the donation would be legal. Thus, we moved blood donation to our future aspirations for this project, due to the time constraints of the hackathon.

## Accomplishments that we're proud of

We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and the Google Maps API) during the past 24 hours. We had huge aspirations, and we created an app that can potentially save people's lives.

## What we learned

We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.

## What's next for Stronger Together

We have high hopes for the future of this app. The goal is to add an AI-based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We may also include more resources, such as blood donation.
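The core of a Python backend like this is simple request matching. A minimal Flask sketch of the idea; the route names and the in-memory store are illustrative, while the real service runs on Azure against a database:

```
# Illustrative Flask endpoints: people in need post requests, volunteers
# query for requests matching a resource they can provide.
from flask import Flask, jsonify, request

app = Flask(__name__)
help_requests = []  # stand-in for a real database

@app.post("/requests")
def create_request():
    body = request.get_json()
    entry = {"name": body["name"], "contact": body["contact"],
             "resource": body["resource"],  # e.g. "shelter", "water"
             "lat": body["lat"], "lng": body["lng"]}
    help_requests.append(entry)
    return jsonify(entry), 201

@app.get("/requests")
def find_requests():
    resource = request.args.get("resource")
    matches = [r for r in help_requests
               if resource is None or r["resource"] == resource]
    return jsonify(matches)

if __name__ == "__main__":
    app.run()
```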
## Inspiration

Homelessness, food insecurity, and mental health problems are on the rise in our post-COVID-19 world. Community shelters in major cities do not have the resources or vacancies to accommodate the growing number of vulnerable people. Addressing these issues, and seeking to mitigate the fallout before they become full-blown crises, is the most prudent course of action. To assist with this present and looming crisis, we have developed the **Our Home** app. Our goal is to connect shelters and the members of our communities together. We are involved in the fight against homelessness, food insecurity, mental health issues, and other social problems. This app is our way to make sure members of our community have a shelter above them, food to eat, and access to basic help as the need arises, and can support each other within their own means.

## What it does

The Our Home app connects local charities to community members who wish to open up their homes and hearts and offer assistance to other members of the community who may be experiencing homelessness or hunger. Members of the community who are willing to offer a place to sleep, a meal, a place to shower, a ride, etc. can offer their services to local charities, which normally operate at max capacity. Any overflow in service can then be diverted by the charities to members of the community who can directly assist those who need the help. Charities can ask registered members to provide someone a place to sleep, a hot meal, etc., if the member accepts the call to help. The app also allows members of a community to make donations to their local charities at the simple click of a button, with minimal effort. They can easily visualize all the charity and help centers near them and find out which charity needs what kind of help.

## How we built it

Frontend: Flutter (mobile app), HTML/JS (web page); Backend: Node.js; DB: Firebase Realtime Database; Hosting & Storage: Google Cloud Firebase; APIs: Esri ArcGIS, Checkbook, @Protocol; Design: Wireframe, Miro, Canva

## Challenges we ran into

Getting Flutter in Android Studio to work correctly with the @ Company's @Protocol proved to be quite cumbersome, but we eventually got things to work as intended. The ArcGIS API works more smoothly for JavaScript and web development in general, so we made the switch from Python to JavaScript.

## Accomplishments that we're proud of

We built a simple way to overlay many layers of information on a map, to help visualize the locations of places where charitable work is being done and of users who are willing to help people in need, all using ArcGIS mapping. We integrated this with the Checkbook API to allow people to make donations directly and easily to local charities of their choosing. And we built a mobile app that covers the main desired functionality: users and charities interacting with each other to help out people within the community who need a helping hand.

## What we learned

Using Miro for agile development; hosting a static website on Firebase; using the ArcGIS API for JavaScript; using @Protocol for decentralized and secure communication; using the Checkbook API to easily send and receive money over email.

## What's next for Our Home

We wish to refine some front-end user interface components, improve the UX/UI, and deploy, to start testing how people will actually use the app.
We also plan to add a matching algorithm that pairs homeowners with homeless people through their charity of choice.
winning
## What it does

Tickets is a secure, affordable, and painless system for the registration and organization of in-person events. It utilizes public-key cryptography to verify the identity of visitors, while staying affordable for organizers, with no extra equipment or cost beyond a cellphone. Additionally, it provides an easy method of requesting waiver and form signatures through DocuSign.

## How we built it

We used Bluetooth Low Energy to provide easy communication between devices, PGP to verify the identities of both parties involved, and a variety of technologies, including Vue.js, MongoDB Stitch, and Bulma, to build the final product.

## Challenges we ran into

We tried working (and struggling) with NFC and other wireless technologies before settling on Bluetooth LE as the best option for our use case. We also spent a lot of time getting familiar with MongoDB Stitch and the DocuSign API.

## Accomplishments that we're proud of

We're proud of successfully creating a polished and functional product in a short period of time.

## What we learned

This was our first time using MongoDB Stitch, as well as Bluetooth Low Energy.

## What's next for Tickets

An option to allow payments for events, as well as more input formats and data collection.
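The identity check at the door boils down to a signature challenge over BLE. A minimal sketch of that handshake, using Ed25519 from the `cryptography` package as a stand-in for the PGP keys, with the BLE transport omitted:

```
# Challenge-response identity check: the organizer sends a random nonce,
# the attendee signs it, the organizer verifies against the registered key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# At registration time: attendee generates a key pair, shares the public half
attendee_key = Ed25519PrivateKey.generate()
registered_public_key = attendee_key.public_key()

# At the door: organizer issues a fresh challenge (sent over BLE in practice)
challenge = os.urandom(32)

# Attendee's device signs the challenge
signature = attendee_key.sign(challenge)

# Organizer verifies the signature against the key on file
try:
    registered_public_key.verify(signature, challenge)
    print("Identity verified: welcome in!")
except InvalidSignature:
    print("Verification failed.")
```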
## Inspiration

There are 1.1 billion people without official identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and they often face human rights offences due to the lack of accountability. The concept of a digital identity is extremely powerful. In Estonia, for example, everyone has a digital identity, a solution developed in tight cooperation between public- and private-sector organizations. Digital identities are also the foundation of our future, enabling:

* P2P Lending
* Fractional Home Ownership
* Selling Energy Back to the Grid
* Fan Revenue Sharing
* Monetizing Data
* Bringing the unbanked, banked

## What it does

Our project starts by getting the user to take a photo of themselves. Through the use of Node.js and AWS Rekognition, we perform facial recognition to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dashboard and our blockchain network! It is stored on the Ethereum blockchain, providing one source of truth that neither corrupt governments nor hackers can edit. From there, users can get access to a bank account.

## How we built it

Front End: HTML | CSS | JS

APIs: AWS Rekognition | AWS S3 | Firebase

Back End: Node.js | mvn

Crypto: Ethereum

## Challenges we ran into

Connecting the front end to the back end!!!! We had many different databases and components. As well, there are a lot of access restrictions on APIs, which makes it incredibly hard to do things on the client side.

## Accomplishments that we're proud of

Building an application that can better people's lives!!

## What we learned

Blockchain, facial verification using AWS, and databases.

## What's next for CredID

Expand on our idea.
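The back end here is Node.js, but the face-matching call is easiest to sketch with boto3. A minimal login check comparing a fresh selfie against the enrolled photo; the bucket and key names are hypothetical:

```
# Face-based login sketch: compare a new selfie with the photo stored at
# registration. Bucket/key names are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def face_matches(selfie_path: str, enrolled_key: str) -> bool:
    with open(selfie_path, "rb") as f:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": f.read()},
            TargetImage={"S3Object": {"Bucket": "credid-enrollments",
                                      "Name": enrolled_key}},
            SimilarityThreshold=90)
    # Any match above the threshold counts as a successful login
    return len(response["FaceMatches"]) > 0

if __name__ == "__main__":
    print(face_matches("selfie.jpg", "user-1234.jpg"))
```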
## Inspiration

We wanted to bring augmented reality technologies to an unexpecting space, challenging ourselves to think outside of the box. We were looking for somewhere user experience could be dramatically improved.

## What it does

Our AR mobile application recognizes DocuSign's QR codes and allows you to either sign directly or generate an automated signature, without ever leaving your phone.

## How we built it

We built it with our awesome brains and...

## Challenges we ran into

Implementing the given API and other back-end technologies to actually authenticate and submit the process. We ran into challenges when trying to integrate the digital world with the physical world; there was not much documentation online when it came to merging the two platforms. We also ran into challenges with image recognition of the QR code, because AR depends on the environment and lighting.

## Accomplishments that we're proud of

We got an MVP out of the challenge, we did a lot of collaboration and brainstorming which sparked amazing ideas, and we spoke to every sponsor to learn about their company and challenges.

## What we learned

APIs with little documentation, and integration with new technologies, can be very challenging. Pay attention to details, because it's the small details that will cost you hours of frustration. Through further research we learned about the legalities of digital signatures, which can sometimes be a pain point for companies that use eSign products like DocuSign.

## What's next for Project 1 AM

To present to all the judges, and hopefully the idea gets bought into and implemented to make customers' lives easier.
winning
# Pitch

Every time you throw trash in the recycling, you either spoil an entire bin of recyclables, or city workers and multi-million-dollar machines have to separate the trash out for you. We want to create a much more efficient way to sort garbage, one that also trains people to sort correctly and provides meaningful data on sorting statistics.

Our technology uses image recognition to identify the waste and opens the lid of the correct bin. When the image recognizer does not recognize the item, it opens all bins and trusts the user to deposit it correctly. It also records the number of times each lid has been opened, to estimate what and how much is in each bin.

The statistics would have many applications. Since we display the proportion of all garbage in each bin, it will motivate people to compost and recycle more. It will also allow cities to recognize when a bin is full based on how much it has collected, allowing garbage trucks to optimize their routes. In addition, information about what items are commonly thrown in the trash would be useful to materials engineers, who can design recyclable versions of those products.

Future improvements include improved speed and reliability, IOTA blockchain integration, facial recognition for personalized statistics, and automatic self-learning.

# How it works

1. A Raspberry Pi uses a webcam and OpenCV to look for objects.
2. When an object is detected, the Pi sends the image to the server.
3. The server sends the image to cloud image recognition services (Amazon Rekognition & Microsoft Azure) and determines which bin should be opened.
4. The server stores information and statistics in a database.
5. The Raspberry Pi gets the response back from the server and moves the appropriate bin.
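A minimal sketch of the Pi-side loop in steps 1–2; the server URL, motion threshold, and the server's JSON reply format are placeholders:

```
# Pi-side loop: watch the webcam, and when a new object appears in frame,
# POST the image to the sorting server. URL/threshold are placeholders.
import cv2
import requests

SERVER_URL = "http://sorter.local:8000/classify"

cap = cv2.VideoCapture(0)
background = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    if background is None:
        background = gray
        continue
    # Crude change detection: a large frame difference means an object appeared
    diff = cv2.absdiff(background, gray)
    mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    if cv2.countNonZero(mask) > 5000:
        _, jpeg = cv2.imencode(".jpg", frame)
        reply = requests.post(SERVER_URL, files={"image": jpeg.tobytes()})
        print("Open bin:", reply.json().get("bin"))  # e.g. "recycling"
        background = gray  # treat the new scene as the baseline
```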
## Inspiration

McMaster's SRA presidential debate brought to light the issue of garbage sorting on campus. Many recycling bins were contaminated, and their contents were subsequently sent to landfill. During the project's development, we became aware of the many applications of this technology, including sorting raw materials and manufacturing parts.

## What it does

The program takes a customizable trained deep learning model that can categorize over 1000 different classes of objects. When an object is placed in the foreground of the camera, its material is determined and its corresponding indicator light flashes. This is meant to replicate a small-scale automated sorting machine.

## How we built it

To begin, we studied the relevant modules of the OpenCV library and explored ways to implement them for our specific project. We also assigned specific categories/materials to the different classes of objects, to build our own library for sorting.

## Challenges we ran into

Due to time constraints, we were unable to train our own dataset for the specific objects we wanted. Many pre-trained models are designed to run on much stronger hardware than a Raspberry Pi. Being limited to pre-trained models added a level of difficulty in detecting our specific objects.

## Accomplishments that we're proud of

The project actually worked, and was surprisingly better than we had anticipated. We are proud that we were able to find a compromise in the pre-trained model and still have a functioning application.

## What we learned

We learned how to use OpenCV for this application, and about the many applications of this technology in the deep learning and IoT industries.

## What's next for Smart Materials Sort

We'd love to find a way to dynamically update the training model (supervised learning), and to try the software with our own custom models.
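A minimal sketch of the classify-then-map idea: MobileNetV2 on ImageNet stands in for whichever 1000-class model is used, and the label-to-material table is an illustrative subset, not the project's actual mapping:

```
# Map an ImageNet prediction to a material category and flash the matching
# indicator. The label-to-material table is a tiny illustrative subset.
import cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

MATERIALS = {"water_bottle": "plastic", "pop_bottle": "plastic",
             "beer_bottle": "glass", "carton": "paper", "banana": "organic"}

model = MobileNetV2(weights="imagenet")  # 1000 ImageNet classes

def classify_material(frame) -> str:
    img = cv2.resize(frame, (224, 224))[:, :, ::-1]  # BGR -> RGB
    batch = preprocess_input(np.expand_dims(img.astype("float32"), 0))
    _, label, confidence = decode_predictions(model.predict(batch), top=1)[0][0]
    return MATERIALS.get(label, "unknown") if confidence > 0.4 else "unknown"

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(classify_material(frame))  # drive the matching indicator LED from this
```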
## 💡 Inspiration 💯

Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that takes all of the tedious decision-making out of your hands. Wouldn't it be nice to mindlessly throw your trash in one place and let AI handle the sorting for you?

## ♻️ What it does 🌱

IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website.

## 🛠️ How we built it 💬

Arduino/C++ portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the Servo library gives us access to functions that control the motor and set the LED colours. We also used the serial library in Python to take input from the main program and send it to the Arduino, which then drives the servo motor accordingly, sorting garbage items into the correct category.

Website portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords.

Google Vision API: In tandem with computer vision, we took the camera input and fed it through the Vision API to interpret what was in front of us. Using this output, we could tell the servo motor which direction to turn, based on whether the item was recyclable or not, sorting it into the right bin.
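A minimal sketch of the glue between the camera, the Vision API, and the Arduino; the serial port name, baud rate, and the recyclable-label set are placeholders:

```
# Classify a snapshot with Google Cloud Vision, then tell the Arduino which
# way to push the item over serial. Port name and label set are placeholders.
import serial
from google.cloud import vision

RECYCLABLE = {"bottle", "tin can", "paper", "cardboard", "plastic"}

client = vision.ImageAnnotatorClient()
arduino = serial.Serial("/dev/ttyACM0", 9600)

def sort_item(image_path: str) -> None:
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    names = {label.description.lower() for label in labels}
    # 'R' swings the servo toward recycling, 'T' toward trash
    command = b"R" if names & RECYCLABLE else b"T"
    arduino.write(command)

sort_item("snapshot.jpg")
```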
## 🚧 Challenges we ran into ⛔

* Connecting the Arduino to the arms
* Determining the optimal way to manipulate the servo arm, as it could not rotate 360 degrees
* Using global variables on our website
* Configuring MongoDB to store user data
* Figuring out how and when to detect the type of trash on the screen

## 🎉 Accomplishments that we're proud of 🏆

In a short span of 24 hours, we are proud to have:

* Successfully engineered and programmed a servo arm to sort trash into two separate bins
* Connected and programmed LED lights that change colour depending on whether trash is recyclable or not
* Utilized the Google Cloud Vision API to identify and detect different types of trash and decide whether each is recyclable
* Developed an intuitive website with React.js that includes login, user profile, and informative capabilities
* Drunk a total of 9 cans of Monster combined (the cans were recycled)

## 🧠 What we learned 🤓

* How to program in C++
* How to control servo arms at specific angles with an Arduino
* How to parse and understand Google Cloud Vision API outputs
* How to connect a MongoDB database to create user authentication
* How to use global state variables in Node.js and React.js
* What types of items are recyclable

## 🌳 Importance of Recycling 🍀

* Conserves natural resources by reusing materials
* Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions
* Reduces the amount of waste sent to landfills
* Decreases disruption to ecosystems and habitats

## 👍 How IntelliBin helps 👌

**Efficient Sorting:** IntelliBin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams.

**Increased Recycling Rates:** With IntelliBin making recycling more user-friendly and efficient, it has the potential to increase recycling rates.

**User Convenience:** By automating the sorting process, IntelliBin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts.

**In summary:** Recycling is crucial for environmental sustainability, and IntelliBin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology.

## 🔮 What's next for IntelliBin ⏭️

The next steps for IntelliBin include refining the current functionality of our hack and exploring new features. First, we wish to expand the trash detection database, improving our ability to accurately identify various items being tossed out. Next, we want to add features such as detecting and warning the user about "unrecyclable" objects. For instance, IntelliBin could notice whether the cap is still on a recyclable bottle and remind the user to remove it. In addition, the sensors could notice when there is still liquid or food in a recyclable item and send a warning. Lastly, we would like to deploy our website so more users can use IntelliBin and track their recycling statistics!
partial
## Inspiration A deep and unreasonable love of xylophones ## What it does An air xylophone right in your browser! Play such classic songs as Twinkle Twinkle Little Star, Baa Baa Rainbow Sheep and the Alphabet Song, or come up with the next club banger in free play. We also added an air guitar mode where you can play any classic four-chord song such as Wonderwall. ## How we built it We built a static website using React which utilised PoseNet from TensorFlow.js to track the user's hand positions and translate these to specific xylophone keys. We then extended this by creating Xylophone Hero, a fun game that lets you play your favourite tunes without requiring any physical instruments. ## Challenges we ran into Fine-tuning the machine learning model to provide a good balance of speed and accuracy ## Accomplishments that we're proud of I can get 100% on Never Gonna Give You Up on XylophoneHero (I've practised since the video) ## What we learned We learnt about fine-tuning neural nets to achieve maximum performance for real-time rendering in the browser. ## What's next for XylophoneHero We would like to: * Add further instruments including a ~~guitar~~ and drum set in both freeplay and hero modes * Allow for dynamic tuning of PoseNet based on individual hardware configurations * Add new and exciting songs to XylophoneHero * Add a multiplayer jam mode
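The real project runs PoseNet in the browser; as a language-agnostic illustration of the key-mapping and strike-detection logic it describes, here is a minimal sketch in Python (key count and velocity threshold are assumptions):

```python
NUM_KEYS = 8  # assumed number of xylophone bars

def key_for_hand(x_norm: float) -> int:
    """Map a normalized wrist x position (0..1) to a xylophone key index."""
    return min(int(x_norm * NUM_KEYS), NUM_KEYS - 1)

def is_strike(prev_y: float, y: float, threshold: float = 0.03) -> bool:
    """Register a hit only on a fast downward motion, so hovering a hand
    over a key doesn't retrigger the note on every tracked frame."""
    return (y - prev_y) > threshold
```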
## Bringing your music to life, not just to your ears but to your eyes 🎶 ## Inspiration 🍐 Composing music through scribbling notes or drag-and-dropping from MuseScore couldn't be more tedious. As pianists ourselves, we know the struggle of trying to bring our impromptu improvisation sessions to life without forgetting what we just played or having to record ourselves and write out the notes one by one. ## What it does 🎹 Introducing PearPiano, a cute little pear that helps you pair the notes to your thoughts. As a musician's best friend, Pear guides pianists through an augmented simulation of a piano where played notes are directly translated into a recording and stored for future use. Pear can read both single notes and chords played on the virtual piano, allowing playback of your music with cascading tiles for full immersion. Seek musical guidance from Pear by asking, "What is the key signature of C-major?" or "Tell me the notes of the E-major diminished 7th chord." To fine tune your compositions, use "Edit mode," where musicians can rewind the clip and drag-and-drop notes for instant changes. ## How we built it 🔧 Using Unity Game Engine and the Oculus Quest, musicians can airplay their music on an augmented piano for real-time music composition. We used OpenAI's Whisper for voice dictation and C# for all game-development scripts. The AR environment is entirely designed and generated using the Unity UI Toolkit, allowing our engineers to realize an immersive yet functional musical corner. ## Challenges we ran into 🏁 * Calibrating and configuring hand tracking on the Oculus Quest * Reducing positional offset when making contact with the virtual piano keys * Building the piano in Unity: setting the pitch of the notes and being able to play multiple at once ## Accomplishments that we're proud of 🌟 * Bringing a scaled **AR piano** to life with close-to-perfect functionalities * Working with OpenAI to synthesize text from speech to provide guidance for users * Designing an interactive and aesthetic UI/UX with cascading tiles upon recording playback ## What we learned 📖 * Designing and implementing our character/piano/interface in 3D * Emily had 5 cups of coffee in half a day and is somehow alive ## What's next for PearPiano 📈 * VR overlay feature to attach the augmented piano to a real one, enriching each practice or composition session * A rhythm checker to support an aspiring pianist to stay on-beat and in-tune * A smart chord suggester to streamline harmonization and enhance the composition process * Depth detection for each note-press to provide feedback on the pianist's musical dynamics * With the up-coming release of Apple Vision Pro and Meta Quest 3, full colour AR pass-through will be more accessible than ever — Pear piano will "pair" great with all those headsets!
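The game scripts are in C#, but as a rough sketch of the shape of the Whisper dictation call the write-up mentions, here is the equivalent request in Python using OpenAI's SDK (the file name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Transcribe a recorded voice question so Pear can answer it
with open("question.wav", "rb") as f:
    text = client.audio.transcriptions.create(model="whisper-1", file=f).text

print(text)  # e.g. "What is the key signature of C major?"
```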
## Inspiration We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do. ## What it does Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams. ## How we built it We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless Functions for hosting and the backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application. ## Challenges we ran into This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application! ## What we learned We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, as well as the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers. ## What's next for Discotheque If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music), similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure a good supply of high-quality music.
winning
## Inspiration Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese, Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language! ## What it does TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app, which means the subtitles will appear floating in front of you! ## How we built it We used Microsoft Cognitive Services's Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. ## Challenges we ran into One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source. ## Accomplishments that we're proud of Our biggest achievement is definitely multithreading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multilingual conversations possible! ## What we learned We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and the Python Imaging Library. ## What's next for TranslatAR We want to launch this app on the App Store so people can replicate VR/AR on their own phones with nothing more than an app and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
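A minimal sketch of the one-thread-per-source pattern described above; `transcribe_chunk`, `translate`, and `draw_subtitle` are hypothetical stand-ins for the Cognitive Services calls and the AR overlay, not real API names:

```python
import threading

def caption_stream(mic_id: int, source_lang: str, target_lang: str):
    # Each input source gets its own listen -> translate -> caption loop,
    # so a slow translation on one microphone never blocks the others.
    while True:
        text = transcribe_chunk(mic_id, source_lang)          # hypothetical
        subtitle = translate(text, source_lang, target_lang)  # hypothetical
        draw_subtitle(mic_id, subtitle)                       # hypothetical

for mic_id, lang in enumerate(["zh-CN", "es-ES"]):
    threading.Thread(
        target=caption_stream, args=(mic_id, lang, "en"), daemon=True
    ).start()
```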
## Inspiration Have you ever had to wait in long lines just to buy a few items from a store? Not wanted to interact with employees to get what you want? Now you can buy items quickly and hassle-free through your phone, without interacting with any people whatsoever. ## What it does CheckMeOut is an iOS application that allows users to buy an item that has been 'locked' in a store. For example, clothing that has sensors attached or items that are physically locked behind glass. Users can scan a QR code or use Apple Pay to quickly access the information about an item (price, description, etc.) and 'unlock' the item by paying for it. The user will not have to interact with any store clerks or wait in line to buy the item. ## How we built it We used Xcode to build the iOS application, and MS Azure to host our backend. We used an Intel Edison board to help simulate our 'locking' of an item. ## Challenges I ran into We're using many technologies that our team is unfamiliar with, namely Swift and Azure. ## What I learned I've learned not to underestimate things you don't know, to ask for help when you need it, and to just have a good time. ## What's next for CheckMeOut We hope to see it more polished in the future.
## Inspiration We all love learning languages, but one of the most frustrating things is seeing an object that you don't know the word for and then trying to figure out how to describe it in your target language. Being students of Japanese, it is especially frustrating to find the exact characters to describe an object that you see. With this app, we want to change all that. Huge advances have been made in computer vision in recent years that have allowed us to accurately detect all kinds of different image matter. Combined with advanced translation software, we found the perfect recipe to make an app that could capitalize upon these technologies and help foreign language students all around the world. ## What it does The app allows you to either take a picture of an object or scene with your iPhone camera or upload an image from your photo library. You then select a language that you would like to translate words into. The app then contacts Microsoft Azure Cognitive Services using an HTTP request from within the app to create tags from the image you uploaded. These tags are then sent to Google Cloud Platform services to translate them into your target language. After doing this, a list of English-foreign language word pairs is displayed, relating to the image tags. ## How we built it The app was built using Xcode and was coded in Swift. We split up to work on different parts of the project. Kent worked on interfacing with Microsoft's computer vision AI and created the basic app structure. Isaiah worked on setting up Google Cloud Platform translation and contributed to adding functionality for multiple languages. Ivan worked on designing the logo for the app and most of the visuals. ## Challenges we ran into A lot of time was spent figuring out how to deal with HTTP requests and JSON, two things none of us have much experience with, and then using them in Swift to contact remote services through our app. After this major hurdle was overcome, there was a concurrency issue, as both the vision AI and translation requests were designed to run in parallel to the main thread of the app's execution; however, this created some problems for updating the app's UI. We ended up fixing all the issues though! ## Accomplishments that we're proud of We are very proud that we managed to utilize some really awesome cloud services like Microsoft Azure's Cognitive Services and Google Cloud Platform, and are happy that we managed to create an app that worked at the end of the day! ## What we learned This was a great learning experience for all of us, both in terms of the technical skills we acquired in connecting to cloud services and in terms of the teamwork skills we acquired. ## What's next for Literal Firstly, we would add more languages to translate and make a much cleaner UI. Then we would enable it to run on cloud services indefinitely instead of just on a temporary treehacks-based license. After that, there are many more cool ideas that we could implement into the app!
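The app itself is written in Swift, but the same two-call flow (Azure tags, then Google translation) can be sketched in Python; the region, key, and confidence cutoff below are assumptions for illustration:

```python
import requests
from google.cloud import translate_v2 as translate

AZURE_KEY = "YOUR_AZURE_KEY"  # placeholder credential
ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v3.2/analyze"

def tags_for_image(path: str) -> list[str]:
    # Send the raw image bytes to Azure and keep the confident tags
    with open(path, "rb") as f:
        resp = requests.post(
            ENDPOINT,
            params={"visualFeatures": "Tags"},
            headers={"Ocp-Apim-Subscription-Key": AZURE_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    return [t["name"] for t in resp.json()["tags"] if t["confidence"] > 0.5]

def word_pairs(path: str, target: str = "ja") -> list[tuple[str, str]]:
    # Translate each English tag into the target language
    client = translate.Client()
    return [(tag, client.translate(tag, target_language=target)["translatedText"])
            for tag in tags_for_image(path)]
```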
winning
## Inspiration We are both the type to see the world through numbers. Throughout the pandemic, articles would consistently be released touting rising case numbers, with catchy headlines that offered little in the way of data. We were both afraid but didn't know how worried we should be. Personally, I (Ben) looked for answers in the statistics released by the government. I was pleasantly surprised to see the data available was pretty detailed, and I was especially interested in the data available for individual regions. Living in Toronto, I based some of my choices around this data and knew places to avoid. If I went for a walk 4 km east I was in one of the most dangerous regions, but sticking to the west was significantly safer. Together we decided this data should be made accessible so everyone can use it, so we created a project to do just that. Even if you aren't as neurotic as I am, I am sure anyone can find value in knowing the data. ## What it does Our application uses publicly available data released by each municipal government (pretty hard to find for the average Joe), updated weekly or bi-weekly, and makes it accessible to everyone. Currently focused on Ottawa, our program lets any user view a heat map that visually shows the most dangerous administrative regions that you should avoid (I now know more about the administrative regions of Ottawa than any student should). The heat map is weighted based on cases in the last 14 days per 100,000 people. The user can select the marker for any region and click it to see the number of cases reported in the last 14 days. Lastly, any user can add a marker on the map to see the exposure (risk) associated with going to that location. If the exposure is high where you plan to go, then maybe you should rethink that trip. The exposure is based on the number of active cases in nearby regions. ## How we built it We like to joke that we started the AWS session and never stopped it. In fact, the entire frontend is built in the React project set up from that session. On the frontend side we used a React-based frontend, hosted by AWS. We import the Map component and associated resources from google-maps-react. Then we built a custom map component. Some data for the frontend is provided by the backend, specifically the exposure score. The React frontend initiates an HTTP GET request to the backend server deployed on Heroku. The deployed server consists of a simple Python web server API built with Flask. The GET request provides the backend API methods with longitude and latitude coordinates so the processing endpoint can return an exposure-risk score for that location. The risk score algorithm takes a weighted average of the N closest locations' exposure rates and returns a JSON response to the React frontend (a sketch of this scoring appears after this entry). ## Challenges we ran into We both had no experience with the tools we were using. On the frontend side React was completely new, we had minimal experience with JavaScript, and we had never used AWS. On the backend side... Another challenge was getting the project to a place we were happy with. Especially with the frontend, projects like this can just grow so quickly; there are tons of features you always want to add. At some point, you just need to say there are 5 hours left, let's take what we have, clean it and present it nicely. Lastly, getting past the CORS policy to make calls from the frontend to the backend.
We are a two-person group so there was an obvious backend/frontend split. Sadly we never thought that getting data from one to the other would take more than 30 minutes and 2 lines of code. We learned the hard way. ## Accomplishments that we're proud of We are super proud we exclusively used tools we weren't familiar with. These past days have been equal parts learning and programming; we really left our comfort zone with this one. Yes, we are at a competition, but we agreed it was more important that we learn. Could we have made something impressive significantly faster and gotten much more sleep than we did? Maybe, but we wouldn't have learned nearly as much. We are also really happy to have made a full-stack application. At the end of the day, we could have kept all the data in the frontend files and referenced it without a backend. If we did that, though, this program would have no future. Eventually, most programs need a backend, especially ones working with data. The Python-based backend can serve to fetch the new coronavirus numbers for us when they are updated. If we expand the system to new municipalities, it will eventually be too much to just hold in the front end. ## What we learned We learned a bit about AWS, though I feel like we just touched the surface on that one. I was pleasantly surprised at the continuous integration offered by the Amplify console; I didn't know that was offered and it helped a lot. We definitely learned a ton about React, its structure, and specific components like the ones offered by Google. Where there is React there is JavaScript, and we learned a lot about JavaScript as well. From the backend, we learned a lot about how to create a Python web server API with Flask from scratch. In addition, we learned how to deploy our backend API to a remote server such as Heroku and learned a bit about configuring server resources such as adding on a PostgreSQL database. Overall, we definitely gained a lot of valuable experience in debugging many different build technologies. ## What's next for safetrek * Expand to new municipalities * Add a feature showing the risk associated with traveling from one waypoint to another (risk across a path) * Add a feature offering alternate, safer paths for travel * Migrate more data from frontend files to requests to the backend * Automate data updates to the backend * Integrate reverse geocoding; the base code is there but we never got it fully working
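A minimal sketch of safetrek's exposure scoring, as described above. The write-up only says "a weighted average of the N closest locations"; the inverse-distance weighting and N=5 below are our assumptions:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def exposure_score(lat, lon, regions, n=5):
    """Weighted average of the N closest regions' active-case rates.
    `regions` is a list of (lat, lon, cases_per_100k) tuples."""
    nearest = sorted(regions, key=lambda r: haversine_km(lat, lon, r[0], r[1]))[:n]
    weights = [1 / (haversine_km(lat, lon, r[0], r[1]) + 0.1) for r in nearest]
    return sum(w * r[2] for w, r in zip(weights, nearest)) / sum(weights)
```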
## Inspiration The Google Maps API has features that display the crowdedness of locations, but it does not provide an efficient way to explore alternative trip plans, nor does it take into account regional COVID cases. We hope this tool provides a safe and quick way to minimize COVID exposure for people who need to make essential trips. Despite our best efforts to stay safe, essential travel comes with risk. This includes trips to the grocery store, convenience store trips, and more. When carrying out these everyday tasks, we need a way to stay as safe as possible. What better way to do this than proactively? Effective trip planning can massively decrease the risk of COVID exposure, so we developed a tool to do this automatically. ## What it does By providing your location and a desired destination, AVOID-19 determines the risk of your entire trip, from the moment you step out of your house to the moment you return. This risk is represented by a risk score, which is calculated based on the number of people you are expected to encounter on your trip and the risk of exposure from each encounter. The number of people you are expected to encounter is calculated using both population density and transit information, alongside how many people are anticipated to be at your destination. The risk of exposure from each encounter is calculated using the number of active infections in your area and your proximity to known public exposure sites. Taking this risk score into account, AVOID-19 provides alternative destinations if the risk is high, plus tips for your travel, like the times at which your location is least busy. By following this advice, you are able to minimize the risk of your essential travel. ## How we built it Front-end * React.js, Next.js, Vercel Back-end * Firebase * Folium, OSMnx, Google Maps API, BestTime API Data Source * Canadian census data: census subdivision boundaries and census subdivision populations * BC COVID-19 public exposures (web-scraping using Python BeautifulSoup) * BC COVID-19 Dashboard (manually collected regional cases) ## Challenges we ran into * Designing the layout from scratch; small layout issues like flexbox took a lot of time * When trying to build the choropleth map we had to download the census data from the Canadian government and struggled for a while to convert the projection that the boundary file uses ## What we learned * Time management and team coordination are crucial to the outcome ## What's next * Incorporating more data with finer granularity * Improve the COVID risk calculation to incorporate: cases per capita, transit/location crowdedness, etc. * Integrate an intelligent trip recommender
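As a rough sketch of the BeautifulSoup scraping step mentioned above; the URL and CSS selector are illustrative assumptions, since the real page layout dictates both:

```python
import requests
from bs4 import BeautifulSoup

# Illustrative URL; the actual BC public-exposures page may differ
URL = "http://www.bccdc.ca/health-info/diseases-conditions/covid-19/public-exposures"

def exposure_sites() -> list[str]:
    soup = BeautifulSoup(requests.get(URL).text, "html.parser")
    # Selector is a guess at a table of exposure locations
    return [row.get_text(" ", strip=True) for row in soup.select("table tr")]
```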
## Inspiration We hear stories about crimes happening around us every day. It's especially terrifying since we're students and we just want to enjoy our college experience. While discussing ideas to hack, we realized this problem **resonated with all of us**. We immediately realized that we needed to work on this. ## What it does It gives you and your loved ones a chance to take your safety into your own hands. By using our application, you are taking control of the most important thing you have: your life. It generates the safest and shortest path from Location A to Location B. ## How we built it We analyzed large data sets of crime over the past 10 years and picked out representative points after weighting them based on relevance and severity of crime. We then created a data analytics flow (that can be easily repeated with new data) to glean more useful insights from the raw data. To help analyze the data further, we plotted it in virtual reality to look at data spreads and densities. This processed data was then pushed into our MongoDB database hosted on mLab. We then used these generalized clusters to see which areas we should avoid while routing, using the Wrld and OSRM APIs. From here our app generates a single route off all the information, based on what our algorithm deems to be the "safest route" for the user. This route is then plotted and displayed for the user's convenience. ## Challenges we ran into We started out with millions of cells of data, which made it really hard to work with. We had to filter out the most relevant data first and convert the data into a completely new format to optimize it for our pathfinding algorithm. In addition, we had to optimize our MongoDB to work well with our data, as we made frequent queries to a database with originally over a million elements. We also constantly had issues mixing up latitude and longitude values, as well as making HTTP requests to many APIs. ## Accomplishments that we're proud of We are happy that we were able to create a minimum viable product in this short duration. We're especially glad it's not just a "weekend-only" idea and is something we can continue to make better. This idea is something we hope in the future can truly have a social impact and actually make the world a safer place. ## What we learned * Wrld/OSRM APIs * Node/Express/MongoDB * HTTP requests/callback functions * Unity VR SDK * Pandas/NumPy * Coordinating team efforts across different parts of the application ## What's next for SafeWorld Using real-time data to improve predictions for real-life incidents (e.g., protests). In addition, adding more global data sets would be an optimal next step to get it working in more cities. This would be expedited by our adaptable framework for data generation and pathing.
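A minimal sketch of the weighting step described above; the severity scale and the exponential recency decay are our assumptions, since the write-up only says points were weighted by relevance and severity:

```python
from datetime import datetime

SEVERITY = {"homicide": 10, "assault": 6, "robbery": 5, "theft": 2}  # assumed scale

def crime_weight(crime_type: str, occurred: datetime,
                 now: datetime, half_life_days: float = 365.0) -> float:
    """Weight a crime record by severity, decayed by age so recent
    incidents dominate the routing penalty for an area."""
    age_days = (now - occurred).days
    return SEVERITY.get(crime_type, 1) * 0.5 ** (age_days / half_life_days)
```

Summing these weights over the incidents near each map cell gives the per-area penalty a router can use to prefer safer streets.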
losing
## Inspiration Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book. Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars! Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible. ## What it does Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame, transcribes the text and then reads it out loud. ## How we built it **Software (Alex K)** - On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words. Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation. **Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arm's-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued into the slots, with a corresponding set on some NerdNation glasses. ## Challenges we ran into The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging! Finding the best video frame to use for the OCR software was also not easy, and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens in the seller's configuration, and dial it to the right distance for holding a book at arm's length.
We also couldn't find magnets until the last minute, made a guess on the magnet mount hole sizes, and had an *exciting* Dremel session to fit them, which resulted in the part cracking and being beautifully epoxied back together. ## Acknowledgements The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentines at this hackathon.
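A minimal sketch of the unigram re-ranking idea from the software section above: score each OCR candidate by its OCR confidence times its corpus probability, so a real English word beats a slightly-more-confident non-word. The toy counts and the multiplicative blend are assumptions:

```python
from collections import Counter

# Unigram counts would come from any large English corpus; toy values here
UNIGRAMS = Counter({"introduction": 120_000, "intreduction": 0})
TOTAL = sum(UNIGRAMS.values())

def best_word(candidates: list[tuple[str, float]]) -> str:
    """Pick the OCR candidate most likely to be a real English word.
    `candidates` are (word, ocr_confidence) pairs."""
    def score(word: str, conf: float) -> float:
        # Add-one smoothing so unseen words aren't impossible, just unlikely
        p_lang = (UNIGRAMS[word.lower()] + 1) / (TOTAL + len(UNIGRAMS))
        return conf * p_lang
    return max(candidates, key=lambda c: score(*c))[0]

print(best_word([("Intreduction", 0.62), ("Introduction", 0.58)]))  # Introduction
```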
## Inspiration There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading). The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case, is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts. ## What it does The user presses a button on the side of the glasses, which begins recording, and upon pressing the button again, recording ends. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud, and submits a POST request to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds that into a transformer network which transcribes the video. Upon finishing, a front-end web application is notified through socket communication, and this results in the front-end streaming the video from Google Cloud as well as displaying the transcription output from the back-end server. ## How we built it The hardware platform is a Raspberry Pi Zero interfaced with a Pi Camera. A Python script runs on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and POST to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
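A minimal sketch of the Haar-cascade face-cropping step in the pipeline above, using OpenCV's bundled frontal-face model; taking only the first detected face per frame is our simplifying assumption:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_crops(video_path: str):
    """Yield the detected face region from each frame; these crops are
    what a lip-reading model would take as input."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        for (x, y, w, h) in faces[:1]:  # assume one speaker in frame
            yield frame[y:y + h, x:x + w]
    cap.release()
```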
## Challenges we ran into * TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance * It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device ## Accomplishments that we're proud of * Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application * Design of the glasses prototype ## What we learned * How to set up a back-end web server using Flask * How to facilitate socket communication between Flask and React * How to set up a web server through localhost tunneling using ngrok * How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks * How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end ## What's next for Synviz * With a stronger on-board battery, a 5G network connection, and a more powerful compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
## Inspiration 1.3 billion people have some sort of vision impairment. They face difficulties in simple day-to-day tasks like reading, recognizing faces, objects, etc. Despite this huge number, there are surprisingly only a few devices in the market to aid them, which can be hard on the pocket ($5,000 to $10,000!!). These devices essentially just magnify the images and only help those with mild to moderate impairment. There is no product in circulation for those who are completely blind. ## What it does The Third Eye brings a plethora of features at just 5% of the cost. We set our minds to come up with a device that provides much more than just a sense of sight and, most importantly, is affordable to all. We see this product as a cutting-edge technology for the future development of assistive technologies. ## Feature List **Guidance** - ***Uses haptic feedback to navigate the user to the room they choose, avoiding all obstacles***. In fact, it's a soothing robotic head massage guiding you through the obstacles around you. Believe me, you're going to love it. **Home Automation** - ***Provides full control over all house appliances***. With our device, you can just call out to **Alexa**, or even use the mobile app, and tell it to switch off appliances directly from the bed now. **Face Recognition** - Recognize friends, and even their emotions (P.S. thanks to **Cloud Vision's** accurate facial recognition!). Found someone new? Don't worry; on your command, we register their face in our database to ensure the next meeting is no longer anonymous and awkward! **Event Description** - ***Describes the activity taking place***. A group of people waving somewhere, and you're still not sure what's going on next to you? Fear not, we have made this device very much alive, as this specific feature gives speech feedback describing the scenic beauty around you. [Thanks to the **Microsoft Azure API**] **Read Up** - You don't need to spend extra bucks on blindness-focused products like braille devices. Whether it be general printed text or a handwritten note, with the help of **Google Cloud Vision** we've got you covered on both ends. **Read Up** not only decodes the text from the image but, using **Google Text-to-Speech**, also converts the decoded data into speech, so a blind person won't face any difficulty reading any kind of books or notes they want. **Object Locator** - Okay, so whether we are blind or not, we all have this bad habit of misplacing things. Even with two eyes, sometimes it's too much pain to find misplaced things in our rooms. And so, we have added the feature of locating most generic objects within the camera frame, along with their approximate locations. You can either ask for the specific object you're looking for or just get feedback on all the objects **Google Cloud Vision** has found for you. **Text-a-Friend** - In a world full of virtual life and social media, we can be pushed back if we don't have access to the fully connected online world. Typing can be difficult at times if you have vision issues, so using the **Twilio API** you can now easily send text messages to saved contacts. **SOS** - Okay, so I am in an emergency, but I can't find and trigger the SOS feature!? Again, thanks to the **Twilio** messaging and phone call services, with the help of our image and sensor data, now any blind person can ***quickly alert the authorities to the emergency along with their GPS location***.
(This includes auto-detection of hazards too) **EZ Shoppe** - It's not an easy job for a blind person to access ATMs or perform monetary transactions independently. And so, taking this into consideration, with the help of the superbly designed **Capital One Hackathon API**, we have created a **server-based blockchain** transaction system which adds ease to your shopping without you being worried about anything. Currently, the server-integrated module supports **customer addition, account addition, person-to-person transactions, merchant transactions, balance check and info, withdrawals and secure payment to vendors**. No need to worry about individual items: with one QR scan, your entire shopping list is generated along with the vendor information and the total billing amount. **What's up Doc** - Monitoring heart pulse rate and using online datasets, we devised a machine learning algorithm with classification labels that describe the person's health. These labels include: "Athletic", "Excellent", "Good", "Above Average", "Average", "Below Average" and "Poor". The function takes age, heart rate, and gender as arguments and performs the computation to provide you with the best assessment of your current heart pulse rate. \*All features above can be triggered from the phone via voice, an Alexa Echo Dot, and even the wearable itself. \*\*Output information is relayed via headphones and Alexa. ## How we built it Retrofit devices (NodeMCU) fit behind switchboards and allow them to be controlled remotely. The **RSSI guidance uses Wi-Fi signal intensity** to triangulate the device's position. An ultrasonic sensor and camera detect obstacles (**OpenCV**) and run the left and right haptic motors according to the obstacle's closeness to the device and its position. We used the **dlib computer vision library** to record and extract features to perform **facial recognition**. **Microsoft Azure Cloud services** take a series of images to describe the activity taking place. We used **Optical Character Recognition (Google Cloud)** for text-to-speech output. We used **Google Cloud Vision**, which classifies and locates objects. The **Twilio API** sends the alert using GPS from the phone when a hazard is detected by the **Google Cloud Vision API**. The QR scanner reads the QR code and uses the **Capital One API** to make secure and fast transactions on a **blockchain network**. Pulse sensor data is taken and sent to the server, where it is analysed using ML models from **AWS SageMaker** to make the health predictions. ## Challenges we ran into Making individual modules was the easier part, but integrating them all together on one piece of hardware (a Raspberry Pi) and getting them to work was something really challenging for us. ## Accomplishments that we're proud of The number of features we successfully integrated to prototype level. ## What we learned We learned to trust in ourselves and our teammates, and that when we do that there's nothing we can't accomplish. ## What's next for The Third Eye Adding a personal assistant to up the game, and so much more. Every person has potential they deserve to unleash; we pledge to level the playing field by taking this initiative forward and strongly urge you to help us in this undertaking.
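A minimal sketch of the SOS text described above, using Twilio's Python SDK; the credentials and phone numbers are placeholders, and the message format is our assumption:

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def send_sos(lat: float, lon: float) -> None:
    """Alert an emergency contact with the wearer's GPS location."""
    client.messages.create(
        to="+15551234567",     # emergency contact (illustrative)
        from_="+15557654321",  # Twilio number (illustrative)
        body=f"SOS: hazard detected. Location: https://maps.google.com/?q={lat},{lon}",
    )
```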
winning
## Inspiration We were inspired by the daily struggle of social isolation. ## What it does Shows the emotion of a text message on Facebook. ## How we built it We built this using JavaScript, the IBM Watson NLP API, a Python HTTPS server, and jQuery. ## Challenges we ran into Accessing the message string was a lot more challenging than initially anticipated. Finding the correct API for our needs and updating in real time also posed challenges. ## Accomplishments that we're proud of The fact that we have a fully working final product. ## What we learned How to interface JavaScript with a Python backend, and how to manually scrape a templated HTML doc for specific keywords in specific locations. ## What's next Incorporate the ability to display alternative messages after a user types their initial response.
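The entry doesn't say which Watson NLP service was used; as one possibility, a minimal sketch of per-message emotion analysis with IBM's current Natural Language Understanding SDK (API key and service URL are placeholders):

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EmotionOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator("YOUR_API_KEY"))  # placeholder key
nlu.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")

def emotions(message: str) -> dict:
    """Return emotion scores (joy, sadness, anger, fear, disgust) for a message."""
    result = nlu.analyze(
        text=message,
        features=Features(emotion=EmotionOptions())).get_result()
    return result["emotion"]["document"]["emotion"]
```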
## Inspiration One of the most exciting parts about hackathons is the showcasing of the final product, well-earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work. ## What it does The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension will automatically scrape the relevant text and send it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry posts. ## How I built it I used UiPath as an automation tool to collect, clean, and label data across thousands of projects in major hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability of a given Devpost entry being amongst the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average number of technical terms used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts using a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up. ## Challenges I ran into Firstly, I am not familiar with website development. It took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part is also tricky. Even with great graphical automation tools at hand, it was still very difficult to do large-scale web scraping for someone relatively inexperienced with website dev like me. ## Accomplishments that I'm proud of I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch. ## What I learned Data collection, hosting an ML model over the cloud, building Chrome extensions with various features ## What's next for Rate The Hack! I want to refine the features and rating scheme
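A minimal sketch of how the server might aggregate the factors listed above into one score. The blend weights, the technical-vocabulary set, and the scikit-learn-style `predict_proba` model are all assumptions, not the extension's actual values:

```python
TECH_TERMS = {"api", "react", "flask", "tensorflow", "docker", "websocket"}  # sample vocab

def rate_entry(text: str, model) -> float:
    """Blend the ML win probability with simple text metrics into a 0-100 score."""
    words = text.lower().split()
    descriptiveness = min(len(words) / 500, 1.0)          # saturates at ~500 words
    jargon = len(set(words) & TECH_TERMS) / len(TECH_TERMS)
    p_win = model.predict_proba([text])[0][1]             # assumed sklearn pipeline
    return 100 * (0.5 * p_win + 0.3 * descriptiveness + 0.2 * jargon)
```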
## Inspiration A passion for cryptocurrencies and AI! ## What it does Allows users to have a conversation with an AI chatbot that supports information for over 5000 cryptocurrencies. It gives users a conversational assistant that can get real-time market updates. ## How we built it We used React to build out the frontend messaging UI. Then we built the backend with Node.js and used Microsoft Azure to produce accurate machine learning models. ## Challenges we ran into We ran into a few challenges. The main one was creating a successful machine learning model and building a template string that allowed us to insert the current crypto data. ## Accomplishments that we're proud of We are proud to say that the bot works and has a user interface that one can use to converse with it. It allows for access to the many cryptocurrencies on the market. ## What we learned We learned to use Azure to create a bot with machine learning models. We also learned to build robust machine learning models. ## What's next for Crypto Assistant (Conversational AI Chatbot) More conversation features, more chat options, better UI, and better training data.
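The entry doesn't name its market-data source; as one possibility, a minimal sketch of fetching a real-time price from CoinGecko's public API, which the bot's template string could then be filled from:

```python
import requests

def usd_price(coin_id: str = "bitcoin") -> float:
    """Fetch the current USD price for a coin from CoinGecko."""
    resp = requests.get(
        "https://api.coingecko.com/api/v3/simple/price",
        params={"ids": coin_id, "vs_currencies": "usd"},
        timeout=10,
    )
    return resp.json()[coin_id]["usd"]

print(f"BTC is trading at ${usd_price():,.2f}")
```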
partial
## Inspiration While tackling our university courses, we were faced with the task of implementing (and debugging) various data structures. We found that using traditional debuggers or print statements obscures the structure of the data, making testing and debugging code tedious and unintuitive. We were determined to find a way to take advantage of the inherent properties of data structures. With Visually Study Yo Code, you can see the data structure come together as you step through your code. ## What it does Visually Study Yo Code gives you the choice of using a graphical depiction of your variables to debug your JavaScript code. By right-clicking a variable in the editor, you can open a tab which displays nodes to depict trees, linked lists and custom data structures. Nodes are added, deleted or modified as you step through your code in the debugger, making it easy to find and trace down errors in your algorithms. ## How we built it We created and deployed a Visual Studio Code extension using the Extension API to let us integrate our new functionality into the editor. The graphical representation is constructed using canvas in HTML and displayed to the user in a webview. To access the data, we interfaced with Visual Studio Code's Debug Adapter and parsed the information about the variables into a JSON object. ## Challenges we ran into The functionality of our extension was limited by what the Visual Studio Code Extension API provides. While we originally planned to add a command directly to the debugging menu, the ability to add new commands was constrained to the editor. Furthermore, accessing the editor's colour themes directly was difficult, so we decided to limit ourselves to supporting a dark theme. ## Accomplishments that we're proud of We are proud to have completed a project that we would like to use in the future. In contrast to most hackathon projects, we feel like Visually Study Yo Code benefits us directly by letting us create more robust code more quickly. ## What we learned Since we had not made an extension before, we learned how to use the Visual Studio Code Extension API. Since Visually Study Yo Code is focused on debugging, we learned about the debugging architecture used by Visual Studio Code. ## What's next for Visually Study Yo Code Visually Study Yo Code currently only supports debugging in JavaScript. In the future, we will increase the scope of our extension to include other programming languages. Furthermore, we plan to provide support for custom themes for our graphs. By using SVG, we can make our webviews interactive to make debugging even more effective.
## Inspiration Today, anything can be learned on the internet with just a few clicks. Information is accessible anywhere and everywhere: one great resource being YouTube videos. However, accessibility doesn't mean that our busy lives don't get in the way of our quest for learning. TLDR: Some videos are too long, and so we didn't watch them. ## What it does TLDW - Too Long; Didn't Watch is a simple and convenient web application that turns YouTube and user-uploaded videos into condensed notes categorized by definition, core concept, example and points. It saves you time by turning long-form educational content into organized and digestible text so you can learn smarter, not harder. ## How we built it First, our program either takes in a YouTube link and converts it into an MP3 file or prompts the user to upload their own MP3 file. Next, the audio file is transcribed with AssemblyAI's transcription API. The text transcription is then fed into Co:here's Generate, then Classify, then Generate again to summarize the text, organize it by type of point (main concept, point, example, definition), and extract key terms. The processed notes are then displayed on the website and compiled into a PDF file downloadable by the user. The Python backend built with Django is connected to a ReactJS frontend for an optimal user experience. ## Challenges we ran into Manipulating Co:here's NLP APIs to generate good responses was certainly our biggest challenge. With a lot of experimentation *(and exploration)* and finding patterns in our countless test runs, we were able to develop an effective note generator. We also had trouble integrating the many parts, as it was our first time working with so many different APIs, languages, and frameworks. ## Accomplishments that we're proud of Our greatest accomplishment was also our greatest challenge: the TLDW team is proud of the smooth integration of the different APIs, languages and frameworks that ultimately let us take an MP3 file through many different processes, in both JavaScript and Python, to our final PDF product. ## What we learned With this being only the first or second hackathon for our team of first-year university students, we learned a wealth of technical knowledge, and what it means to work in a team. While every member tackled an unfamiliar API, language or framework, we also learned the importance of communication. Helping your team members understand your own work is how the bigger picture of TLDW comes to fruition. ## What's next for TLDW - Too Long; Didn't Watch Currently TLDW generates a useful PDF of condensed notes in the same order as the video. For future growth, TLDW hopes to become a platform that provides students with more tools to work smarter, not harder: providing a flashcard option to test the user on generated definitions, and ultimately using the Co:here API to also generate questions based on the extracted examples and points.
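A minimal sketch of the transcription step described above, using AssemblyAI's submit-then-poll REST API; the API key is a placeholder and the polling interval is an assumption:

```python
import time
import requests

HEADERS = {"authorization": "ASSEMBLYAI_API_KEY"}  # placeholder key

def transcribe(audio_url: str) -> str:
    """Submit an audio URL for transcription and poll until it completes."""
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        json={"audio_url": audio_url},
        headers=HEADERS,
    ).json()
    while True:
        poll = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}",
            headers=HEADERS,
        ).json()
        if poll["status"] in ("completed", "error"):
            return poll.get("text") or ""
        time.sleep(3)  # assumed polling interval
```

The returned transcript would then be chunked and passed through the Co:here Generate/Classify/Generate pipeline.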
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-join, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page so all the tools are accessible on one screen and transitions between them are easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies we used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under time constraints. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit, adding more relevant tools and widgets and expanding into other fields of work to broaden our user demographic, and including interface customization options so users can personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
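Study Buddy's real backend is Node.js, but the room-based relay pattern it describes maps directly onto the python-socketio library; a minimal sketch (event names and payload shape are assumptions):

```python
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server, e.g. eventlet

@sio.event
def join_room(sid, room):
    # Each study room is a Socket.IO room identified by its URL slug
    sio.enter_room(sid, room)

@sio.event
def code_change(sid, data):
    # Relay an editor edit to everyone else in the same study room;
    # skip_sid avoids echoing the change back to its author.
    sio.emit("code_change", data, room=data["room"], skip_sid=sid)
```

Throttling how often `code_change` fires on the client is one way to avoid the too-many-broadcasts problem the team mentions.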
partial
## Inspiration The idea for SafeOStroll came from those moments of unease many of us experience while walking alone at night. Imagine heading home after a long day or en route to meet friends. You pass through an area that feels unsafe, glancing over your shoulder, alert to every sound. That lingering sense of vulnerability is something most of us can relate to. This inspired us to create SafeOStroll—a personal safety tool designed to be your constant companion. Whether navigating dimly lit streets or unfamiliar environments, SafeOStroll provides reassurance and peace of mind. > > "As someone who often feels uneasy walking at night but has no choice but to do so after work or university, this app is a game changer for me. It gives me confidence knowing I have a companion to talk to—whether it's offering advice on safety or simply a comforting conversation." — Arina > > > Our goal is not just to enhance individual safety but to foster a greater sense of security in communities. By integrating emergency support, community alerts, and AI-driven emotional assistance, SafeOStroll aims to empower individuals and build safer environments worldwide. ## What It Does **SafeOStroll** is your ultimate safety companion, always by your side when you need it most. Whether you’re heading home late, passing through unfamiliar neighborhoods, or just feeling uneasy, SafeOStroll ensures you're never alone. The app’s core feature is its **AI-powered emotional assistant**, providing comfort and guidance in stressful moments. Whether walking down an empty street or facing an uncertain situation, the AI offers **reassurance, advice**, and **support**—like having a trusted companion with you at all times. This AI assistant engages in calming conversations and delivers **personalized suggestions** for handling anxiety or unsafe scenarios. It ensures that even when you're nervous, you're never left feeling isolated. Behind the scenes, SafeOStroll's **emergency features** are always ready to activate. In a crisis, you can trigger an **instant response**—alerting both authorities and nearby users. Your **location is shared**, creating a safety network that mobilizes in real-time. SafeOStroll goes beyond physical safety, addressing emotional well-being by combining **AI-driven support** with a **community safety network**, ensuring users feel connected and secure as they navigate public spaces. ### Stay calm. Stay connected. Stay safe. ## How We Built It Building SafeOStroll involved combining advanced technologies with a user-centered approach, ensuring seamless functionality and safety: ### AI-Powered Emotional Support At the heart of SafeOStroll is our **AI-powered assistant**, developed using **OpenAI’s GPT-4, tts1-hd, and whisper-1 API**. This AI engages in calming conversations and offers **actionable advice** during stressful situations. The AI continuously learns from user interactions, improving its ability to provide personalized support. We also utilized **WebSockets** to ensure real-time communication between users and the AI assistant, creating a more interactive and responsive experience. The goal: ensure users never feel alone. ### Real-Time Emergency Response The app’s emergency features are built on a robust **Django backend**. With one tap, users can send distress signals to **911** and notify nearby users. **Cloudflare** ensures fast and secure transmission of real-time data, offering safety at your fingertips. 
### Mobile-Optimized Frontend Using **React**, we built a **mobile-first** interface that delivers a seamless experience across devices. The app updates user coordinates every 10 seconds, providing real-time tracking in emergencies. ### Design & Security The user interface is both calming and intuitive, with **gradient designs** and **hover effects** creating a sense of reassurance. Data is secured with **encryption**, ensuring all user information stays private. --- ## Challenges We Faced Building SafeOStroll presented unique challenges that tested our technical and creative abilities. ### 1. AI Responsiveness Creating an AI that felt natural while offering **timely advice** was a key challenge. We had to balance providing calming conversations with actionable suggestions, ensuring the AI felt **supportive but not clinical**. Also, making it so that the AI would have its own allocated memory was a challenge we had to overcome. ### 2. Real-Time Location Tracking Implementing accurate **real-time location tracking** without draining users' battery required significant optimization. We needed to maintain frequent updates while minimizing energy consumption. ### 3. Data Privacy & Security Handling sensitive user data, like locations and emergency signals, raised significant privacy concerns. We had to ensure all communications were encrypted while keeping the app responsive. ### 4. User-Friendly Design Creating an intuitive, **reassuring interface** for users in distress was more challenging than expected. We had to ensure that the emergency and AI features were easy to access without overwhelming the user. ### 5. API Integration Since this was our first time integrating the OpenAI API, figuring out how to get the different AI systems (TTS and STT) to interact with the user while still having an optimal response time was challenging. We also had to be careful with the training we gave the AI, as we didn't want the AI to act as a therapist but rather as a friend who can give you specialized advice. --- ## Accomplishments We're Proud Of We achieved several key milestones in developing SafeOStroll, each reflecting our dedication to creating a reliable and secure safety tool. ### 1. AI-Driven Emotional Support Our **AI assistant** offers real-time emotional support using **OpenAI’s GPT-4, tts1-hd, and whisper-1 API**, providing calming conversations and personalized advice that adapts over time through integration. ### 2. Diversity Another part of our project that we are proud of is that we can provide AI conversations in all current live languages, making it so that all users can communicate with the AI through their native tongue. ### 3. Seamless Real-Time Emergency Response We developed a **real-time alert system** that connects users with emergency services and nearby SafeOStroll users. Powered by **Django** and **Cloudflare**, this system ensures distress signals are transmitted securely and swiftly. ### 4. Optimized Location Tracking Our software updates user coordinates every 10 seconds to ensure accurate & precise location tracking without significant battery drain, improving emergency response accuracy. ### 5. User-Centered Design Our **mobile-first** interface prioritizes ease of use, making it simple for users to send alerts, access the AI assistant, and navigate features during stressful moments. ### 6. Robust Data Privacy & Security We ensured all user data is encrypted, providing a secure experience without compromising performance. ### 7. 
### 7. Secured Connection

We secured the connection between our **React** frontend and the otherwise insecure backend host through **Cloudflare**, enhancing the overall security of the application.

---

## What We Learned

The SafeOStroll development journey taught us valuable lessons about technology, design, and user needs.

### 1. User-Centered Design

We learned the importance of **constant iteration** and feedback in creating a user-friendly interface, especially for users in distress.

### 2. AI Empathy

Designing an AI that provides emotional support without seeming robotic was challenging. We learned the importance of natural conversation flow and empathetic responses.

### 3. Security Is Essential

Handling sensitive user data highlighted the need for robust **encryption** and **privacy protocols** to maintain user trust and protect their information.

### 4. Optimizing Real-Time Systems

We gained insight into **optimizing real-time systems**, ensuring fast, reliable, and energy-efficient performance.

### 5. WebSockets

We learned that WebSockets allow stateful communication between server and client, which is what makes ongoing conversations possible.

---

## What's Next for SafeOStroll

SafeOStroll's journey is far from over, and we have exciting plans for the future.

### 1. Expanding AI Capabilities

We plan to further enhance the AI's ability to provide **tailored support**, learning from user interactions to offer more personalized advice.

### 2. Health Data Tracking Integration

In the future, using devices like Fitbit or the Apple Watch, we can track the user's health data (such as heart rate and stress indicators) to let the AI assess the user's real-time situation more accurately.
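The write-up leans on WebSockets for the stateful AI conversations but does not show the wiring. Here is a minimal sketch of how that could look on a Django backend with Channels; the consumer class, the system prompt, and the per-connection `history` list are our assumptions, not SafeOStroll's actual code:

```python
# Hypothetical sketch: a Django Channels consumer that keeps per-connection
# chat memory and relays user messages to OpenAI. Model name and prompt are
# placeholders, not SafeOStroll's real configuration.
import json

from channels.generic.websocket import AsyncWebsocketConsumer
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


class CompanionConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Each socket gets its own "memory": the running message history.
        self.history = [{"role": "system",
                         "content": "You are a calm, supportive walking companion."}]
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        self.history.append({"role": "user",
                             "content": json.loads(text_data)["text"]})
        reply = await client.chat.completions.create(model="gpt-4",
                                                     messages=self.history)
        answer = reply.choices[0].message.content
        self.history.append({"role": "assistant", "content": answer})
        await self.send(text_data=json.dumps({"text": answer}))
```

Keeping the history on the consumer instance is what gives each socket its own "allocated memory," the challenge called out above.

---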
## Inspiration

As women ourselves, we have always been aware that there are unfortunately additional measures we have to take in order to stay safe in public. Recently, we have seen videos emerge online for individuals to play in these situations, prompting users to engage in conversation with a "friend" on the other side. We saw that the idea was extremely helpful to so many people around the world, and wanted to use the features of voice assistants to add more convenience and versatility to the concept.

## What it does

Safety Buddy is an Alexa Skill that simulates a conversation with the user, creating the illusion that there is somebody on the other line aware of the user's situation. It intentionally states that the user has their location shared, and continues to converse with the user until they are in a safe location and can stop the skill.

## How we built it

We built Safety Buddy on the Alexa Developer Console, hosting the audio files on AWS S3 and using a Twilio messaging API to send a text message to the user. On the front end, we created intents to capture what the user said and connected those to the back end, where we used JavaScript to handle each intent.

## Challenges we ran into

While trying to add additional features to the skill, we had Alexa send a text message to the user, which then interrupted the audio that was playing. With the help of a mentor, we were able to handle the asynchronous events.

## Accomplishments that we're proud of

We are proud of building an application that can help prevent dangerous situations. Our Alexa skill will keep people out of uncomfortable situations when they are alone and cannot contact anyone on their phone. We hope to see our creation being used for the greater good!

## What we learned

We were exploring different ways we could improve our skill in the future, and learned about the differences between deploying on AWS Lambda versus Microsoft Azure Functions. We used AWS Lambda for our development, but tested out Azure Functions briefly. In the future, we would further consider which platform to continue with.

## What's next for Safety Buddy

We wish to expand the skill by developing more intents to allow the user to engage in various conversation flows. We can monetize these additional conversation options through in-skill purchases in order to continue improving Safety Buddy and bringing awareness to more individuals. Additionally, we can adapt the skill to support the various languages users speak.
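Safety Buddy's handlers are written in JavaScript; purely as an illustration of the intent-plus-text-message flow described above, here is an equivalent sketch using the Python ASK SDK and Twilio. The intent name, phone numbers, and credentials are placeholders:

```python
# Hypothetical sketch of an Alexa intent handler that also fires a Twilio
# text, mirroring the flow described above (the team's real code is in JS).
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from twilio.rest import Client


def send_checkin_text():
    # Sends the "location shared" style message to a trusted contact.
    Client("TWILIO_SID", "TWILIO_TOKEN").messages.create(
        body="Safety Buddy: I'm walking home now, location shared.",
        from_="+15550001111", to="+15552223333")


class StartWalkIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("StartWalkIntent")(handler_input)

    def handle(self, handler_input):
        send_checkin_text()
        speech = "Okay, I'm here with you. How was your day?"
        return handler_input.response_builder.speak(speech).ask(speech).response


sb = SkillBuilder()
sb.add_request_handler(StartWalkIntentHandler())
handler = sb.lambda_handler()  # entry point when deployed on AWS Lambda
```

The asynchronous-events challenge mentioned above shows up here too: the text send and the speech response have to be sequenced so one does not interrupt the other.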
## Inspiration

We aim to reduce the impact of such problems by introducing singulARity and redefining how people interact online. We strive to motivate a healthy, positive, and fun augmented reality environment.

## What it does

SingulARity is a platform built around one core value: an anonymous medium that promotes positive mental and physical health, and helps people see the charm of their surroundings.

## How we built it

We used Google Cloud with Firebase Storage alongside echoAR to store location and backend data. For the frontend, we used JavaScript with the VueJS framework to implement our UI, along with the Google Maps API to help us reach our goal.

## Challenges we ran into

We faced many difficulties collaborating online. For example, communication through an online platform is inefficient and often delays development.

## Accomplishments that we're proud of

Finishing the MVP of this project over the weekend.

## What we learned

* A brand new tech stack from the sponsors.
* What it feels like to work on a team purely online.

## What's next for singulARity

* Users will be able to challenge their friends by adding their own pins to the map.
* Each visited pin will give the user points that can be redeemed for prizes.
* View your AR designs in real life! Put your ideas in text, images, and videos in 3D, and show them to the world!
* Based on each user's weekly interactions (e.g., their step count), we can analyze the data with ML and generate pins on the map to further increase their activity.
losing
## Inspiration

Our inspiration stemmed from the desire to implement a machine learning / A.I. API.

## What it does

Taper analyzes images using IBM's Watson API and our custom classifiers. This data is used to query the USDA food database and return nutritional facts about the product.

## How we built it

Using Android Studio and associated libraries, we created the UI in the form of an Android app. To improve Watson's image recognition, we created our own custom classifier to recognize specific product brands.

## Challenges we ran into

For most of us this was our first time using both Android Studio and Watson, so there was a steep initial learning curve. Additionally, we attempted to use Microsoft Azure alongside Watson but were unsuccessful.

## Accomplishments that we're proud of

* Successfully integrating the Watson API into an Android app.
* Training our own visual recognition classifier using Python and bash scripts.
* Retrieving a product's nutritional information based on data from visual recognition.

## What we learned

We experienced and learned the difficulty of product integration, and we learned how to better consume APIs.

## What's next for Taper

* Creating a cleaner UI
* Text analysis of nutritional data
* Day-to-day nutrition tracking
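Purely as a sketch of the pipeline described above: classify a photo with a custom Watson Visual Recognition classifier, then look the best-scoring label up in the USDA FoodData Central search API. The classifier ID, API keys, and field choices are assumptions, not Taper's actual code:

```python
# Hypothetical sketch: Watson names the product, USDA supplies the nutrients.
import requests
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

watson = VisualRecognitionV3(version="2018-03-19",
                             authenticator=IAMAuthenticator("WATSON_APIKEY"))


def classify_food(image_path):
    with open(image_path, "rb") as img:
        result = watson.classify(
            images_file=img,
            classifier_ids=["food_brands_classifier"]).get_result()
    classes = result["images"][0]["classifiers"][0]["classes"]
    return max(classes, key=lambda c: c["score"])["class"]  # best label


def usda_nutrition(food_name):
    resp = requests.get("https://api.nal.usda.gov/fdc/v1/foods/search",
                        params={"api_key": "USDA_KEY", "query": food_name})
    return resp.json()["foods"][0]["foodNutrients"]  # first hit's nutrients


print(usda_nutrition(classify_food("snack.jpg")))
```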
## Inspiration

We wanted to build a technical app that is actually useful. Scott Forstall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also actually be useful. Going to the doctor is inconvenient and not usually immediate, and a lot of times it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and make healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases. This made it very easy for us to choose to build a model using this data that would allow users to self-diagnose skin problems.

## What it does

Our ML model has been trained on hundreds of samples of diseased skin to distinguish among a wide variety of malignant and benign skin diseases. We have a mobile app that lets you take a picture of a patch of skin that concerns you, runs it through our model, and tells you what our model classified your picture as. Finally, the picture also gets sent to a doctor with our model's results, and the doctor can override that decision. This new classification is then rerun through our model to reinforce correct outputs and penalize wrong outputs, i.e., adding a reinforcement learning component to our model as well.

## How we built it

We built the ML model in IBM Watson from public skin disease data from ISIC (the International Skin Imaging Collaboration). We have a platform-independent mobile app built in React Native using Expo that interacts with our ML model through IBM Watson's API. Additionally, we store all of our data in Google Firebase's cloud, where doctors have access to it to correct the model's output if needed.

## Challenges we ran into

Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, which prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app.

## Accomplishments that we're proud of

Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson.

## What we learned

Web frameworks are extremely complex, with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform-independent, they can be much harder to use than platform-specific SDKs.

## What's next for AEye

Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework.

See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679>
## Inspiration

Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year due to nutrition- and obesity-related diseases such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track everything they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake?

Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.

## What it does

macroS integrates seamlessly with the Google Assistant on your smartphone and lets you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating every day, conveniently and without hassle, with the powerful built-in natural language processing model. They can view their account in a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.

## How we built it

DialogFlow and the Google Actions Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food-eaten entry and simply a request for a nutritional breakdown. We deployed our functions, written in node.js, to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to Nutritionix, which provides NLP for querying a database of over 900k grocery and restaurant foods. A Mongo database stores user accounts and passes data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/JavaScript.

## Challenges we ran into

Learning how to use the different APIs and the Google Actions Console to create intents, contexts, and fulfillment was challenging on its own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, the data we needed for the Nutritionix queries was often nested deep within various JSON objects that were being thrown all over the place between the voice assistant and the cloud functions.
The team was finally able to find what they were looking for after spending a lot of time in the Firebase logs. In addition, the entire team lacked any experience using natural language processing and voice-enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.

## Accomplishments that we're proud of

We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large numbers of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds** that would otherwise take upwards of 30 minutes of tedious Google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice-enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering a product in the short amount of time that we had, with the levels of experience we came into this hackathon with.

## What we learned

We made and deployed the cloud functions that integrated with our Google Actions Console and trained the NLP model to differentiate between a food log and a nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation of the power of voice-enabled technologies. Team members who were interested in honing their front-end skills also got the opportunity to do that by working on the actual web application. This was also most team members' first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project, but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.

## What's next for macroS

We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable, and once the database is complete, it can be made valuable to anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be; in addition, we could build custom queries for certain profiles of individuals. For example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no, because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this!
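The team's fulfillment ran as node.js cloud functions; as an equivalent sketch in Python, this is roughly what the Nutritionix natural-language nutrients call behind those instant breakdowns could look like. The app ID, app key, and the fields pulled out are placeholders:

```python
# Hypothetical sketch of the Nutritionix "natural language" nutrients call
# described above. Credentials and field selection are placeholders.
import requests


def nutrition_breakdown(meal_text):
    resp = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": "NUTRITIONIX_APP_ID",
                 "x-app-key": "NUTRITIONIX_APP_KEY"},
        json={"query": meal_text})
    foods = resp.json()["foods"]
    return {f["food_name"]: {"calories": f["nf_calories"],
                             "sugar_g": f["nf_sugars"],
                             "protein_g": f["nf_protein"]} for f in foods}


print(nutrition_breakdown("a cup of grapes, 12 almonds, and a big mac"))
```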
partial
## Inspiration

When you're in a mental health crisis, taking care of yourself and remembering your coping strategies is difficult. Texting your friends for help is exhausting and can feel impossible. Under Wraps allows you to send messages for help to your friends in a time of crisis (no stress, all at just the click of a button!), while guiding you through steps to take care of yourself.

## What it does

A panic button that alerts your support network that you need help, and a place to plan your crisis recovery plan (WRAP) ahead of time. Use stress tolerance tools to reduce your distress in a time of need. Ask for support from others while providing support for yourself. **With this app, you'll have it all under wraps!**

## How we built it

Flutter and Dart.

## Accomplishments that we're proud of

## What we learned

## What's next for Under Wraps
## Inspiration

Inspired by mental health needs and the popular app BeReal, we thought it was important for users to have a space to look inwards, reflect on their feelings, and support themselves.

## What it does

It prompts users to say how they're doing and complete one self-care activity. Once that is completed, we have a large range of other activities available to browse.

## How we built it

We used the Android Firebase hackpack to get started, working in Android Studio with Java and XML files. We did everything from mental health research to full-stack development.

## Challenges we ran into

Setting up the necessary tools was a large barrier coming from different platforms. Android Studio was also a learning curve, since we are both complete app dev beginners and had never used any similar IDE.

## Accomplishments that we're proud of

Creating a finished product that's straightforward yet effective and has the potential to help people much like ourselves.

## What we learned

We learned about the full process of brainstorming ideas, conceptualizing a product, and implementing those ideas into a completed interface.

## What's next for HAY (How Are You?)

We'd love to do more research and include accessible citations for those sources, and make the UI more engaging and easy to use. We'd also like to add more tools for users, such as goal tracking and achievements for continued self-care.
## Inspiration

The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to: commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet so as not to become a burden on those around them. Over time, this takes a toll on one's well-being, so we decided to tackle this issue in a creative yet simple way.

## What it does

VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match two users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others.

## How we built it

We began with building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that could detect common words from the user inputs and pair up two users in the queue to start messaging (see the sketch after this section). Then we integrated, tested, and refined how the app worked.

## Challenges we ran into

One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work, we finally got the API working with Axios.

## Accomplishments that we're proud of

In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of mind, it's a good idea to have some resources available to them.

## What we learned

Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are associated with different emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result!

## What's next for VenTalk

There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even building a web app version. We also want to add more personal features, such as a personal locker of things that make you happy (like a playlist, a subreddit, or a Netflix series).
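The keyword-matching step can be sketched in a few lines. This is our illustration of the idea, not VenTalk's actual algorithm; the queue format and the similarity threshold are assumptions:

```python
# A minimal sketch: score waiting users by keyword overlap, match the best pair.
def overlap(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)  # Jaccard similarity of topic keywords


def match_users(queue):
    """queue: list of (user_id, [keywords]); returns the best pair or None."""
    best, best_score = None, 0.0
    for i in range(len(queue)):
        for j in range(i + 1, len(queue)):
            score = overlap(queue[i][1], queue[j][1])
            if score > best_score:
                best, best_score = (queue[i][0], queue[j][0]), score
    return best if best_score >= 0.2 else None  # assumed minimum similarity


print(match_users([("ana", ["exams", "stress"]),
                   ("raj", ["commute"]),
                   ("mei", ["stress", "exams", "sleep"])]))  # ('ana', 'mei')
```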
losing
## Inspiration

We began thinking about this idea the night before the competition, while we were all frantically trying to finish homework since we would be at Cal Hacks over the entire weekend. Additionally, a few members of our team have ADHD, and focusing for extended periods of time can be difficult. We looked at some other websites that claimed to have similar functionality, but they often failed to effectively summarize anything beyond a basic news article.

## What it does

Our software takes in an image and, after parsing the layout of the document, creates a summary of the document using custom models that we made with Cohere's API, so that a more accurate summary can be created depending on the subject and type of content.

## How we built it

We used a library called LayoutParser to identify relevant text from the submitted image, and we took advantage of some publicly available datasets and Cohere's Finetune feature to create our own models for different types of documents. The website itself was programmed using HTML, CSS, and vanilla JS integrated with Flask.

## Challenges we ran into

This was our first time creating anything with this sort of functionality, so integrating the front end and back end was new to us. Understanding and reformatting the datasets was also confusing at first, since we don't have much experience with data processing, but we were eventually able to get existing datasets into the format that we wanted.

## Accomplishments that we're proud of

We're really happy about everything, since it's our first hackathon and our first project of this kind in general. There's a lot we could do better, but we're proud that we were able to actually have something to submit.

## What we learned

We learned a lot about frameworks like Flask to integrate the back end with the front end of the website, and we learned a lot about data processing. Cohere's API also introduced us to the field of NLP and some of the advanced functionality that powerful models can offer.

## What's next

We would like to create more models for other subjects and document types that we and our friends regularly deal with, like documents that are in Old English or Wikipedia/Wikimedia pages.
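As a sketch of the two stages described above, the following pairs LayoutParser's region detection with a Cohere generation call. The model config, the Tesseract OCR agent, and the finetune ID are all placeholders rather than the team's actual setup:

```python
# Hypothetical sketch: detect text regions in a page image, OCR them, then
# summarize with a (placeholder) Cohere finetune.
import cohere
import cv2
import layoutparser as lp

model = lp.Detectron2LayoutModel("lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config")
ocr = lp.TesseractAgent()
co = cohere.Client("COHERE_API_KEY")


def summarize_page(image_path):
    image = cv2.imread(image_path)[..., ::-1]  # BGR -> RGB
    layout = model.detect(image)
    blocks = [b for b in layout if b.type == "Text"]
    page_text = " ".join(ocr.detect(b.crop_image(image)) for b in blocks)
    out = co.generate(model="custom-textbook-finetune",  # assumed finetune ID
                      prompt=f"Summarize this passage:\n{page_text}\n\nSummary:",
                      max_tokens=200)
    return out.generations[0].text
```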
## Inspiration

We were originally considering creating an application that would take a large amount of text and summarize it using natural language processing. As well, Shirley Wang felt an awkward obligation to incorporate IBM's Watson into the project. As a result, we came up with the concept of putting in an image and getting a summary of the corresponding Wikipedia article.

## What it does

You can input the URL of a picture into the web app, and it will return a brief summary, in bullet-point form, of the Wikipedia article on the object identified within the picture.

## How we built it

We originally considered using Android Studio, but so many problems occurred trying to make the software work with it that we switched over to Google App Engine. We then used Python to build the underlying logic, along with IBM's Watson to identify and classify photos, the Wikipedia API to get information from Wikipedia articles, and Google's Natural Language API to extract only the key sentences and shorten them down to bullet-point form while maintaining the original meaning.

## Challenges we ran into

We spent over 2 hours trying to fix a Google account authentication problem, which occurred because we didn't know how to properly write the path to a file, and PyCharm running apps is different from PyCharm running apps in its own terminal. We also spent another 2 hours trying to deploy the app, because PyCharm had a screwed-up import statement and requirements file that messed up a lot of it.

## Accomplishments that we're proud of

This is our first hackathon and our first time creating a web app, and we're really happy that we managed to actually successfully create something that works.

## What we learned

Sometimes reading the API docs carefully will save you over half of your debugging time in the long run.

## What's next for Image Summarizer

Maybe we'll add a way for users to input a photo directly from their camera or from photos saved on their computer.
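A minimal sketch of the pipeline described above, with Watson naming the object and the `wikipedia` package fetching a short summary to bulletize; the key, version string, and the naive bullet splitter are our assumptions:

```python
# Hypothetical sketch: image URL -> Watson label -> Wikipedia summary bullets.
import wikipedia
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

watson = VisualRecognitionV3(version="2018-03-19",
                             authenticator=IAMAuthenticator("WATSON_APIKEY"))


def summarize_image(url):
    result = watson.classify(url=url).get_result()
    label = result["images"][0]["classifiers"][0]["classes"][0]["class"]
    summary = wikipedia.summary(label, sentences=3)
    # Naive bulletizer standing in for the Google NL key-sentence step.
    return [f"- {s.strip()}" for s in summary.split(". ") if s.strip()]


print("\n".join(summarize_image("https://example.com/photo.jpg")))
```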
## Inspiration 💡

As exam season concluded, we came up with the idea for our hackathon project after reflecting on our own experiences as students and discussing common challenges we face when studying. We found that searching through long textbooks and PDFs was a time-consuming and frustrating process, even with search tools such as CTRL + F. We wanted to create a solution that could simplify this process and help students save time. Additionally, we were inspired by the fact that our tool could be particularly useful for students with ADHD, dyslexia, or anyone who has difficulty reading large pieces of text.

Ultimately, our goal was to create a tool that could help students focus on learning, and learning efficiently, rather than spending unnecessary time searching for information.

## What it does 🤨

[docuMind](https://github.com/cho4/PDFriend) is a web app that takes a PDF file as input and extracts its text to feed GPT-3.5. This allows for summaries and accurate answers grounded in the textbook's information.

## How we built it 👷‍♂️

To bring docuMind to life, we employed ReactJS for the frontend, while using Python with the Flask web framework and an SQLite3 database. After extracting the text from the PDF file and doing some data cleaning, we used OpenAI to generate word embeddings, which we serialized with pickle and stored in our database. We then passed our prompts and data to LangChain in order to provide a suitable answer to the user. In addition, we allow users to create accounts, log in, and access chat history using SQLite queries.

## Challenges we ran into 🏋️

One of the main challenges we faced during the hackathon was coming up with an idea for our project. We had a broad theme to work with, but it was difficult to brainstorm a solution that would be both feasible and useful.

Another challenge we encountered was our lack of experience with Git, which at one point caused us to accidentally delete a source folder; we spent a good chunk of time recovering it. This experience taught us the importance of backing up our work regularly and being more cautious when using Git.

We also ran into some compatibility issues with the technologies we were using. Some of the tools and libraries we wanted to incorporate into our project were either not compatible with each other or presented problems, which required us to find workarounds or alternative solutions.

## Accomplishments that we're proud of 🙌

Each member of our team has different things we're proud of, but generally we are all proud of the project we managed to put together despite our unfamiliarity with many of the technologies and concepts employed.

## What we learned 📚

We became much more familiar with the tools and techniques used in natural language processing, as well as frontend and backend development, connecting the two, and deploying an app. This experience has helped us develop our technical skills and knowledge in this area and has inspired us to continue exploring this field further.

Another important lesson we learned during the hackathon was the importance of time management. We spent a large portion of our time brainstorming and trying to come up with a project idea, which led to being slightly rushed when it came to the execution of our project.

We also learned the importance of communication when working in a team setting. Since we were working on separate parts of the project at times, it was essential to keep each other updated on our progress and any changes we made.
This helps prevent accidents like the code deletion mentioned above, or a teammate falling so far behind that they can't push their code to the repository. Additionally, we learned the value of providing clear and concise documentation to help others understand our code and contributions to the project.

## What's next for docuMind 🔜

To enhance docuMind's usability, we intend to implement features such as scanning handwritten PDFs, image and diagram recognition, multi-language support, audio input/output, and cloud-based storage and collaboration tools. These additions could greatly expand the tool's utility and help users easily organize and manage their documents.
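As an illustration of the storage step described under "How we built it": embed cleaned chunks with OpenAI, serialize with pickle, and stash the result in SQLite. The chunking, table schema, and embedding model are assumptions, not docuMind's actual code:

```python
# Hypothetical sketch of embedding PDF chunks and pickling them into SQLite.
import pickle
import sqlite3

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def store_document(doc_id, text, db_path="documind.db"):
    # Fixed-width chunks stand in for whatever cleaning/splitting was used.
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=chunks)
    vectors = [d.embedding for d in resp.data]
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS embeddings (doc_id TEXT, blob BLOB)")
    con.execute("INSERT INTO embeddings VALUES (?, ?)",
                (doc_id, pickle.dumps(list(zip(chunks, vectors)))))
    con.commit()
    con.close()
```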
losing
## Inspiration

Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. There we found our inspiration. The issue we discovered is that, due to the rise of modern technologies, movies and other films are more captivating than reading a boring book. We wanted to change that.

## What it does

Immersify combines Google's Mobile Vision API, Firebase, IBM Watson, and Spotify's API. It first scans text through our Android application using Google's Mobile Vision API. After the text is stored in Firebase, IBM Watson's Tone Analyzer deduces its emotion. A dominant emotional score is then sent to Spotify's API, where the appropriate music is played to the user. With Immersify, text can finally be brought to life, and readers can feel more engaged in their novels.

## How we built it

On the mobile side, the app was developed using Android Studio. The app uses Google's Mobile Vision API to recognize and detect text captured through the phone's camera. The text is then uploaded to our Firebase database. On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson's Tone Analyzer API to determine the tone of each individual sentence within the paragraph. We then ran our own algorithm to determine the overall mood of the paragraph based on the different tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists play to match the mood of the text.

## Challenges we ran into

Getting Firebase to cooperate with both our mobile app and our web app was difficult for the whole team. Querying the API took multiple attempts, as our POST request to IBM Watson was out of sync. In addition, the text recognition function in our mobile application did not perform as accurately as we anticipated.

## Accomplishments that we're proud of

Some accomplishments we're proud of are successfully using Google's Mobile Vision API and IBM Watson's API.

## What we learned

We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we had never worked with before. Aside from the technical aspects, as a team we learned to collaborate to tackle all the tough challenges we encountered.

## What's next for Immersify

The next step for Immersify is to incorporate this software with Google Glass. This would eliminate the two-step process of having to take a picture in an Android app and go to the web app to generate a playlist.
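The mood-aggregation step described above can be sketched as a simple fold over Tone Analyzer's per-sentence output. The playlist mapping and the fallback are our assumptions rather than Immersify's actual algorithm:

```python
# A minimal sketch: sum Watson Tone Analyzer scores per sentence, pick the
# dominant emotion, and map it to a playlist. Playlist IDs are placeholders.
from collections import defaultdict

PLAYLISTS = {"joy": "spotify:playlist:upbeat",
             "sadness": "spotify:playlist:calm",
             "fear": "spotify:playlist:tense",
             "anger": "spotify:playlist:dark"}


def dominant_mood(tone_response):
    """tone_response: Tone Analyzer JSON with per-sentence 'sentences_tone'."""
    totals = defaultdict(float)
    for sentence in tone_response.get("sentences_tone", []):
        for tone in sentence["tones"]:
            totals[tone["tone_id"]] += tone["score"]
    return max(totals, key=totals.get) if totals else "joy"


def playlist_for(tone_response):
    return PLAYLISTS.get(dominant_mood(tone_response), PLAYLISTS["joy"])
```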
## Inspiration

We explored IBM Watson and realized its potential, with features that enable people to make anything they want using its cloud skills. We all read, and we always want to read books and articles that suit our taste. We made this easier with our web app: just upload a PDF file and get detailed entities, keywords, concepts, and emotions displayed visually in our dashboard.

## What it does

Our web app analyzes the content of articles using IBM Watson NLU and displays entities, keywords, concepts, and emotions graphically.

## How we built it

Our backend is developed using Spring Boot and Java, while the frontend is designed using Bootstrap and HTML. We used d3.js for the graphical representation of the data. The content of the article is read using the Apache Tika framework.

## Challenges we ran into

Completing a project within 24 hours was a big challenge. We also struggled to connect the frontend and backend. Fortunately, we found a template and leveraged it to develop our project.

## Accomplishments that we're proud of

We are proud to say that we worked as a team aiming for a specific prize, and we were able to finish the project with pretty much all the features we wanted.

## What we learned

We learned the potential of IBM Watson NLU and other IBM Cloud technologies. We also learned different technologies, like d3.js and Spring Boot, which we were not familiar with.

## What's next for Know before you read

We want this app to be accessible to more people, and we are planning to deploy it after finishing up the UI.
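The project's backend is Java/Spring; purely as an equivalent sketch in Python, this is roughly what the Watson NLU request behind the dashboard looks like, asking for the same four feature sets (version string, key, and limits are placeholders):

```python
# Hypothetical sketch of the NLU call: entities, keywords, concepts, emotion.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, KeywordsOptions, ConceptsOptions, EmotionOptions)

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01", authenticator=IAMAuthenticator("NLU_APIKEY"))


def analyze_article(text):
    # Returns the JSON the d3.js dashboard would visualize.
    return nlu.analyze(
        text=text,
        features=Features(entities=EntitiesOptions(limit=10),
                          keywords=KeywordsOptions(limit=10),
                          concepts=ConceptsOptions(limit=5),
                          emotion=EmotionOptions())).get_result()
```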
## Inspiration

I wanted to have something of this time, so a young audience would be attracted to it.

## What it does

Click buttons on the website to hear different popular Soulja Boy phrases.

## How I built it

HTML and CSS, and Adobe Illustrator for the drawing.

## Challenges I ran into

I tried to use Android Studio and the Twilio APIs for the first time, but I had a lot of challenges.

## Accomplishments that I'm proud of

Staying focused and learning a bunch of new code skills.

## What I learned

Never give up; even if something is too tough, find an alternative path.

## What's next for Soulja Boy's Sound Box

An app one day!
partial
## Inspiration

We're a trio of a violinist, a cellist, and a calligrapher who all met in Stanford's intro CS course last fall. We loved it so much that we knew we had to keep going, and TreeHacks was the perfect opportunity to immerse ourselves in our first hackathon. We knew we wanted to combine our interests in music, art, and CS to create something special, and so AudioArt was born.

## What it does

AudioArt is a unique platform that detects audio input and uses varying pitches to dynamically generate a phyllotactic spiral in real time. The program detects pitches in sequence and uses the frequency of each pitch to determine the seed angle and RGB color of a phyllotactic spiral that is generated specifically for that pitch. The resulting phyllotactic spirals are displayed in a graphics window and create a colorful, animated piece of digital artwork.

## How we built it

We implemented open-source pitch-detection code to detect audio input and convert it to numerical data. We then designed a mathematical algorithm that uses Java Swing interactors to generate a customized phyllotactic spiral from this data.

## Challenges we ran into

Implementing multithreading in order to accomplish multiple tasks concurrently, while also transferring information to one of the threads.

## Accomplishments that we're proud of

We're incredibly proud to have finished our first ever hackathon! All three of us have less than two quarters of computer science under our belts, so successfully building AudioArt within 36 hours was an exciting achievement.

## What we learned

We learned to be persistent, even when we encountered bugs that seemed impossible to fix. Furthermore, we learned to successfully integrate open-source software into our code and to use multithreading to run multiple processes simultaneously. Most of all, we experienced what it was like to code under a time crunch.

## What's next for AudioArt

We hope that in the future, we'll be able to extend AudioArt to generate many more works of art based on audio input. Our ultimate goal is to create an immersive experience that seamlessly bridges music and visual art, and which uses elements like volume, tone, tempo, and dissonance to represent unique patterns through art.
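The spiral math is easy to sketch. The team's renderer is Java Swing; this Python/matplotlib version, and in particular the way frequency is mapped to seed angle and color, is our assumption about the flavor of the algorithm, not their exact formulas:

```python
# A minimal phyllotaxis sketch: pitch perturbs the golden angle and sets hue.
import math
import matplotlib.pyplot as plt


def phyllotaxis_points(freq_hz, n=300, scale=4.0):
    # 137.508 degrees is the golden angle; the pitch nudges it (assumed rule).
    angle = math.radians(137.508 + (freq_hz % 100) / 10.0)
    return [(scale * math.sqrt(i) * math.cos(i * angle),
             scale * math.sqrt(i) * math.sin(i * angle)) for i in range(n)]


def draw(freq_hz):
    xs, ys = zip(*phyllotaxis_points(freq_hz))
    hue = (freq_hz % 880) / 880.0  # assumed pitch-to-color mapping
    plt.scatter(xs, ys, c=[(hue, 0.4, 1 - hue)], s=12)
    plt.axis("equal")
    plt.show()


draw(440.0)  # A4
```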
## ABOUT OUR PROJECT

Our project is Chromatisound, a collection of audio visualization APIs intended to be used by the hearing impaired and in music education. Our inspiration for this project came from our desire to share our interest in music with the world in a new and exciting way. We realized that we could accomplish this by translating the characteristics of music (amplitude, pitch, rhythm) into visual and analytical form factors.

The result of our efforts is a UI that procedurally generates an image based on sound snippets, called ChromaSound. Our algorithm takes sound data as amplitude over time and outputs the following parameters: shape, size, color, and position as a vector. We use signal processing to identify the relevant frequencies within our data and use them to plot a range of colors, shapes, and vector angles. Since the signal amplitude dictates the size of the shape and the vector magnitude, each sound snippet generates a completely unique image based on the loudness and pitch of the sounds heard.

Our other work for this project includes two API backends that process sound information in real time: Chordify and VoiceHarmony. Each of these programs performs the Fourier transform on small chunks of data at a time, identifying which frequencies are most relevant. Once isolated, the frequencies are fed into a chord recognition API called pychord to determine the musical chord or chord progression the sound data represents.

In the future, we hope to implement these systems, both sound analysis and visualization, in a web app to process sound input in real time, creating a new form factor in which sound can be experienced. We hope you find our project interesting; please be sure to let us know what you think!

## ABOUT US

We are a group of high school students from the Haverford School with a shared passion for innovation in software engineering. Our team members, freshmen Elijah Lee and Josiah Somani, junior Alexander Greer, and senior Alex Sun, collectively hold experience in graphic and web-app design, signal processing, and data analysis in a host of languages. Our goal in attending this hackathon was to learn as much as we could from the many talented computer scientists and engineers working alongside us, and hopefully create something cool along the way! We hope you take a look through our project and let us know what you think!
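A minimal sketch of the Chordify-style loop described above: FFT a chunk of samples, keep the strongest peaks, convert them to note names, and ask pychord what chord they spell. The peak-picking heuristic is our assumption:

```python
# Hypothetical sketch: chunk of samples -> dominant frequencies -> chord name.
import numpy as np
from pychord import note_to_chord

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]


def freq_to_note(freq):
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    return NOTES[midi % 12]


def chord_from_chunk(samples, rate=44100, n_peaks=3):
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    peaks = freqs[np.argsort(spectrum)[-n_peaks:]]  # naive peak picking
    notes = sorted(set(freq_to_note(f) for f in peaks if f > 20))
    return note_to_chord(notes)


# A synthetic C major triad (C4, E4, G4) as a smoke test.
t = np.linspace(0, 1, 44100, endpoint=False)
wave = sum(np.sin(2 * np.pi * f * t) for f in (261.63, 329.63, 392.0))
print(chord_from_chunk(wave))  # expect something like [<Chord: C>]
```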
> Domain.com domain: IDE-asy.com

## Inspiration

Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry and giving everyone equal access to express their ideas in code.

## What it does

Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudo code, and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE.

## How we built it

We used React for the frontend and the recorder.js API for the user voice input. We used RunKit for the in-browser IDE. We used Python and Microsoft Azure for the backend: Microsoft Azure processes user input with the cognitive speech services modules and provides syntactic translation for the frontend's IDE.

## Challenges we ran into

> "Before this hackathon I would usually deal with the back-end; however, for this project I challenged myself to experience a different role. I worked on the front end using React, as I do not have much experience with either React or JavaScript, and so I put myself through the learning curve. It didn't help that this hackathon was only 24 hours; however, I did it. I did my part on the front-end and I now have another language to add to my resume. The main challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*

> "Working with blobs and voice data in JavaScript was entirely new to me." *-Isaac*

> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However, with the aid of recorder.js and Python Flask, we were able to properly implement the Azure model." *-Amir*

> "I had never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing Python to hit API endpoints was unfamiliar to me at first; however, with extended effort and exploration my team and I were able to implement the model into our hack. Now, with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*

## Accomplishments that we're proud of

> "We had a few problems working with recorder.js, as it used many outdated modules; as a result, we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*

> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*

> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse a user's pseudo code into properly formatted code to provide to our website's IDE."
*-Amir*

> "Being able to properly integrate and interact with Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*

## What we learned

> "I learned how to connect the backend to a React app, and how to work with the voice recognition and recording modules in React. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure's servers." *-Iyad*

> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*

> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web apps. It was a unique experience integrating a Flask app with Azure cognitive services. The challenging part was getting Speaker Recognition to work, which unfortunately seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speech to Text cognitive models, and I ended up creating a neat API for our app." *-Amir*

> "The biggest thing I learned was how to generate, call, and integrate with Microsoft Azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience." *-Kris*

## What's next for Quick Code

We plan on continuing development and making this product available on the market. We first hope to include more functionality within JavaScript, then extend support to other languages. From there, we want to integrate a group development environment where users can work on files and projects together (version control). During the hackathon we also planned to add voice recognition that recognizes and highlights which user is inputting (speaking) which code.
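For illustration, the recognition step the quotes describe can be sketched with Azure's Python Speech SDK, followed by a toy pseudo-code-to-JavaScript lookup. The key, region, and phrase table are placeholders, and the real project did far richer syntactic translation:

```python
# Hypothetical sketch: transcribe a recorded clip with Azure Speech to Text,
# then map the pseudo-code phrase to a JavaScript snippet.
import azure.cognitiveservices.speech as speechsdk

TEMPLATES = {  # assumed pseudo-code phrases -> JavaScript snippets
    "declare a variable": "let x;",
    "print hello world": "console.log('hello world');",
}


def transcribe(wav_path):
    config = speechsdk.SpeechConfig(subscription="AZURE_KEY", region="eastus")
    audio = speechsdk.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config,
                                            audio_config=audio)
    return recognizer.recognize_once().text.lower().rstrip(".")


def to_snippet(wav_path):
    return TEMPLATES.get(transcribe(wav_path), "// no matching snippet")
```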
losing
## Inspiration

Companies lack insight into their users, audiences, and marketing funnel. This is an issue I've run into on many separate occasions. Specifically:

* while doing cold marketing outbound, I need better insight into the key variables of successful outreach
* while writing a blog, I have no idea who reads it
* while triaging inbound, I don't know which users to prioritize

Given a list of user emails, Cognito scrapes the internet, finding public information about users and the companies they work at. With this corpus of unstructured data, Cognito allows you to extract any relevant piece of information across users. An unordered collection of text and images becomes structured data relevant to you.

## A Few Example Use Cases

* Startups going to market need to identify where their power users are and their defining attributes. We allow them to ask questions about their users, helping them define their niche and better focus outbound marketing.
* SaaS platforms such as Modal have trouble with abuse. They want to ensure people joining are not going to abuse the platform. We provide more data points for better judgments, such as taking into account how senior a developer a user is and the types of companies they used to work at.
* VCs such as YC have emails from a bunch of prospective founders and highly talented individuals. Cognito would allow them to ask key questions, such as which companies people are flocking to work at and who the highest-potential people in their network are.
* Content creators, such as authors on Substack, looking to monetize their work have a much more compelling case when coming to advertisers with a good grasp of who their audience is.

## What it does

Given a list of user emails, we crawl the web, gather a corpus of relevant text data, and allow companies/creators/influencers/marketers to ask any question about their users/audience. We store these data points and allow for advanced querying in natural language.

[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)

## How we built it

We orchestrated 3 ML models across 7 different tasks in 30 hours:

* search-result person info extraction
* custom field generation from scraped data
* company website details extraction
* facial recognition for age and gender
* NoSQL query generation from natural language
* Crunchbase company summary extraction
* email extraction

This culminated in a full-stack web app with batch processing via async pub/sub messaging, deployed on GCP using Cloud Run, Cloud Functions, Cloud Storage, Pub/Sub, Programmable Search, and Cloud Build.

## What we learned

* how to be really creative about scraping
* batch processing paradigms
* prompt engineering techniques

## What's next for Cognito

1. predictive modeling and classification using scraped data points
2. scraping more data
3. more advanced queries
4. proactive alerts

[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
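Of the seven tasks listed above, the "NoSQL query generation from natural language" step is the easiest to sketch: ask the model for a MongoDB filter as JSON, then run it with pymongo. The prompt, model choice, and collection schema are our assumptions:

```python
# Hypothetical sketch of natural-language-to-MongoDB querying.
import json

from openai import OpenAI
from pymongo import MongoClient

client = OpenAI()
users = MongoClient("mongodb://localhost:27017")["cognito"]["users"]


def query_users(question):
    prompt = ("Fields: name, company, seniority, location.\n"
              f"Return only a MongoDB filter as JSON for: {question}")
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    # Assumes the model returns bare JSON; production code would validate it.
    mongo_filter = json.loads(resp.choices[0].message.content)
    return list(users.find(mongo_filter))


# e.g. query_users("senior engineers at fintech companies in NYC")
```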
## Inspiration

We hate making resumes and customizing them for each employer, so we created a tool to speed that up.

## What it does

A user creates "blocks" which are saved. Then they can pick and choose which ones they want to use.

## How we built it

* [Node.js](https://nodejs.org/en/)
* [Express](https://expressjs.com/)
* [Nuxt.js](https://nuxtjs.org/)
* [Editor.js](https://editorjs.io/)
* [html2pdf.js](https://ekoopmans.github.io/html2pdf.js/)
* [mongoose](https://mongoosejs.com/docs/)
* [MongoDB](https://www.mongodb.com/)
## Inspiration

The inspiration for this project was the recent wildfires in Canada, which have polluted and damaged air quality around the world. With this website, we strive to raise awareness of the environment, which has been getting damaged over the years, and to educate people about air quality.

## What it does

It shows the air quality data of any location worldwide requested by the client. The data includes carbon monoxide, nitrogen dioxide, ozone, sulphur dioxide, and particulate matter (PM2.5 and PM10).

## How we built it

This website is built using HTML and Bootstrap for the frontend and JavaScript with the Air Quality API by API Ninjas for the backend. We divided the work and had different members work on different parts of the code. By doing so, we had to communicate with our teammates about the code we were working on, the different ideas we had, and the challenges we encountered.

## Challenges we ran into

During the hackathon, we encountered many challenges. At first, one challenge was coming up with ideas. Since it was the first hackathon for most of us, we did not know what to expect. After coming up with an idea, we encountered many more problems. First, we encountered the problem of learning JavaScript. Although some of us had experience with JavaScript and HTML, we also had members who were not familiar with these languages or the usage of APIs. However, after this hackathon, we believe that this has been a learning experience for everyone and an experience that enhanced our technical and communication skills.

## Accomplishments that we're proud of

As a beginner-friendly group, we were able to create a fully functioning air quality website that contributes to sustainability and environmental purposes. Furthermore, we are proud that we were able to work effectively as a team and enhance our technical abilities.

## What we learned

We learned how to use APIs more effectively, as well as the value of collaboration between team members, and we learned how to communicate our work.

## What's next for the Air Quality Indexing Website

In the future, we want to implement features that further raise awareness of the environment. Some examples are displaying plastic usage from around the world and garbage detection. We also want to find ways to further optimize our code, reducing the amount of energy required to run it, which will help the environment.
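The site's backend calls run in JavaScript; as an equivalent sketch in Python, a request to API Ninjas' air quality endpoint looks roughly like this. The key, the default city, and the exact response fields are assumptions:

```python
# Hypothetical sketch of the air quality lookup used by the site.
import requests


def air_quality(city="Toronto"):
    resp = requests.get("https://api.api-ninjas.com/v1/airquality",
                        headers={"X-Api-Key": "API_NINJAS_KEY"},
                        params={"city": city})
    data = resp.json()
    # Pull out the pollutants the site displays (assumed field names).
    return {k: data[k]["concentration"]
            for k in ("CO", "NO2", "O3", "SO2", "PM2.5", "PM10") if k in data}


print(air_quality("Vancouver"))
```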
winning
When I went to college I noticed that so many people were into retro gaming. Why? I don't know, and I couldn't care less. It's awesome. Anyone who follows video games knows what Pong is, and after seeing so many recreations (or ripoffs) of classic games on mobile, I was surprised that nobody had recreated it. It's aesthetically simple, so I thought I'd give it a shot, but I'd make it a little bit fancy with motion control. I used the Myo armband's rotation sensing to control your Pong paddle against an unbeatable AI, and your job is to not die. That's pretty much it, but you'd be surprised how addicting and pretty the app is. If you want a hack that uses every API and piece of hardware available at this hackathon, then this isn't for you. But if you want something simple and unbelievably addicting, like the next Doodle Jump or Crossy Road that you somehow play for an hour, then come take a look.
## Inspiration

We didn't know where to go for dinner, and nobody wanted the pressure of picking.

## What it does

It automatically books a Lyft to take you to a random nearby restaurant. We also built a dashboard where restaurants can pay for a premium service: they get feedback from their users and have more customers routed to them, with an option to get more from specific locations.
## Inspiration

After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants.

## What it does

Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players' plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants.

## How we built it

### Back-end

The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for websockets so that we could support multiplayer; this messed us up for hours and hours, until we finally got it working. Aside from this, we have an Arduino that reads the moisture of the soil and the brightness of the surroundings, as well as a camera that captures a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to pass all of the Arduino info to the front end and manage the states, and for storage, we used MongoDB to hold all of the data needed.

### Front-end

The front end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players.

## Challenges we ran into

We had a lot of difficulty setting up Socket.IO and connecting the API with it to the front end and the database.

## Accomplishments that we're proud of

We are incredibly proud of integrating our websockets between frontend and backend and using Arduino data from the sensors.

## What's next for Poképlants

* Since the game was designed with a multiplayer experience in mind, we want to add more social capabilities by creating a friends list and leaderboard
* Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help
* Some users might prefer the feel of a mobile app, so one next step would be to create a mobile version of our project
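A minimal sketch of the multiplayer piece described above using python-socketio: one event for fresh sensor readings, one for battles. The event names, the toy power formula, and the in-memory storage are assumptions, not Poképlants' actual server:

```python
# Hypothetical sketch of a Socket.IO server for plant stats and battles.
import socketio

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)  # serve with any WSGI server, e.g. eventlet
plants = {}  # sid -> latest sensor readings


@sio.event
def sensor_update(sid, data):
    # data: {"moisture": 0-1023, "light": 0-1023} forwarded from the Arduino
    plants[sid] = data
    power = data["moisture"] // 10 + data["light"] // 20  # toy stat formula
    sio.emit("stats", {"power": power}, room=sid)


@sio.event
def battle(sid, opponent_sid):
    mine = plants.get(sid, {}).get("moisture", 0)
    theirs = plants.get(opponent_sid, {}).get("moisture", 0)
    winner = sid if mine >= theirs else opponent_sid
    for player in (sid, opponent_sid):
        sio.emit("battle_result", {"winner": winner}, room=player)
```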
losing
## Inspiration

Personal immunization records: small, yellow accordion cards. These important immunization tracking documents are easily lost when families move or doctors retire, and they are frequently misplaced. Additionally, records are only kept in clinics' databases for a certain number of years; if this card was your only documentation of your vaccines, what will you do? While there is an app that allows you to save a picture and manually input your records, wouldn't it be easier for YOU to read and save, if you ONLY had to take the picture and have the app sort the data into a nice table for you? Introducing Digimmune, the user-friendly digital vaccine tracker!

## What it does

In simple terms, Digimmune allows anyone to keep a digital copy of their personal immunization records by taking a picture of the card. The Google Cloud Vision API then arranges the data into an easy-to-read table, which can be sorted by vaccine name, expiry date (if applicable), and vaccination date. No more lost cards and panicked parents!

## How we built it

Utilizing the Google Cloud Vision API, we were able to parse the data on personal immunization record cards and sort that data into a table. We then created an interactive website with the Django framework, Python, HTML, CSS & JS.

## Challenges we ran into

Once we assigned tasks to each person, the first thing we had to do was watch and read tutorials for Django, web design, and optical character recognition (OCR). We were all unfamiliar with the tools we needed to achieve our end goal and ran into numerous roadblocks while experimenting. However, we were able to overcome these obstacles by persevering and helping one another!

## Accomplishments that we're proud of

Woohoo, we got this in before noon!

## What we learned

Echoing the notes from 'Challenges we ran into,' we learned new languages and skills and, most of all, how everything works together (parsing JSON & returning an object + Django + HTML, CSS & JS).

## What's next for Digimmune

While we created a website designed for a desktop platform, the next step is to move to mobile devices, as shown by the above wireframes! Mobile will create an elevated user experience, as hand gestures allow for simpler navigation for this particular application.
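A minimal sketch of the OCR step described above with the Cloud Vision client library: read the card photo, pull the full text, and split it into rows. The one-entry-per-line heuristic is our assumption, not Digimmune's actual parser:

```python
# Hypothetical sketch: OCR an immunization card and split it into table rows.
from google.cloud import vision


def read_card(image_path):
    client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    full_text = response.text_annotations[0].description
    # Naive row split: assume one vaccine entry per line of recognized text.
    return [line.split() for line in full_text.splitlines() if line.strip()]


for row in read_card("immunization_card.jpg"):
    print(row)
```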
## Inspiration

A blog post about a parent who was unable to get resources to check color vision impairment in their child. 1 in 10 people have some form of color blindness.

## What it does

There are two parts to this website. One is the actual test, which checks whether a person has color blindness. The second is a game mode, an interactive way to check whether or not your color vision is good. This is specifically designed for younger children.

## How we built it

Built using HTML and JavaScript, with the help of Stack Overflow and other websites.

## Challenges we ran into

More than I can list. One challenge was how to grade the test. Another was the formatting for the game.

## Accomplishments that we're proud of

We were able to complete the entire project in the allotted time.

## What we learned

Learned how to code in JavaScript.

## What's next for Color Vision Test

Improving the tests, and making the code more efficient.
## Inspiration 💭
With the current staffing problem in hospitals due to the lingering effects of COVID-19, we wanted to come up with a solution for the people who are the backbone of healthcare. **Personal Support Workers** (or PSWs) are nurses that travel between personal and nursing homes to help the elderly and disabled complete daily tasks such as bathing and eating. PSWs are becoming increasingly needed as the aging population grows in upcoming years, and these at-home caregivers will become even more sought after.
## What it does 🙊
Navcare is our solution to improve the scheduling and traveling experience for Personal Support Workers in Canada. It features an optimized shift schedule with vital information about where and when each appointment is happening, designed to keep appointments within an optimal traveling distance. Patients are assigned to nurses such that the nurse will not have to travel more than 30 minutes outside of their home radius to treat a patient (a sketch of this assignment rule follows at the end of this writeup). Additionally, it features a map that allows a nurse to see all the locations of their appointments in a day, as well as the address of each one, so they can easily travel there.
## How we built it 💪
Django and ReactJS with the Google Maps API.
## Challenges we ran into 😵
Many, many challenges. To start off, we struggled to properly connect our backend API to our front end, which was essential to pass information along and display the necessary data. This was resolved through extensive exploration of the documentation, and experimentation. Next, while integrating the Google Maps API, we continuously faced various dependency issues and worked to resolve more issues relating to fetching data through our Django REST API. Since it was our first time implementing such an infrastructure, to this extent, we struggled at first to find our footing and correctly connect and create the necessary elements between the front and back end. However, after experimenting with the process and testing out different elements and methods, we found a combination that worked!
## Accomplishments that we're proud of 😁
We made it! We all felt as though we learned a tremendous amount. This weekend, we really stepped out of our comfort zones with our assignments and worked on new things that we didn't think we would work on. Despite our shortcomings in our knowledge, we were still able to create an adequately functioning app with a sign-in feature, the ability to make API requests, and some of our own visuals to make the app stand out. If given a little more time, we could have definitely built an industry-level app that could be used by PSWs anywhere. The fact we were able to solve a breadth of challenges in such little time gives us hope that we BELONG in STEM!
## What's next for Navcare 😎
Hopefully, we can keep working on Navcare and add/change features based on testing with actual PSWs. Some features include easier input and tracking of information from previous visits, as well as a more robust infrastructure to support more PSWs.
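A rough sketch of the 30-minute travel-radius rule described above (not the team's actual code; it assumes straight-line distance at an assumed average driving speed, where real routing would use something like the Google Maps Distance Matrix):

```python
# Sketch of the travel-radius assignment rule: keep a patient only if the
# estimated drive from the nurse's home is within ~30 minutes.
from math import radians, sin, cos, asin, sqrt

AVG_SPEED_KMH = 40   # assumed average urban driving speed
MAX_MINUTES = 30

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def reachable_patients(nurse, patients):
    """Filter patients whose estimated drive time is within the limit."""
    out = []
    for p in patients:  # each dict carries hypothetical "lat"/"lon" fields
        km = haversine_km(nurse["lat"], nurse["lon"], p["lat"], p["lon"])
        if (km / AVG_SPEED_KMH) * 60 <= MAX_MINUTES:
            out.append(p)
    return out
```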
losing
Plenty of people hope to travel and explore the world, and it's no surprise that traveling often ends up on bucket lists. However, flights are expensive, and trying to figure out when and to where airline tickets will be cheapest is exhausting. Our project, Dream-Flight, hopes to make this process easier. By creating a visualization of flight prices and data, we hope to make planning those dream trips simpler.

Dream-Flight allows the user to enter their departure location, as well as easily adjustable departure dates, travel duration, and budget. With just a few simple steps, users will see a mapped visualization of airports all over the world that offer flights that fit their travel criteria, marked by circles whose color reflects price point and whose size reflects destination popularity (a toy sketch of this kind of map follows below). The flight visualization provides a crystal-clear view of price points for flights to different locations at different times with just a quick glance.

Whether it's a Spring Break vacation with friends, a trip to visit family, or an exploration abroad, finding a dream travel destination becomes easier with Dream-Flight! Visit <https://dream-flights.herokuapp.com/main.html> to see Dream-Flight in action! To check out our repo, please visit our GitHub: <https://github.com/PaliC/Dream-Flight>
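As a toy illustration of the circle-marker encoding described above (the writeup doesn't give implementation details, so Python's folium library stands in here, and all data fields are hypothetical):

```python
# Toy folium sketch: circle color encodes price vs. budget, radius encodes
# destination popularity. pip install folium; all data fields hypothetical.
import folium

airports = [
    {"name": "NRT", "lat": 35.77, "lon": 140.39, "price": 850, "popularity": 0.9},
    {"name": "CDG", "lat": 49.01, "lon": 2.55, "price": 430, "popularity": 0.8},
]
budget = 500

m = folium.Map(location=[20, 0], zoom_start=2)
for a in airports:
    folium.CircleMarker(
        location=[a["lat"], a["lon"]],
        radius=4 + 10 * a["popularity"],                   # bigger = more popular
        color="green" if a["price"] <= budget else "red",  # green = within budget
        fill=True,
        tooltip=f'{a["name"]}: ${a["price"]}',
    ).add_to(m)
m.save("dream_flight_map.html")
```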
## Inspiration
Planning vacations can be hard. Traveling is a very fun experience but often comes with a lot of stress of curating the perfect itinerary with all the best sights to see, foods to eat, and shows to watch. You don't want to miss anything special, but you also want to make sure the trip is still up your alley in terms of your own interests - a balance that can be hard to find.
## What it does
explr.ai simplifies itinerary planning with just a few swipes. After selecting your destination, the duration of your visit, and a rough budget, explr.ai presents you with a curated list of up to 30 restaurants, attractions, and activities that could become part of your trip. With an easy-to-use swiping interface, you choose what sounds interesting to you, and after a minimum of 8 swipes, let explr.ai convert your opinions into a full itinerary of activities for your entire visit.
## How we built it
We built this app using React TypeScript for the frontend and Convex for the backend. The app takes in user input from the homepage regarding the location, price point, and time frame. We pass the location and price range into the Google API to retrieve the highest-rated attractions and restaurants in the area. Those options are presented to the user on the frontend with React and CSS animations that allow you to swipe each card in a Tinder-style manner. Taking into consideration the user's swipes and initial preferences, we query the Google API once again to get additional similar locations that the user may like, and pass this data into an LLM (using Together.ai's Llama 2 model) to generate a custom itinerary for the user. For each location outputted, we string together images from the Google API to create a slideshow of what your trip would look like, plus an animated timeline with descriptions of each location.
## Challenges we ran into
Front-end and design require a LOT of skill. It took us quite a while to come up with our project, and we originally were planning on a mobile app, but it's also quite difficult to learn completely new languages such as Swift along with new technologies, all in a couple of days. Once we started on explr.ai's backend, we were also having trouble passing the appropriate information to the LLM to get back proper data that we could inject back into our web app.
## Accomplishments that we're proud of
We're proud of the overall functionality and our ability to get something working by the end of the hacking period :') More specifically, we're proud of some of our frontend, including the card-swiping and timeline animations, as well as the ability to parse data from various APIs and put it together with lots of user input.
## What we learned
We learned a ton about full-stack development overall, whether that be the importance of Figma and UX design work, or how to best split up a project when every part is moving at the same time. We also learned how to use Convex and Together.ai productively!
## What's next for explr.ai
We would love to see explr.ai become smarter and support more features. explr.ai, in the future, could get information from hotels, attractions, and restaurants to be able to check availability and book reservations straight from the web app. Once you're on your trip, you should also be able to check in to various locations and provide feedback on each component. explr.ai could have a social media component of sharing your itineraries, plans, and feedback with friends, helping each other better plan trips.
## Inspiration
We love to travel and have found that we typically have to use multiple sources in the process of creating an itinerary. With Path Planner, the user can get their whole trip planned for them in one place.
## What it does
Builds an itinerary for any destination you desire.
## How we built it
Using React and Next.js, we developed the webpage and used the ChatGPT API to pull an itinerary with our given prompts (a minimal sketch of such a call follows at the end of this writeup).
## Challenges we ran into
Displaying a map to help choose a destination.
## Accomplishments that we're proud of
Engineering an efficient prompt that allows us to get detailed itinerary information and display it in a user-friendly fashion.
## What we learned
How to use Leaflet with React and Next.js, and how to utilize Leaflet to embed an interactive map that can be visualized on our web pages. How to engineer precise prompts using OpenAI's Playground.
## What's next for Path Planner
Integrating prices for hotels, cars, and flights, as well as adding a login page so that you can store your different itineraries and preferences. This would require creating a backend as well.
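A minimal sketch of the kind of itinerary prompt call described above (shown with the OpenAI Python client rather than the team's actual code; the prompt wording and model choice are assumptions):

```python
# Minimal itinerary-prompt sketch using the OpenAI Python client (v1 API).
# Assumes OPENAI_API_KEY is set; prompt wording and model are assumptions.
from openai import OpenAI

client = OpenAI()

def build_itinerary(destination: str, days: int) -> str:
    prompt = (
        f"Create a {days}-day itinerary for {destination}. "
        "For each day list morning, afternoon, and evening activities "
        "as a numbered list with one-sentence descriptions."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(build_itinerary("Kyoto", 3))
```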
partial
## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20, so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
* The graphing calculator allows the user to do basic arithmetic operations and graph linear equations.
* The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice versa).
* The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
* The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi chip and an ILI9341 touch screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a mesh-networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this, we ended up switching to a centralized server that hosts the app data (a stand-in sketch of that server follows at the end of this writeup). We also ran into a lot of problems with Arduino strings, since the default string class isn't very good, and we had no OS layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do mesh networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps, such as: a visual-programming tool that supports simple block programming, a text editor, a messaging system, and a more in-depth UI for everything.
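To make the client-server switch concrete, here is a stand-in sketch of the centralized server's role, written in Python/Flask purely for illustration (the team's real backend was Node.js, and the routes and payload shapes here are hypothetical):

```python
# Stand-in sketch of the centralized app-data server (the real one was
# Node.js); routes and payload shapes are hypothetical illustrations.
from flask import Flask, jsonify, request

app = Flask(__name__)
notes = []                  # shared notes posted by teacher/student devices
quiz_answer = {"q": None}   # answer the teacher device sets for a pop quiz

@app.route("/notes", methods=["GET", "POST"])
def handle_notes():
    if request.method == "POST":
        notes.append(request.get_json())  # e.g. {"from": "student-3", "pixels": "..."}
        return jsonify(ok=True)
    return jsonify(notes)

@app.route("/quiz", methods=["GET", "POST"])
def handle_quiz():
    if request.method == "POST":          # teacher device sets the answer
        quiz_answer["q"] = request.get_json().get("answer")
        return jsonify(ok=True)
    return jsonify(quiz_answer)           # student devices poll and compare

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```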
## Inspiration
We wanted to explore what GCP has to offer in a practical sense, while trying to save money as poor students.
## What it does
The app tracks your location and, using the Google Maps API, calculates a geofence that surfaces the restaurants you are near and lets you load coupons that are valid.
## How we built it
React Native, with Google Maps for pulling the location, Python for the web scraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, and MongoDB to store authentication, location and coupon data.
## Challenges we ran into
React Native was fairly new to us, as were linking a Python script to a Node backend and connecting Node.js to React Native.
## What we learned
We gained exposure to new APIs and experience linking tools together.
## What's next for Scrappy.io
Improvements to the web scraper, and potentially expanding beyond restaurants.
# Inspiration
The turn of the new year was accompanied by news of fire ravaging most of Australia. In total, over 1 billion animals are estimated to have died, along with over 25 million acres of land. The effects of the fire left a lasting impression on our views of how technology can be used for good and helped us discover a newfound sense of purpose. We sought to use our technological prowess to ensure that an issue like this doesn't become a common occurrence. Fireguard is an Internet of Things solution designed to detect and prevent forest fires the instant they arise. By placing this tool in different places in the forest, we are able to monitor important vitals about the environment and detect sudden changes in temperature, CO2 levels, atmospheric pressure and volume of total organic compounds. We then retrieve this information and present it in a unique web user interface that responds to real-time changes in the data, relaying information to all individuals monitoring the situation. The scope of our hackathon project was:
1. A hardware device that monitors environment conditions
2. A dashboard that allows one to monitor the status of all the beacons and alert EMS if they detect sudden changes in environment conditions
3. Geofencing integration to alert others in the area to leave before the situation gets out of hand
# Fireguard Tech Stack
**Machine Learning:** *(time series forecasting)* The machine learning aspect was one which we believed could play a key role in our solution, yet needed a strong use case. Initially, we wanted to use an attached camera and the Google Cloud Vision API to confirm fires and reduce the number of false positives created by this data. After some thought, we realized that this would be a misuse of the data we collect and decided to use **time series forecasting.** This model works by detecting sudden changes in data in order to flag anomalies (a minimal anomaly-detection sketch follows at the end of this writeup). We used a seasonality change model to account for large variations slowly brought about over long periods of time, such as season changes. In order to test, we simulated a fire on our device and tracked the data in a CSV file. That data was passed into the model and the anomalies were rendered on our alert dashboard.
**Web View:** The information on Google Firebase is then retrieved and displayed on a web app user interface updated in real time, taking advantage of the Google Maps API. The web app is built using the Vue.js framework to process the real-time data. Other areas of the web app, such as the front-end design, are built using HTML5, CSS3, and JavaScript. Additional features were implemented using Bootstrap and ApexCharts.
**Hardware:** To accomplish the goals of Fireguard, we chose to use the NodeMCU, which is an IoT (Internet of Things) enabled microcontroller built on top of the Arduino framework. We attached several sensors to the board to help us gather the vital environmental data to be sent to Google Cloud's Firebase database. Some of these sensors included the DHT11 temperature and humidity sensor, the CCS811 air quality sensor, the VEML6070 UV index sensor, the SparkFun soil moisture sensor, and the TCS34725 RGB sensor. This combination of sensors and an IoT-enabled microcontroller enables us to gather and upload data in a lightweight and efficient format, reducing latency and overhead in emergency situations.
**Database:** The data received by the NodeMCU microcontroller is sent to a real-time Google Firebase database, where it is categorized by sensor and type of data.
# Impacts of Forest Fires
As mentioned above, over a billion animals have died and 25 million acres of flora and fauna have been lost in the Australian fires. However, besides these impacts there are quite a few additional environmental impacts that come with forest fires: the use of chemicals in firefighting and heavy smog can lead to further poisoning of local water systems, and increased carbon release only exacerbates the presence of greenhouse gases and, subsequently, climate change. There are also a number of health impacts that come along with forest fires, some of which are quite obvious, such as lowered air quality, as fine particulates bring about more respiratory problems. In fact, healthy firefighters can feel impacts for over 4 months after fighting fires. What most people fail to realize, unless they have been in a similar situation, is that forest fires are quite traumatic and lead to significant levels of PTSD or other related mental illnesses. Evacuation and the amount of stress that comes with it are correlated with higher levels of mental health issues. The loss of homes, families and people is very traumatic. Moreover, during and after fighting fires there are concerns for food safety and water quality, as high levels of toxins and chemicals exist in the air that can find their way into water systems.
# What's Next for Fireguard
In the future, the data received from the multiple nodes around the world will give us enough training data to improve our machine learning algorithm. We hope to have enough data to better account for particular local climates in order to remove the possibility of false positives. We also hope that this data can be used to migrate to another model which can even be used for predicting forest fires before they arise.
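As a minimal illustration of the sudden-change detection described above (not the team's actual model, which also handled seasonality; this rolling z-score version only flags abrupt spikes, and the column names are hypothetical):

```python
# Rolling z-score sketch: flag readings that jump far from the recent
# baseline. The team's real model also handled seasonality; column names
# here are hypothetical.
import pandas as pd

def detect_anomalies(csv_path, column="temperature_c", window=60, z_thresh=4.0):
    df = pd.read_csv(csv_path)
    series = df[column]
    baseline = series.rolling(window, min_periods=window)
    z = (series - baseline.mean()) / baseline.std()
    return df[z.abs() > z_thresh]  # rows to surface on the alert dashboard

# Usage: anomalies = detect_anomalies("simulated_fire.csv")
```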
winning
## We are DonSafe, a blockchain and AI-based organ donation interface, aimed at patient-centric care
### DonSafe provides a platform for organisations, donors and recipients to ethically source and donate organs.
## Inspiration
The transplantation of healthy organs into persons whose own organs have failed improves and saves thousands of lives every year. But demand for organs has outstripped supply, creating an underground market for illicitly obtained organs. Desperate situations of both recipients and donors create an avenue ready for exploitation by international organ trafficking syndicates. Traffickers exploit the desperation of donors to improve the economic situation of themselves and their families, and they exploit the desperation of recipients who may have few other options to improve or prolong their lives. Like other victims of trafficking in persons, those who fall prey to traffickers for the purpose of organ removal may be vulnerable by virtue of poverty, for instance. Organ trafficking is more than a $5B market annually. DonSafe sets out to solve the primary problems with organ donation and transplantation as described.
## What it does
DonSafe has a three-way clientele system, and utilises machine learning to first match donors and recipients at scale, anywhere in the world, followed by the Stacks blockchain, which is used to securely authenticate organ transfers, including securing the identities and use-case of the donor, recipient and transplant organisation:
1) **Donor**: Simply set up a user account and list an organ to donate, with personal details about the donor and organ, all authenticated on the Stacks blockchain. The app uses machine learning to determine the authenticity of the listing, and make sure it is *ethically sourced*.
2) **Middle/Transplant organisation**: Takes the listing and is in charge of the transplant; usually a hospital.
3) **Recipient**: Simply set up a user account and list an organ to receive, along with a timeline and personal details about the recipient and organ, all authenticated on the Stacks blockchain. Further verification is done on the part of the health institution. The app uses machine learning to determine the authenticity of the listing, and make sure it is *ethically received*, i.e. the organ will actually be transplanted, rather than trafficked, for instance.
## How we built it
1) Clarity for the Stacks blockchain, for organ transplant authentication
2) Java and Kotlin for the Android app
3) Firebase for the database backend and user authentication into the app
4) scikit-learn, Firebase and our own Bayesian models for machine learning input
## Ethics
We believe that access to healthcare is a basic human right, and that it is ethically wrong for us as a society to not act against the problem of the lack of medical care.
## Accomplishments that we're proud of
The all-round social impact of the scale of the app, its features, as well as who we can help in real time. Also, that it was made in less than 36 hours.
## What's next for DonSafe
The idea is to improve functionality, removing bugs, while improving the use of our machine learning algorithm. DonSafe would ideally also be able to incorporate payments/financial transactions on DeFi, so that promises are fulfilled, without moral hazards.
## Inspiration
With the increase in COVID-19 cases, the healthcare sector has experienced a shortage of PPE supplies. Many hospitals have turned to the public for donations. However, people who are willing to donate may not know what items are needed, which hospitals need them urgently, or even how to donate.
## What it does
Corona Helping Hands is a real-time website that sources data directly from hospitals and ranks their needs based on bed capacity and the urgency of necessary items (a toy version of this ranking appears at the end of this writeup). An interested donor can visit the website and see the hospitals in their area that are accepting donations, which specific items they need, and how to donate.
## How we built it
We built the donation web application using:
1) HTML/CSS/Bootstrap (front-end web development)
2) Flask (back-end web development)
3) Python (back-end language)
## Challenges we ran into
We ran into issues integrating our map with the HTML page. Taking data and displaying it on the web application was not easy at first, but we were able to pull it off in the end.
## Accomplishments that we're proud of
None of us had a lot of experience in front-end web development, so that was challenging for all of us. However, we were able to complete a web application by the end of this hackathon, which we are all proud of. We are also proud of creating a platform that can help users help hospitals in need and give them an easy way to figure out how to donate.
## What we learned
This was the first time working with web development for most of us, so we learned a lot on that aspect of the project. We also learned how to integrate an API with our project to show real-time data.
## What's next for Corona Helping Hands
We hope to further improve our web application by integrating data from across the nation. We would also like to further improve the UI/UX of the app to enhance the user experience.
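A toy version of the need-ranking idea described above (the scoring formula and field names are assumptions, not the team's actual data model):

```python
# Toy ranking sketch: hospitals that are fuller and need more-urgent items
# float to the top. Field names and the scoring formula are assumptions.
def need_score(hospital):
    occupancy = hospital["occupied_beds"] / hospital["total_beds"]
    urgency = sum(item["urgency"] for item in hospital["items"])  # urgency 1-5
    return occupancy * urgency

hospitals = [
    {"name": "General", "occupied_beds": 480, "total_beds": 500,
     "items": [{"name": "N95 masks", "urgency": 5}, {"name": "gowns", "urgency": 3}]},
    {"name": "Community", "occupied_beds": 120, "total_beds": 300,
     "items": [{"name": "gloves", "urgency": 2}]},
]

for h in sorted(hospitals, key=need_score, reverse=True):
    print(h["name"], round(need_score(h), 2))
```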
## Inspiration
While talking to Mitt from the CVS booth, I was introduced to a problem I was previously unaware of: counterfeits in the pharmaceutical industry. After a good amount of research, I learned that it was possible to build a solution during the hackathon. A friendly interface with a blockchain backend could track drugs immutably, and being able to track an item from factory to consumer means safer prescription drugs for everyone.
## What it does
Using our app, users can scan an item and use the provided passcode to make sure that the item they have is legitimate. With just the QR scanner in our app, it is very easy to verify the goods you bought, as well as the location where the drugs were manufactured.
## How we built it
We started off wanting to ensure immutability for our users; after all, our whole platform is made for users to trust the items they scan. What came to our minds was using blockchain technology, which would allow us to ensure each and every item would remain immutable and publicly verifiable by any party. This way, users would know that the data we present is always true and legitimate. After building the blockchain technology with Node.js (a toy hash-chain sketch follows at the end of this writeup), we started working on the actual mobile platform. To create both iOS and Android versions simultaneously, we used AngularJS to create a shared codebase so we could easily adapt the app for both platforms. Although we didn't have any UI/UX experience, we tried to make the app as simple and user-friendly as possible. We incorporated the Google Maps API to track and plot the locations where items are scanned and add that to our metadata, and added native packages like QR code scanning and generation to make things easier for users. Although we weren't able to publish to the app stores, we tested our app using emulators to ensure all functionality worked as intended.
## Challenges we ran into
Our first challenge was learning how to build a blockchain ecosystem within a mobile app. Since the technology was somewhat foreign to us, we had to learn the ins and outs of what "makes" a blockchain and how to ensure its immutability. After all, trust and security are our number one priorities, and without them, our app would be meaningless. In the end, we found a way to create this ecosystem and performed numerous unit tests to ensure it was up to industry standards. Another challenge we faced was getting the app to work in both iOS and Android environments. Since each platform has its own set of rules and standards, we had to make sure that our functions worked in both and that no errors were engendered by platform deviations.
## What's next for NativeChain
We hope to expand our target audience to secondhand commodities and the food industry. In today's society, markets such as eBay and Alibaba are flooded with counterfeit luxury goods such as clothing and apparel. When customers buy these goods from secondhand retailers on eBay, there's currently no way they can know for certain whether an item is as legitimate as claimed; they rely solely on the seller's word. However, we hope to disrupt this and allow customers to immediately view where an item was manufactured and whether it truly is from Gucci, rather than a counterfeit market in China. Another industry we hope to expand to is food. People care about where the food they eat comes from, and whether it's kosher, organic, and non-GMO. Although the FDA regulates this to a certain extent, this data isn't easily accessible to customers. We want to provide a transparent and easy way for users to view the food they are eating by showing them data like where their honey was produced, where the cows were raised, and when their fruits were picked. Outbreaks such as the Chipotle E. coli incident can be pinpointed, as customers can view where the incident started and be warned not to eat food coming from that area.
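To make the immutability idea concrete, here is a toy hash-chain sketch in Python (the team built theirs in Node.js; this shows only the underlying idea, not their code, and the payloads are hypothetical):

```python
# Toy hash chain: each block commits to the previous block's hash, so any
# tampering breaks verification. Illustrative only; the real backend was Node.js.
import hashlib, json, time

def block_hash(block):
    body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"] or cur["hash"] != block_hash(cur):
            return False
    return bool(chain) and chain[0]["hash"] == block_hash(chain[0])

chain = []
add_block(chain, {"drug": "amoxicillin", "factory": "Plant A"})  # hypothetical
add_block(chain, {"scan": "pharmacy intake"})
print(verify(chain))  # True; mutate any field and this becomes False
```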
partial
## Inspiration
The idea for SlideForge came from the struggles researchers face when trying to convert complex academic papers into presentations. Many academics spend countless hours preparing slides for conferences, lectures, or public outreach, often sacrificing valuable time they could be using for research. We wanted to create a tool that could automate this process while ensuring that presentations remain professional, audience-friendly, and adaptable to different contexts.
## What it does
SlideForge takes LaTeX-formatted academic papers and automatically converts them into well-structured presentation slides. It extracts key content such as equations, figures, and citations, then organizes them into a customizable slide format. Users can easily adjust the presentation based on the intended audience, whether it's for peers, students, or the general public. The platform provides customizable templates, integrates citations, and minimizes the time spent on manual slide creation.
## How we built it
We built SlideForge using a combination of Python for the backend and JavaScript with React for the frontend. The backend handles the LaTeX parsing, converting key elements into slides, using Flask to manage the process (a simplified parsing sketch follows at the end of this writeup). We also integrated JSON files to store and organize the structure of presentations, formulas, and images. On the frontend, React is used to create an interactive user interface where users can upload their LaTeX files, adjust presentation settings, and preview the output.
## Challenges we ran into
One of the biggest challenges we faced was ensuring that the LaTeX parser could accurately extract and format complex equations and figures into slide-friendly content. Maintaining academic rigor while making the content accessible to different audiences also required a lot of trial and error with the customizable templates. Finally, integrating the backend and frontend in a way that made the process seamless and efficient posed technical hurdles that required collaboration and creative problem-solving.
## Accomplishments that we're proud of
We're proud of the fact that SlideForge significantly reduces the time required for researchers to create professional presentations. What used to take hours can now be done in minutes. We're also proud of the adaptability of our templates, which allow users to target different audiences without needing to redesign their slides from scratch. Additionally, the successful integration of LaTeX parsing and slide generation is a technical achievement we're particularly proud of.
## What we learned
Throughout this project, we learned a lot about LaTeX and how to parse and handle its complex structures programmatically. We also gained a deeper understanding of user experience design, ensuring that our platform was both intuitive and powerful. From a technical standpoint, integrating the backend and frontend and ensuring smooth communication between the two taught us valuable lessons in full-stack development.
## What's next for SlideForge
Next, we plan to expand SlideForge's functionality by adding more customization options for users, such as advanced styling and animation features. We're also looking into integrating cloud storage solutions so users can save and edit their presentations across devices. Additionally, we hope to support more document formats beyond LaTeX, making SlideForge a universal tool for academics and professionals alike.
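A simplified sketch of the LaTeX-to-slides parsing step described above (an assumption of how such a parser might work, not SlideForge's actual implementation):

```python
# Simplified LaTeX-to-slides sketch: each \section becomes a slide and its
# paragraphs become bullets. An assumed approach, not SlideForge's parser.
import json
import re

def latex_to_slides(tex: str, max_bullets: int = 4):
    # re.split with a capture group yields [preamble, title1, body1, title2, body2, ...]
    parts = re.split(r"\\section\{([^}]*)\}", tex)
    slides = []
    for title, body in zip(parts[1::2], parts[2::2]):
        bullets = [p.strip() for p in body.split("\n\n") if p.strip()]
        slides.append({"title": title, "bullets": bullets[:max_bullets]})
    return slides

sample = r"""
\section{Introduction}
Motivation paragraph.

Problem statement paragraph.
\section{Method}
We minimize $L(\theta)$ with SGD.
"""
print(json.dumps(latex_to_slides(sample), indent=2))
```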
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that anyone can use to produce any tile-based NumPy game. Our first offering is *StickyJump*, a 2D platformer whose layout can be changed on the fly by the placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's contour recognition to detect the borders of the projector image and the positions of human-placed sticky notes. We then use a matrix transformation scheme to ensure that the positions of the sticky notes align with the projector image, so that our character can appear as if he is standing on top of the sticky notes (a sketch of this mapping follows at the end of this writeup). We then have code for a simple platformer that uses the sticky notes as the platforms our character runs on, jumps on, and interacts with!
## How we built it
We split our team of four into two halves: one working on the OpenCV/data-transfer part of the project, and the other working on the game side. It was truly a team effort.
## Challenges we ran into
The biggest challenge we ran into was that a lot of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project. Luckily we had some very gracious mentors come out and help us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about Python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
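A minimal sketch of the projector-alignment step described above, using OpenCV's perspective transform (the corner coordinates here are made up; in StickyAR they would come from contour detection):

```python
# Perspective-mapping sketch: warp camera coordinates of detected sticky
# notes into the projector's game coordinates. Corner values are made up;
# in practice they come from contour detection of the projected frame.
import cv2
import numpy as np

# four corners of the projector image as seen by the camera (made-up values)
src = np.float32([[102, 84], [518, 90], [530, 388], [95, 380]])
# corresponding corners in game space (e.g. a 640x480 playfield)
dst = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])

M = cv2.getPerspectiveTransform(src, dst)

# centre of a detected sticky note in camera pixels, shape (N, 1, 2)
note_camera = np.float32([[[250, 210]]])
note_game = cv2.perspectiveTransform(note_camera, M)
print(note_game)  # where to spawn the platform tile in game coordinates
```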
# Inspiration ✨
It's a universally acknowledged truth that memories are the tapestries of our lives, yet, tragically, many of these vibrant threads fade over time, leaving us with a canvas that feels incomplete. This realization hit us hard, echoing the sentiment that the essence of our experiences, the laughter, the tears, the triumphs, and the losses, should not be relegated to the shadows of our minds. It was from this poignant understanding that EyeRemember was born—a beacon of hope in the quest to preserve the sanctity of our memories.
# What It Does 🌐
EyeRemember is not just an app; it's a revolution—a virtual reality gallery that transforms your memories and collectibles from mere items into immersive experiences. With EyeRemember, you don't just view your memories; you step into them, reliving each moment in a vivid 3D VR Museum World. This innovative platform allows users to navigate through their cherished memories and collectibles with the simplicity of their gaze, eliminating the barriers between the user and their past, making every interaction a journey back in time.
# How We Built It 🔧
The construction of EyeRemember was an odyssey of technological exploration and creativity. We embarked on this journey with the VisionOS SDK as our compass, guiding us through the complexities of virtual reality development. Our voyage took us to the shores of the Meta Quest 2, where we meticulously sideloaded and demoed our application, each step a testament to our dedication to innovation and our unwavering belief in the power of VR to transform how we connect with our past.
Main Technologies: Swift, VisionOS SDK, React, Meta Quest 2
# Challenges We Ran Into 🚀
Our journey was not without its trials and tribulations. The task of loading entire VR worlds presented a Herculean challenge, pushing us to the limits of our coding capabilities. The intricacies of Swift added layers of complexity to our endeavor, requiring us to adapt, learn, and grow. The process of streaming the VisionOS program and sideloading it onto the Meta Quest 2 was akin to navigating a labyrinth, where each turn revealed new challenges and opportunities for growth.
# Accomplishments That We're Proud Of 🏅
Standing at the frontier of VR coding and Swift programming as novices, we ventured forth with courage and determination. Our first foray into this unexplored territory was not just an accomplishment but a declaration of our passion for innovation and our commitment to pushing the boundaries of what is possible. We emerged from this experience not just as developers, but as pioneers of a new frontier in technology.
# What We Learned 📘
This expedition into the realms of VR and Swift was illuminating, to say the least. We learned that the essence of innovation lies not in the mastery of skills, but in the courage to face the unknown, the resilience to overcome challenges, and the vision to see beyond the horizon. These lessons, learned in the crucible of development, will guide us as we continue our journey with EyeRemember.
# What's Next For EyeRemember 🌟
The saga of EyeRemember is just beginning. Our vision for the future is bold and boundless. We see EyeRemember evolving into a global platform that not only preserves memories but enriches them, making it possible for users to not just revisit the past but to experience it with a depth and clarity that was previously unimaginable.
Our mission is clear: to innovate, to inspire, and to illuminate the path to a future where every memory is preserved, every moment cherished, and every experience shared. Join us on this exhilarating journey as we continue to explore the infinite possibilities of virtual reality, memory preservation, and interactive storytelling. With EyeRemember, the future of how we remember and relive our past is bright, boundless, and breathtaking. 🚀💖
winning
## Inspiration
The inspiration for our project came from three of our members being involved with Smash in their communities. With one of us being an avid competitor, one an avid watcher, and one working in an office where Smash is played quite frequently, we agreed that the way Smash Bros. games were matched and organized needed to be leveled up. We hope that this becomes a frequently used bot for big and small organizations alike.
## How it Works
We broke the project up into three components: the front end made using React, the back end made using Golang, and a middle part connecting the back end to Slack using StdLib.
## Challenges We Ran Into
A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another adversity was getting DMs to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end.
## The Future for 'For Glory'
Due to time constraints and difficulty, we did not get to implement a tournament function. This is a future goal, because it would allow workspaces and other organizations that use a Slack channel to run casual tournaments that keep the environment light-hearted, competitive and fun. Our tournament function could also extend to help hold local competitive tournaments within universities. We also want to extend the range of rankings to include different types of rankings in the future. One thing we want to integrate in the future for the front end is a more interactive display for matches and tournaments, with live updates and useful statistics.
## Inspiration
There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of these things scare people more than the act of public speaking. Over 75% of people suffer from a fear of public speaking, but what if there was a way to tackle this problem? That's why we created Strive.
## What it does
Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback analyzing the voice delivery of a person's presentation. Once you have recorded your speech, Strive will calculate various performance variables such as voice clarity, filler-word usage, voice speed, and voice volume (a toy version of two of these metrics follows at the end of this writeup). Once the performance variables have been calculated, Strive renders them in an easy-to-read statistics dashboard, while also providing the user with a customized feedback page containing tips to improve their presentation skills. In the settings page, users have the option to add custom filler words that they would like to avoid saying during their presentation. Users can also personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive will also send the feedback results via text message to the user, allowing them to share/forward an analysis easily.
## How we built it
Utilizing the collaboration tool Figma, we designed wireframes of our mobile app. We used tools such as Photoshop and GIMP to help customize every page for an intuitive user experience. To create the front end of our app we used the game engine Unity. Within Unity we sculpted each app page and connected components to back-end C# functions and services. We leveraged IBM Watson's speech toolkit to perform the calculations of the performance variables, and used StdLib's cloud function features for text messaging.
## Challenges we ran into
Given our technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.
## Accomplishments that we're proud of
Creating a fully functional mobile app while leveraging an unfamiliar technology stack, resulting in a simple application that people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking skills and conquer their fear of public speaking.
## What we learned
Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.
## What's next for Strive - Your Personal AI Speech Trainer
* Model the voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API).
* Ability to calculate more performance variables for an even better analysis and more detailed feedback
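As a toy illustration of the filler-word and speed metrics described above (the filler list and thresholds are assumptions; Strive's real analysis runs on IBM Watson transcriptions):

```python
# Toy speech-metrics sketch: words per minute and filler-word usage from a
# transcript. Filler list and thresholds are assumptions, not Strive's values.
FILLERS = {"um", "uh", "like", "so", "basically", "actually"}

def analyze_speech(transcript: str, duration_seconds: float) -> dict:
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    filler_count = sum(w in FILLERS for w in words)
    wpm = len(words) / (duration_seconds / 60)
    return {
        "wpm": round(wpm),
        "filler_count": filler_count,
        "filler_rate": round(filler_count / max(len(words), 1), 3),
        "too_fast": wpm > 160,  # assumed comfortable-pace ceiling
    }

print(analyze_speech("So um this is like my big idea", duration_seconds=4))
```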
## Inspiration
Our team really wanted to create a new way to maximize productivity without the interference of modern technology. We often find ourselves reaching for our phones to scroll through social media for just a "5 minute break" which quickly turns into a 2-hour procrastination session. On top of that, we wanted the motivation to be delivered in a sarcastic/funny way. Thus, we developed a task manager app that bullies you into working.
## What it does
The app allows you to create a to-do list of tasks that you can complete at any time. Once you decide to start a task, distracting yourself with other applications is met with reinforcement to get you back to work. The reinforcement is done through text- and sound-based notifications. Not everyone is motivated in the same way, thus the intensity of the reinforcement can be calibrated to the user's personal needs. The levels include: Encouragement, Passive-Aggression and Bullying.
## How we built it
We built our project as a mobile app using Swift and Apple's SwiftUI and UserNotifications frameworks. Development was done via Xcode. The app is optimized for iOS 16.
## Challenges we ran into
Learning how to code in Swift. Our team did not have a lot of experience in mobile iOS development. Since we were only familiar with the basics, we wanted to include more advanced features that would force us to integrate new modules and frameworks.
## Accomplishments that we're proud of
Having a product we are proud enough to demo. This is the first time anyone on our team is demoing. We spent extra time polishing the design and including animations. We wanted to deliver an app that felt like a complete product, and not just a hack, even if the scope was not very large.
## What we learned
We learned front-end development in Swift (SwiftUI), including how to make animations; a lot about data transfer and persistence in iOS applications; and the entire development cycle of building a complete and kick-ass application.
## What's next for TaskBully?
* Incorporate a scheduling/deadline feature to plan when to complete tasks.
* Include an achievement system based around successfully completing tasks.
* Implement even more custom sounds for different intensity levels.
* Add a social feature to share success with friends.
**A message from TaskBully:** Here at TaskBully, our vast team of 2 employees is deeply committed to the goal of replacing bullying with motivation. We are actively looking for sponsorships, investments, and growth opportunities until we can eventually eradicate procrastination.
winning
## Inspiration
We thought it would be cool to use machine learning tools on current events to get a sense of what people are thinking about a particular topic. Specifically, our interest in finance inspired us to create this tool to analyze stock tickers in real time.
## What it does
This app takes in any topic or keyword, sanitizes it, and uses an API to fetch an RSS feed of news related to that topic. We then use an API to convert this data to JSON and grab the content from the news articles. A naive Bayes machine learning model determines the positive or negative sentiment of each news article (a toy version follows at the end of this writeup), and we return the average score as a percentage from -100% to 100%, completely negative to completely positive respectively.
## How we built it
HTML5/CSS3 front end. The back end uses JavaScript and jQuery on a serverless platform, utilizing multiple REST APIs to perform functions.
## Challenges we ran into
Fetching news articles was hard because there was no direct API. We had to grab the RSS feed and convert it to JSON ourselves. There was also no easily available REST API for machine learning, so we implemented our own Bayes algorithm based on existing positive and negative training data available online.
## Accomplishments that we're proud of
Creating an app that has a complex algorithm running in the back end while simultaneously creating a very clean, user-friendly front end.
## What we learned
Machine learning skills.
## What's next for PRogativ
Improving the training model to make it more accurate. Using social media feeds as part of the public sentiment algorithm.
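A toy version of the naive Bayes sentiment scoring described above (the team hand-rolled theirs in JavaScript; here scikit-learn stands in, and the training data is a tiny placeholder):

```python
# Toy naive Bayes sentiment scorer; scikit-learn stands in for the team's
# hand-rolled JavaScript version. Training data here is a tiny placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["great earnings beat expectations", "record profits and growth",
               "shares plunge on weak guidance", "lawsuit and layoffs announced"]
train_labels = [1, 1, -1, -1]  # 1 = positive, -1 = negative

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

articles = ["strong quarter lifts shares", "guidance cut triggers selloff"]
scores = model.predict(articles)
print(f"average sentiment: {100 * scores.mean():.0f}%")  # -100% .. 100%
```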
## Inspiration
The media we consume daily has an impact on our thinking, behavior, and emotions. If you’ve fallen into a pattern of regularly watching or listening to the news, the majority of what you’re consuming is likely about the coronavirus (COVID-19) crisis. And while staying up to date on local and national news, especially as it relates to mandates and health updates, is critical during this time, experts say over-consumption of the news can take a toll on your physical, emotional, and mental health.
## What it does
The app first greets users with a screen prompting them to either sign up for an account or sign in to a pre-existing account. With the usual authentication formalities out of the way, the app gets straight to business as our server scrapes oodles of articles from the internet and filters the good from the bad, before presenting the user with a smorgasbord of good news.
## How we built it
We used Flutter to create our Android-based application, Firebase as the database, and Express.js as the backend web framework. With the help of RapidAPI, we fetch lists of top headlines.
## Challenges we ran into
Initially, we tried to include Google Cloud-based sentiment analysis of each news item, since we wanted to try some new technology. However, the majority of our team members were new to machine learning, so we faced too many challenges to even get started, including a lack of available examples, and we limited our app to showing customized positive news. We wanted to add more features during the hacking period, but due to time constraints we had to limit our scope.
## Accomplishments that we're proud of
A completely working Android-based application, integrated with the backend, with contributions from each and every member of the team.
## What we learned
We learned to fetch and upload data to Firebase's Realtime Database through the Flutter application. We learned the value of contribution and teamwork, which is the ultimate key to the success of a project. We also learned about using text-based sentiment analysis to analyze and rank news by positivity through Cloud Natural Language processing.
## What's next for Hopeful
1. More customized feed
2. Updated profile section
3. Like and reply to comments
## Inspiration
Want to see how a product, service, person or idea is doing in the court of public opinion? Market analysts are experts at collecting data from a large array of sources, but monitoring public happiness or approval ratings is notoriously difficult. Usually, focus groups and extensive data collection are required before any estimates can be made, wasting both time and money. Why bother with all of this when the data you need can be easily mined from social media websites such as Twitter? By aggregating tweets, performing sentiment analysis and visualizing the data, it is possible to observe trends in how happy the public is about any topic, providing a valuable tool for anybody who needs to monitor customer satisfaction or public perception.
## What it does
Queries the Twitter Search API to return relevant tweets, which are sorted into buckets of time (a minimal sketch of this bucketing follows at the end of this writeup). Sentiment analysis is then used to categorize whether each tweet is positive or negative in regard to the search term. The collected data is visualized with graphs such as average sentiment over time, percentage of positive versus negative tweets, and other in-depth trend analyses. An NLP algorithm that involves clustering similar tweets was developed to return a representative summary of good and bad tweets. This can show what most people are happy or angry about and can provide insight on how to improve public reception.
## How we built it
The application is split into a **Flask** back-end and a **ReactJS** front-end. The back-end queries the Twitter API, parses and stores relevant information from the received tweets, and calculates any extra statistics that the front-end requires. The back-end then provides this information in a JSON object that the front-end can access through a `get` request. The React front-end presents all UI elements in components styled by [Material-UI](https://material-ui.com/). [React-Vis](https://uber.github.io/react-vis/) was utilized to compose charts and graphs that present our queried data in an efficient and visually appealing way.
## Challenges we ran into
The Twitter API throttles querying to 1000 tweets per minute, a number much smaller than what this project needs in order to provide meaningful data analysis. This means that, by itself, after returning 1000 tweets we would have to wait another minute before continuing to request tweets. With some keywords returning hundreds of thousands of tweets, this was a huge problem. In addition, extracting a representative summary of good and bad tweet topics was challenging, as features that represent contextual similarity between words are not very well defined. Finally, we found it difficult to design a user interface that displays the vast amount of data we collect in a clear, organized, and aesthetically pleasing manner.
## Accomplishments that we're proud of
We're proud of how well we visualized our data. In the course of a weekend, we managed to collect and visualize a large sum of data in six different ways. We're also proud that we managed to implement the clustering algorithm. In addition, the application is fully functional with nothing manually mocked!
## What we learned
We learnt about several different natural language processing techniques. We also learnt about the Flask REST framework and best practices for building a React web application.
## What's next for Twitalytics
We plan on cleaning some of the code that we rushed this weekend, implementing geolocation filtering and data analysis, and investigating better clustering algorithms and big data techniques.
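A minimal sketch of the time-bucketed sentiment averaging described above (pandas-based; the field names and the one-hour bucket width are assumptions):

```python
# Time-bucketed sentiment sketch: average per-tweet sentiment in hourly
# windows. Field names and bucket width are assumptions.
import pandas as pd

tweets = [
    {"created_at": "2020-01-18 10:05", "sentiment": 0.6},
    {"created_at": "2020-01-18 10:40", "sentiment": -0.2},
    {"created_at": "2020-01-18 11:15", "sentiment": 0.9},
]

df = pd.DataFrame(tweets)
df["created_at"] = pd.to_datetime(df["created_at"])

hourly = df.set_index("created_at")["sentiment"].resample("1h").mean()
print(hourly)  # one average-sentiment point per hour, ready to chart
```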
losing
## Inspiration
Public speaking is a critical skill in our lives. The ability to communicate effectively and efficiently is a crucial, yet difficult, skill to hone. A few of us on the team grew up competing in public speaking competitions, so we understand all too well the challenges that individuals looking to improve their public speaking and presentation skills face. Building off our experience with effective techniques and best practices, and through analyzing the speech patterns of well-known public speakers, we have designed a web app that will target the weaker points in your speech and identify your strengths, to make us all better and more effective communicators.
## What it does
By analyzing speaking data from many successful public speakers from a variety of industries and backgrounds, we have established relatively robust standards for optimal speed, energy levels and pausing frequency during a speech (a toy pause-detection sketch follows at the end of this writeup). Taking into consideration the overall tone of the speech, as selected by the user, we are able to tailor our analyses to the user's needs. This simple and easy-to-use web application offers users insight into their overall accuracy, enunciation, WPM, pause frequency, energy levels throughout the speech, and error frequency per interval, and summarizes some helpful tips to improve their performance the next time around.
## How we built it
For the backend, we built a centralized RESTful Flask API to fetch all backend data from one endpoint. We used Google Cloud Storage to store files longer than 30 seconds, as we found that locally saved audio files could only retain about 20-30 seconds of audio. We also used Google Cloud App Engine to deploy our Flask API, as well as Google Cloud Speech-to-Text to transcribe the audio. Various Python libraries were used for the analysis of voice data, and the resulting response returns within 5-10 seconds. The web application user interface was built using React, HTML and CSS, and focused on displaying analyses in a clear and concise manner. We had two members of the team in charge of designing and developing the front end and two working on the back-end functionality.
## Challenges we ran into
This hackathon, our team wanted to focus on creating a really good user interface to accompany the functionality. In our planning stages, we started looking into way more features than the time frame could accommodate, so a big challenge we faced was, firstly, dealing with the time pressure and, secondly, having to revisit our ideas many times and change or remove functionality.
## Accomplishments that we're proud of
Our team is really proud of how well we worked together this hackathon, both in terms of team-wide discussions and efficient delegation of tasks for individual work. We leveraged many new technologies and learned so much in the process! Finally, we were able to create a good user interface to use as a platform to deliver our intended functionality.
## What we learned
Following the challenge that we faced during this hackathon, we learned the importance of iteration within the design process and how helpful it is to revisit ideas and questions to see if they are still realistic and/or relevant. We also learned a lot about the great functionality that Google Cloud provides and how to leverage that in order to make our application better.
## What's next for Talko
In the future, we plan on continuing to develop the UI as well as adding more functionality, such as support for different languages. We are also considering creating a mobile app to make Talko more accessible to users on their phones.
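As a toy illustration of the pause/energy analysis described above (the frame size and thresholds are assumptions, not Talko's tuned values):

```python
# Toy pause detector: frame the signal, compute RMS energy, and count quiet
# stretches longer than half a second. Thresholds are assumptions.
import numpy as np

def count_pauses(samples, sample_rate, frame_ms=50, energy_thresh=0.02,
                 min_pause_s=0.5):
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))   # per-frame energy
    quiet = rms < energy_thresh

    pauses, run = 0, 0
    for q in quiet:
        run = run + 1 if q else 0
        # count a pause exactly once, the moment the quiet run crosses 0.5 s
        if run == int(min_pause_s * 1000 / frame_ms):
            pauses += 1
    return pauses

# Usage: one second of silence in the middle of noise registers as one pause.
sr = 16000
audio = np.concatenate([np.random.uniform(-1, 1, sr), np.zeros(sr),
                        np.random.uniform(-1, 1, sr)])
print(count_pauses(audio, sr))  # 1
```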
## Inspiration
If you're lucky enough to enjoy public speaking, we're jealous of you. None of us like public speaking, and we realized that there are not a lot of ways to get real-time feedback on how to improve without boring your friends or family into listening to you. We wanted to build a tool that would help us practice public speaking, whether that be giving a speech or doing an interview.
## What it does
Stage Fight analyzes your voice, body movement, and word choices using different machine learning models in order to provide real-time constructive feedback about your speaking. The tool can give suggestions on whether you were too stiff, used too many crutch words (umm... like...), or spoke too fast.
## How we built it
Our platform is built on machine learning models from Google's Speech-to-Text API, along with OpenCV and trained models to track hand movement. Our simple backend server is built on Flask, while the frontend is built with no more than a little jQuery and JavaScript.
## Challenges we ran into
Streaming live audio while recording from the webcam and using a pool of workers to detect hand movements, all while running the Flask server in the main thread, gets a little wild - and macOS doesn't allow recording from most of this hardware outside of the main thread. There were lots of problems where websockets and threads would go missing, and things would work one run and not the next. Lots of development had to be done pair-programming style on our one Ubuntu machine. Good times!
## Accomplishments that we're proud of
Despite all the challenges, we overcame them. Some notable wins include stringing all the components together, using efficient reads/writes to files instead of trying to fix WebSockets, and cool graphs.
## What we learned
A lot about technology, a lot about collaboration, and the Villager-Puff matchup (we took lots of Smash breaks).
## Inspiration
Nowadays, as the number of repair scams grows, it is extremely hard to find honest, quality technicians who are reasonably priced. Whether people are looking for a car mechanic or a home repair handyman, many have experienced overpriced, low-quality services. We have seen how decentralized platforms have made people's lives more convenient, so we believed this was a promising problem to approach. We want people to find technicians without stressing about getting scammed, so we decided to build a platform for exactly that.
## What it does
Our website facilitates searching for all kinds of technicians in a reliable way. It is basically Uber for technicians: users don't have to wait days to make an appointment and weeks to get things done. Just a few clicks on the app and a few minutes later, their roof renovation is underway. We allow searchers and technicians to review each other after every job, so both sides can see what to expect and decide whether to proceed. We also verify technicians against official licenses so that people can use the services safely. The website calculates the average rating for each service provider, as well as the overall average across everyone, to provide a meaningful rating scale.
## How we built it
We used a MongoDB Atlas database to store all of the non-relational data, such as user records, comments, reviews, service providers, and service postings. Using the Express.js framework, we created a microservice architecture that exposes REST API endpoints, and we deployed our microservices onto Google App Engine. Furthermore, we developed our frontend using React and Bootstrap to make the application user friendly.
## Challenges we ran into
React was particularly challenging for us, as our knowledge of React was very limited; many problems arose and took us a long time to debug. Initially we were thinking of using Firebase Functions, but after several hours of trying we could not get it to work and decided to shift to an alternative stack instead. We also ran into CORS issues that blocked our requests from the browser, so we could not make any POST requests; we had to change them to GET requests and pass the data in the query string.
## Accomplishments that we're proud of
We are happy that we were able to get everything done during the hackathon. We are proud of ourselves for learning these technologies in a very short period of time and applying them to the project. Some of us had never used them before and picked them up through online tutorials during the hackathon. We are proud that we built a fully functional web app backed by real database infrastructure.
## What we learned
We learned so much from building our application. For instance, most of us were unfamiliar with backend technologies such as Express.js and the Google Maps API, as well as frontend technologies such as React. We also learned how everything can be pieced together - tying the frontend and backend into one system. We learned to work together technically and effectively through collaboration on this project. Also, we learned that it is very important to write clean code that is easy for teammates and others to read.
## What's next for WorkBounce
We hope to improve the UI for the application, since we did not have enough resources for that. Furthermore, we would add more features to the web app so that it is easier to use and lets people get more out of WorkBounce.
Specifically, we want to allow users and technicians to chat safely within the app before the job. Connecting them through text messages and phone calls might not be safe, so adding an in-app chat box that guides them toward a solution might be a good idea. Due to the time limitation, there was only so much we could do, but we are proud to have finished the product and connected our frontend to the backend.
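The rating math described above is a pair of simple aggregations. As a rough sketch (written in Python with pymongo for brevity, although the backend described above is Express.js; database, collection, and field names are hypothetical):

```python
# Minimal sketch: each provider's average rating plus the overall average,
# so a single 4.2 can be read against the platform-wide baseline.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
reviews = client["workbounce"]["reviews"]   # hypothetical collection

per_provider = list(reviews.aggregate([
    {"$group": {"_id": "$provider_id",
                "avg_rating": {"$avg": "$rating"},
                "count": {"$sum": 1}}}
]))
overall = reviews.aggregate([
    {"$group": {"_id": None, "avg": {"$avg": "$rating"}}}
]).next()["avg"]

for p in per_provider:
    print(f"provider {p['_id']}: {p['avg_rating']:.2f} "
          f"({p['count']} reviews, site average {overall:.2f})")
```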
partial
## Inspiration
We wanted to create a web app that helps people learn American Sign Language.
## What it does
SignLingo starts by giving the user a phrase to sign. Using the user's webcam, it captures the input and decides whether the user signed the phrase correctly. If the user signed it correctly, it moves on to the next phrase. If the user signed the phrase incorrectly, it displays a video of the correct signing of the word.
## How we built it
We started by downloading and preprocessing a word-to-ASL video dataset. We used OpenCV to process video frames and compare the frames of the user's input video to the actual signing of the word. We used MediaPipe to detect hand movements and Tkinter to build the front end.
## Challenges we ran into
We definitely had a lot of challenges, from downloading compatible packages and incorporating models to creating a working front end to display our model's output.
## Accomplishments that we're proud of
We are so proud that we actually managed to build and submit something. We couldn't build everything we had in mind when we started, but we have a working demo which can serve as the first step toward the goal of this project. There were times when we thought we weren't going to be able to submit anything at all, but we pushed through and are now proud that we didn't give up and have a working template.
## What we learned
While working on our project, we learned a lot, ranging from ASL grammar to how to incorporate different models to fit our needs.
## What's next for SignLingo
Right now, SignLingo is far from what we imagined, so the next step would definitely be to take it to the level we first envisioned. This will include making our model detect more phrases with greater accuracy, and improving the design.
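A rough sketch of the hand-tracking step, assuming an OpenCV-readable video of the user's attempt; this illustrates the MediaPipe Hands pipeline rather than our exact comparison code:

```python
# Minimal sketch: extract 21 hand landmarks per frame from a video.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=2)

def landmarks_per_frame(video_path):
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV reads BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            frames.append([(p.x, p.y, p.z) for p in lm])
    cap.release()
    return frames  # one 21-point list per frame with a detected hand
```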
## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to exercise their intellect through common communication. Many people around the world experience hearing loss or mutism and need sign language to communicate. Even those who do not experience these conditions may still require sign language in certain circumstances. We plan to grow our company worldwide to fill the gap left by the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and usability. Our teaching system works by engaging the user in lessons and then having them take a progression test. Each lesson covers the material tested in its quiz. Once the user completes a lesson, they are redirected to the quiz, which results in either a pass or a fail. Passing congratulates the user and directs them to the next lesson, while failing means retaking the lesson; the user repeats the lesson until they pass the quiz and can proceed.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
## Challenges we ran into
We ran into multiple roadblocks, mainly stemming from our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in so little time.
## What we learned
Throughout the event, we participated in many workshops and made many connections. We engaged in conversations about the bugs and issues others were having and learned from their experience with JavaScript and React. Additionally, through the workshops we learned about blockchain and the entrepreneurial side of coding, to the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak seeks to continue its mission of teaching people sign language. We plan to implement a suggestion box so our users can tell us about problems with the program and we can fix them quickly. Additionally, we plan to collaborate with and improve upon services that provide audible phone navigation for blind people.
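The comparison against preassigned gestures can be sketched as nearest-template matching over normalized landmarks. SignSpeak's actual recognition runs in TensorFlow/JavaScript; this Python sketch and its placeholder templates are purely illustrative:

```python
# Minimal sketch: match detected hand landmarks against gesture templates
# by mean point distance. Real templates would come from recorded signs.
import numpy as np

TEMPLATES = {
    "A": np.random.rand(21, 2),  # placeholder 21-landmark templates
    "B": np.random.rand(21, 2),
}

def normalize(pts):
    # Translate to the wrist (landmark 0) and scale by hand size so the
    # match is invariant to where and how large the hand is on screen.
    pts = pts - pts[0]
    return pts / (np.linalg.norm(pts, axis=1).max() + 1e-9)

def classify(landmarks, threshold=0.25):
    pts = normalize(landmarks)
    scores = {k: np.linalg.norm(pts - normalize(t), axis=1).mean()
              for k, t in TEMPLATES.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] < threshold else None  # None = no match
```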
## Inspiration
In Canada alone, there are over 350,000 Canadians who are deaf and around another 3 million who are hard of hearing. However, picking up sign language can often be a challenge, especially for deaf children, who are often born to hearing parents who don't know sign language. The inability to communicate with and understand their peers can lead to isolation and loneliness, effectively walling them off from one of the joys of life - communication. We aim to fix this problem by not only enhancing the sign language learning process for those who are hearing impaired, but also encouraging those who are sound of hearing to pick up sign language - breaking down the communication barrier between hearing-impaired people and their peers.
## What it does
Our application leverages the power of AI and neural networks to detect and identify sign language gestures in real time and to provide feedback, so users can learn more effectively from their mistakes. We combine this technology with engaging, interactive lessons, ensuring that learning sign language is not only effective but also enjoyable.
## How we built it
To detect hand gestures, we used Python and the OpenCV library to format the images coming through the user's webcam, and MediaPipe and scikit-learn to detect hand gestures and predict the symbol being signed. For the frontend, we mainly used React.js and Tailwind for the UI and CSS respectively. Finally, for the backend, we used Express.js and Flask to handle requests from the React application and the Python machine learning model respectively.
## Challenges we ran into
Training the model was a big problem, as we spent a lot of time near the start trying to find a pretrained model. However, all of the pretrained models we found had very little documentation, so we weren't able to figure out how to use them. We only resorted to building and training our own model very late into the hackathon, giving us very little time to make sure it meshed well with the rest of our project. We spent a lot of time dealing with React's async functions and also had a lot of trouble deploying our application.
## Accomplishments that we're proud of
We are proud of what we accomplished given the short time frame and our smaller group size.
## What we learned
To not get stuck trying to fix a single stupid bug for hours, and instead move on.
## What's next for Silyntax
We aim to let Silyntax recognize gestures not only from single frames, but also by chaining multiple frames into larger movements, enabling the detection of more complex gestures. We also aim to implement more game modes, such as one where players are given a sequence of letters/words and compete to sign the sequence the fastest (kind of like Typeracer), and a maze game mode where the player signs different words to move around and navigate the maze.
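Training our own landmark classifier can be sketched as follows, assuming a dataset of flattened MediaPipe hand landmarks (21 points x 2 coordinates = 42 features) with one letter label per sample; the file name and model choice are assumptions:

```python
# Minimal sketch: fit a classifier on hand-landmark features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.load("landmarks.npz")           # hypothetical: X (n, 42), y (n,)
X_train, X_test, y_train, y_test = train_test_split(
    data["X"], data["y"], test_size=0.2, stratify=data["y"], random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# At inference time, one frame's normalized landmarks predict one symbol:
# symbol = clf.predict(frame_landmarks.reshape(1, -1))[0]
```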
winning
## Inspiration
Epidemiology is critical in figuring out how to stop the spread of diseases before it's too late.
## What it does
ClassiFly uses image data to classify individuals with known disease symptoms. For demonstration purposes, we selected Yellow Fever, Methicillin-resistant Staphylococcus aureus, and Elephantiasis.
## How I built it
The app was developed in Swift, and the classification model was trained using a split-data classifier method, leveraging Apple's native CreateMLUI framework to build an image classifier with 89% accuracy.
## Challenges we ran into
We initially planned on building an autonomous drone for tracking that could be used to identify key epidemiological characteristics in medically unsafe, infected regions, which would effectively increase accessibility to remote areas susceptible to infection. However, there was no clear way to interface with the drone via an API, so we decided to build a classification app that lets you take a drone image from a contaminated area and derive key epidemiological insights from it.
## Accomplishments that I'm proud of
I am proud that we were able to work together efficiently to build an image classification app.
## What I learned
We learned how to manage development projects as a team, as well as how to leverage really powerful computer vision capabilities with CoreML.
## What's next for ClassiFly
What if we could have an army of medical detectives in the sky, able to reach the most remote populations? Briefly: navigate to remote areas, collect image data of the populace, and use machine learning to classify afflictions based on visible symptoms. This paints a better picture of the disease landscape much faster than any human observation.
## Inspiration
We were inspired by the impact plants have in battling climate change. We wanted something that not only identifies and gives information about our plants, but also provides an indication of what others think about each plant.
## What it does
You can provide an image of a plant, either by uploading a local image or by pointing to an image URL. It matches the plant image with a species, giving you a similarity score, a scientific name, and common names. You can click on a result to open a modal that displays more information, including a sentiment analysis of the plant's Wikipedia page and a more detailed description of the plant.
## How we built it
We used an API from [Pl@ntNet](https://identify.plantnet.org/) that utilizes image recognition to identify plants. To upload an image, we needed to provide a link to the image as a parameter. To make this compatible with locally uploaded images, we first saved them to Firebase. Then, we passed the identified species through an npm web scraping library called WikiJS to pull the text content from the Wikipedia page. Finally, we used Google Cloud's Natural Language API to perform sentiment analysis on the Wikipedia text.
## Challenges we ran into
* Finding sources we could perform sentiment analysis on for each plant
* Being able to upload a local image for identification, which we resolved using Firebase
* Finding the appropriate API/database for plants
* Connecting the frontend with the backend
## Accomplishments that we're proud of
* Trying out Firebase and Google Cloud for the first time
* Learning cutting-edge image recognition and NLP software
* Integrating APIs to gather data
* Our beautiful UI
## What we learned
* How to manage and use Google Cloud's Natural Language API and the Pl@ntNet API
* There are a lot of libraries and APIs that already exist to make our lives easier
## What's next for Plantr
* Find ways to get carbon sequestration data about each plant
* Apply sentiment analysis to blog posts about plants to obtain better data
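The sentiment step itself is a single Natural Language API call over the scraped text. A minimal sketch, assuming credentials are configured and with sample text standing in for a real scraped page:

```python
# Minimal sketch: sentiment of scraped Wikipedia text via Google Cloud NL.
from google.cloud import language_v1

def page_sentiment(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(
        request={"document": document}).document_sentiment
    # score is in [-1, 1] (negative..positive); magnitude is overall strength.
    return sentiment.score, sentiment.magnitude

score, magnitude = page_sentiment("The oak is a hardy, well-loved tree...")
print(f"score={score:.2f}, magnitude={magnitude:.2f}")
```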
## Inspiration
Imagine you broke your EpiPen but need it immediately for an allergic reaction. Imagine being lost in the forest, bleeding from cut wounds after a fall, with no first aid kit. How will you take care of your health without nearby hospitals or pharmacies? Well, good thing for you, we have **MediFly**!! MediFly is inspired by how emergency vehicles such as ambulances take too long to reach the person in need of aid because of other cars on the road and traffic. Every second spent waiting risks someone's life. To combat this, we use **drones** as the first emergency responders, delivering medicine to save people's lives or keep them in stable condition before human responders arrive.
## What it does
MediFly allows the user to request emergency help or medication such as an EpiPen or epinephrine. First you download the MediFly app and create a personal account. Then you can log into your account and use the features when necessary. If you are in an emergency, press the "EMERGENCY" button and a list of common medication options will appear to pick from. There is also an option to search for the medication you need. Once a choice is selected, the local hospital sees the request and sends a drone to deliver the medication; human first responders are also called. The drone has a GPS tracker and the GPS location of the person it needs to reach. When the drone is close, a message is sent telling the person to go outside where the drone can see them. The camera uses facial recognition to confirm the person is indeed the registered user who ordered the medication. This level of security is important to ensure that the medication is delivered to the correct person. Once the person is confirmed, the lid of the medication compartment opens so the person can take their medication.
## How we built it
On the software side, the front end of the app was made with React, coded in JavaScript, and the back end was made with Django in Python. The text messages work through Twilio: when the drone is nearby with the medication ready to hand over, Twilio sends a message telling the user to go outdoors where the drone will be able to find them. On the hardware side, many components make up the drone: four motors, four propeller blades, an electronic speed controller, a flight controller, and 3D-printed parts such as the camera mount, the medication box holder, and some components of the drone frame. There is also a Raspberry Pi SBC attached to the drone for controlling the on-board systems, such as the cargo bay door, and for streaming video to a server that runs the face recognition algorithm.
## Challenges we ran into
Building the drone from scratch was a lot harder than we anticipated. A lot of setup was needed for the hardware, and the build itself was not easy: it consisted of a lot of taking apart, soldering, cutting, hot gluing, and rebuilding. Some of the video streaming systems did not work well at first, due to CORS blocking the requests, since we were using two different computers to run two different servers. Traditional geolocation techniques often take too long; as such, we needed to build a scheme to cache a user's location before they send a request, to prevent lag.
Additionally, the number of pages we had to build, stylize, and connect together made the site itself a notable challenge of scale.
## Accomplishments that we're proud of
We are extremely proud of the way the drone works and how it moves at quick, steady speeds while carrying the medication compartment and battery. On the software side, we are super proud of the facial recognition code and how it is able to tell different people's faces apart. The front and back end of the website/app are also really well done: we first made the front-end UI design in Figma and then implemented that design in our final website.
## What we learned
For software, we learned how to use React, various user authorization and authentication techniques, and Django. We learned how to build an accurate, efficient, and resilient face detection, recognition, and tracking system to make sure the package is always delivered to the correct person. We experimented with various ways to stream real-time video over a network, including over the longer ranges the drone needs. For hardware, we learned how to set up and construct a drone from scratch!
## What's next for MediFly
In the future we hope to expose the drone's GPS track so that the person who orders the medication can follow the drone along its path. We would also expand the Twilio text messages so that when the drone comes within a close radius of the user, it notifies them to go outside and wait for the delivery.
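The "drone is nearby" notification reduces to a distance check plus a Twilio message. A minimal sketch, with placeholder credentials and phone numbers, and an assumed 100 m threshold:

```python
# Minimal sketch: text the user when the drone is within a radius of their
# cached location. Credentials, numbers, and threshold are placeholders.
import math
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def distance_m(a, b):
    # Equirectangular approximation; fine at delivery-radius scales.
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat)
    dy = math.radians(b[0] - a[0])
    return 6371000 * math.hypot(dx, dy)

def maybe_notify(drone_pos, user_pos, user_phone, threshold_m=100):
    if distance_m(drone_pos, user_pos) < threshold_m:
        client.messages.create(
            body="Your MediFly drone is arriving. Please step outside.",
            from_="+15550006789",   # hypothetical Twilio number
            to=user_phone,
        )
```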
partial
## HouseSearch - A NWHacks 2019 Project
Contributors: Stephen Yang and Justin Aujla
### Inspiration & Goals:
In the hectic environment of our modern-day lives, buying a home has never been a more challenging and time-consuming task... At least, so I've been told. Hopefully, HouseSearch can help expedite the process by sorting houses by location, increasing affordability and convenience in the house-buying process.
### What It Does
Instead of directly sorting houses by price or number of bedrooms, we mainly sort houses by location. By inputting one's workplace, school, hobby locations, sports facilities, preferred parks, and more, we can approximate the most economical home purchase with regard to travel distance and cost.
### How We Built It
We mainly used Python, HTML, and JS, along with the Google Maps and RetsRabbit APIs.
### Challenges We Ran Into
Communicating between JavaScript and Python via POST and GET requests was a challenge for us, and understanding the Python requests library helped a lot with this. In addition, the APIs we used required lots of reading and exploring of references, which was frustrating at times.
### Accomplishments that we're proud of
For us, this was our first 24-hour hackathon. We are proud (and surprised) that we were able to stay focused throughout the night to work on the project.
### What We Learned
Documentation and references are your best friends, because they actually tell you how APIs and libraries work.
### What's next for HouseSearch
We want to implement machine learning or more advanced mathematical concepts to more realistically choose a house based on user-input locations. We would also like to increase the number of user inputs our program receives to better suit the user's needs.
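Scoring a listing by travel to the user's input locations can be sketched with the Google Maps Distance Matrix API. The API key is a placeholder, and the simple sum-of-durations weighting is an assumption:

```python
# Minimal sketch: total travel time from a listing to all user locations.
import requests

API_KEY = "YOUR_GOOGLE_MAPS_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def travel_score(listing_addr, places):
    resp = requests.get(URL, params={
        "origins": listing_addr,
        "destinations": "|".join(places),
        "key": API_KEY,
    }).json()
    elements = resp["rows"][0]["elements"]
    # Lower is better: sum of travel times (seconds) to all user locations.
    return sum(e["duration"]["value"] for e in elements if e["status"] == "OK")

score = travel_score("123 Main St, Vancouver",
                     ["UBC, Vancouver", "Downtown Gym, Vancouver"])
```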
## Inspiration
Having experienced a language barrier firsthand, witnessed its effects in our families, and reflected on equity in services, our team was inspired to create a resource to help Canadian newcomers navigate their new home. Newt aims to reduce one of the most stressful aspects of the immigrant experience by promoting more equitable access to services.
## What it does
We believe that everyone deserves equal access to health, financial, legal, and other services. Newt displays ratings on how well businesses can accommodate a user's first language, allowing newcomers to make more informed choices based on their needs. When searching for a particular service, we use a map to display several options and their ratings for the user's first language. Users can then contact businesses by writing a message in their language of choice. Newt automatically translates the message and sends a text to the business provider containing the original and translated message, as well as the user's contact information and preferred language of correspondence.
## How we built it
Frontend: React, TypeScript
Backend: Python, Flask, PostgreSQL, Infobip API, Yelp API, Google Translate, Docker
## Challenges we ran into
Representing location data within our relational database was challenging. It would not be feasible to store every possible location that users might search for within the database. We needed to find a balance between sourcing data from the Yelp API and updating the database with the results, without creating unnecessary duplicates.
## What we learned
We learned to display location data through an interactive map. To do so, we learned about react-leaflet to embed maps in React webpages. In the backend, we learned to use Infobip by reviewing its documentation, experimenting with test data, and getting help from Hack Western's sponsors. Lastly, we challenged ourselves to write unit tests for our backend functions and integrate testing into GitHub Actions to ensure every code contribution was safe.
## What's next for Newts
* Further support for translating the frontend display into each user's first language.
* Expanding backend data sources beyond the Yelp API and including other data sources more specific to user queries.
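The contact flow (translate, then text the business) might look roughly like the sketch below. The google-cloud-translate call is standard; the Infobip endpoint and payload shape follow its SMS API but should be treated as assumptions, and all keys and URLs are placeholders:

```python
# Minimal sketch: translate the user's message, then text the business.
import requests
from google.cloud import translate_v2 as translate

def send_message(text, target_lang, business_phone):
    translated = translate.Client().translate(
        text, target_language=target_lang)["translatedText"]
    body = f"{text}\n---\n{translated}"
    requests.post(
        "https://YOUR_SUBDOMAIN.api.infobip.com/sms/2/text/advanced",
        headers={"Authorization": "App YOUR_INFOBIP_KEY"},
        json={"messages": [{
            "destinations": [{"to": business_phone}],
            "text": body,
        }]},
    ).raise_for_status()
```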
## Inspiration
As university students know, after first year it is very difficult to find all the information you need for off-campus living; we wanted to solve this problem and help students find the information they need. No central website or collection of this information exists to aid students, and that's where we come in!
## What it does
Using a constantly updated database, we track what listings are available near the university for prospective housing in a given area. The results are then displayed in a meaningful and simple fashion, providing the user with all the information required to make an informed decision, such as: the relative location of housing to points of interest (e.g. the university, restaurants, gyms), comparisons of houses by price and size, and a price average to give the user a point of reference when looking at house prices.
## How we built it
Using various platforms and different languages, we built our website from many moving parts. One part collects data about available housing from the major landlords of the area and stores it in the database. The second part takes the data from the database and interprets it in a meaningful manner. That information is then displayed on a sleek and elegant website accessible to the end user.
## Challenges we ran into
Collecting data from websites despite varying HTML source and CSS, applying our data in the way most useful to the consumer, interacting with various APIs, and gluing it all together.
## Accomplishments that we're proud of
We are proud that we managed to collect 50 property listings, which would effectively provide many students with ample choices of where to live, easing the process for them. We are also proud of how well we worked together, especially since most of our team members come from different universities and had only met at the hackathon.
## What we learned
We learned integral software design skills, which we incorporated into our project's design. We also learned about different types of APIs, specifically Google's APIs, and how to interact with them.
## What's next for FindLiving.Space
We are going to scale it to incorporate listing data from more cities to assist students from other universities facing similar difficulties finding affordable housing. We would also like to offer features that benefit landlords, giving them an estimate of their property's value based on the (hopefully) thousands of property listings on the site. We want to create the ideal solution to the problem we are trying to solve.
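The collection step can be sketched roughly as below: scrape prices off a landlord's listings page and compute the average used as the price reference point. The URL and CSS selectors are hypothetical, since real pages differ per landlord:

```python
# Minimal sketch: scrape listing prices and compute the average.
import re
import requests
from bs4 import BeautifulSoup

def scrape_prices(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    prices = []
    for tag in soup.select(".listing .price"):   # hypothetical selector
        match = re.search(r"[\d,]+", tag.get_text())
        if match:
            prices.append(float(match.group().replace(",", "")))
    return prices

prices = scrape_prices("https://example-landlord.ca/listings")
if prices:
    print(f"{len(prices)} listings, average ${sum(prices)/len(prices):,.0f}")
```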
partial
## Inspiration
For the manager of a small business -- be it a store, restaurant, gym, or even a movie theater -- improving the customer experience and understanding what's going on are tremendously important. Analytics on when people enter the building, which areas they spend time at, and which crowds and lines are forming can provide managers with incredibly useful insights -- from identifying parts of the building layout that are poorly designed and causing congestion, to figuring out that certain table setups or shop items are particularly engaging, to simply having a better idea of what's going on in the business and being able to make data-driven decisions about how to improve.
## What it does
Given a live video feed from an overhead camera, Crowd Insights' AI algorithms detect human heads within the video and use this positional data to identify lines and clusters of people and to create heatmaps. The small business owner can then examine this data to learn about human traffic flow within their store over a specified period of time. There are a variety of use cases for this data: congestion tracking, popular in-store hotspots, long lines, and more. By analyzing these trends over time, small business owners can make informed decisions on how to optimize customers' physical interaction with the store. For example, if they notice that lots of people tend to group up around a certain product, they can place that product near the back of the store to prevent crowding around the entrance. Other use cases include event management: organizers such as the TreeHacks team can use this technology to monitor the congestion of each room and help disperse people from highly crowded rooms into open work spaces. They can monitor lines, e.g. for food or networking, and figure out novel ways to deal with long lines and heavy foot traffic.
## How we built it
We built the theory and data-science toolkits, machine learning model, frontend, and backend separately. For the machine learning, we used the PyTorch FCHD fully convolutional head detector, running on a Google Cloud VM. We then passed the list of heads to the graph theory library we built, which constructed a minimum spanning tree over the points, removed edges that were too long, and performed elliptical fits to determine whether each group of points was a line or a cluster. We also aggregated human location data over time to create a heatmap of the environment showing which places see the most activity. Firebase is used to communicate between the head detector and the edge computer (such as a Raspberry Pi) that sends the webcam feed. Finally, a web server using ReactJS displays the results.
## Challenges we ran into
One main issue was finding a vision model that could provide dense data on human positions in a camera frame. Most models do decently at closer distances, but when monitoring areas more than 15 feet from the camera, precision becomes an issue. Because we needed that density in our data, we had to test many model architectures and fusion techniques to get the best results. We also had a lot of trouble rendering the line/cluster data from Firebase in a real-time graph on the website. This was tough because no member had extensive experience with real-time updates or with push/pull requests between Firebase and the web app.
To solve this, we worked together to break the problem into two parts: collecting and parsing data from Firebase, and displaying the data in a dynamic graph. Lastly, this was our first time incorporating a big chunk of frontend programming into an application. Our experience with JavaScript, HTML, and Firebase was limited, so it took us a long time to pick up the syntax from scratch. However, this also made the project really impactful, as it provided an exceptional learning opportunity.
## Accomplishments that we're proud of
We implemented simple but effective algorithms for recognizing clusters of crowds and lines. We used minimum spanning trees and ellipse fitting to identify clusters, then took clusters with particularly elongated ellipses and fit them with best-fit lines. We developed a decision tree that applied knowledge from all branches of computer science - from theory to machine learning and software engineering - in a product that became more than the sum of its parts. The final web product took tens of hours to complete, and we're confident that we got it right.
## What we learned
* Lots of new frontend development and algorithm design: ReactJS, ChartJS, CanvasJS, Plotly, Firebase
* ML head- and body-detection algorithms
* Kruskal's minimum spanning tree, automatic k-means clustering, depth-first search
* Firebase: real-time graphs, and how to pipe data from the Jetson through Firebase to the web app

Even though the project was divided into frontend and backend portions, all members came to understand the implementation on both sides. Whenever we ran into roadblocks, we worked as a unified team. The core takeaway from this project is our improved understanding of real-time databases, machine learning models, and frontend program structure.
## What's next for Crowd Insights AI
One big next step would be applying mapping techniques to create a 3D map of the shop and then localizing detected crowds within it, which would let a business owner see exactly which shelves or tables are getting crowded. Furthermore, performing spatial transforms on the angled camera footage would allow us to recover 3D positions from the 2D frames. We'd also want to apply optical flow and motion tracking to see how people move through the space and what slows them down.
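The grouping logic described above (build an MST, prune long edges, ellipse-fit each group) can be sketched as follows; the pruning distance and elongation threshold are assumptions to tune per camera:

```python
# Minimal sketch: group detected head positions via an MST with pruned
# edges, then call each group a "line" or "cluster" by ellipse elongation.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def group_heads(points, max_edge=60.0, line_ratio=4.0):
    dists = squareform(pdist(points))
    mst = minimum_spanning_tree(dists).toarray()
    mst[mst > max_edge] = 0                      # prune long edges
    n_groups, labels = connected_components(mst + mst.T, directed=False)
    results = []
    for g in range(n_groups):
        pts = points[labels == g]
        if len(pts) < 3:
            continue
        # Eigenvalues of the covariance give the squared ellipse axes.
        evals = np.sort(np.linalg.eigvalsh(np.cov(pts.T)))[::-1]
        elongation = np.sqrt(evals[0] / max(evals[1], 1e-9))
        results.append(("line" if elongation > line_ratio else "cluster", pts))
    return results
```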
## Inspiration
We got lost so many times inside MIT... And no one could help us :( No Google Maps, no Apple Maps, NO ONE. Ever since, we have dreamed about the idea of a more precise navigation platform that works inside buildings. And here it is. But that's not all: as traffic GPS apps usually do, we also want to avoid the big crowds that sometimes form in corridors.
## What it does
Using just the PDF of the floor plans, it builds a digital map and creates the data structures needed to find the shortest path between two points, taking into account walls, stairs, and even elevators. Moreover, using simulated crowd data, it avoids big crowds so that walking inside buildings is safer and faster.
## How we built it
Using k-means, we created nodes and clustered them, choosing the number of clusters with the elbow (diminishing-returns) method. We obtained the hallway centers by combining scikit-learn tools and filtering the results with k-means. Finally, we created the edges between nodes, simulated crowd hotspots, and calculated the shortest path accordingly. Each Wi-Fi hotspot uses the number of devices connected to the internet to estimate the number of nearby people. This information allows us to weight paths and penalize those with large nearby crowds. A path can be searched on a website powered by Flask, where the corresponding result is shown.
## Challenges we ran into
At first, we didn't know the best approach to convert a PDF map into useful data. The maps we worked with are taken from the MIT intranet and we are not allowed to share them, so our web app cannot be published as long as it uses those maps... Furthermore, we had limited experience with machine learning and computer vision algorithms.
## Accomplishments that we're proud of
We're proud of having developed a useful application that can be employed by many people and extended automatically to any building thanks to our map recognition algorithms. We're also proud of using real data from sensors (Wi-Fi hotspots or similar devices) to detect crowds and penalize nearby paths.
## What we learned
We learned more about Python, Flask, computer vision algorithms, and machine learning. Also about friendship :)
## What's next for SmartPaths
The next steps would be honing the machine learning part and using real data from sensors.
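The crowd-aware routing reduces to a shortest-path search over penalized edge weights. A minimal sketch with networkx, where the graph, crowd counts, and penalty factor are all illustrative:

```python
# Minimal sketch: edge weight = distance * (1 + penalty * estimated crowding).
import networkx as nx

G = nx.Graph()
# (node_a, node_b, corridor length in meters)
for a, b, d in [("lobby", "hall1", 30), ("hall1", "room", 20),
                ("lobby", "hall2", 25), ("hall2", "room", 35)]:
    G.add_edge(a, b, dist=d)

# People counts near each edge, e.g. derived from Wi-Fi device counts.
crowding = {("lobby", "hall1"): 12, ("hall1", "room"): 8,
            ("lobby", "hall2"): 1, ("hall2", "room"): 0}

PENALTY = 0.1
for (a, b), people in crowding.items():
    G[a][b]["weight"] = G[a][b]["dist"] * (1 + PENALTY * people)

print(nx.shortest_path(G, "lobby", "room", weight="weight"))
# -> ['lobby', 'hall2', 'room']: longer, but it avoids the crowded corridor.
```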
## Inspiration
Initially, we struggled to find a project idea. After cycling through dozens of ideas and the occasional hacker's block, we were still faced with a huge ***blank space***. In the midst of all our confusion, it hit us that this feeling of desperation and anguish is familiar to all thinkers and creators. There came our inspiration - the search for inspiration. Tailor is a tool that enables artists to overcome their mental blocks in a fun and engaging manner while leveraging AI technology. AI is very powerful, but finding the right prompt can sometimes be tricky, especially for children or those with special needs. With our easy-to-use app, anyone can find inspiration as swiftly as possible.
## What it does
The site helps artists generate creative prompts for DALL·E. By clicking the "add" button, a React component containing a random noun is added to the main container. Users can then specify the color and size of this noun. They can add as many nouns as they want, then specify the style and location of the final artwork. After hitting submit, a prompt is generated and sent to OpenAI's API, which returns an image.
## How we built it
It was built using Remix (a React metaframework), OpenAI's API, and a random noun generator API. Tailwind CSS was used for styling, which made it easy to create beautiful components.
## Challenges we ran into
Getting Tailwind installed, and installing dependencies in general. Sometimes our API wouldn't connect, and OpenAI rotated our keys since we were developing together. Even with Tailwind, it was sometimes hard to get the CSS to do what we wanted. Passing functions and state between parent and child React components was also difficult. We tried to integrate Twilio with an API call but it wouldn't work, so we had to set up a separate backend on Vercel and manually paste the image link and phone number. Also, we learned Remix can't use react-speech libraries, which was annoying.
## Accomplishments that we're proud of
* Great UI/UX!
* Connecting to the OpenAI DALL·E API
* Coming up with a cool domain name
* Sleeping more than 2 hours this weekend
## What we learned
We weren't really familiar with React, as none of us had really used it before this hackathon. We really wanted to level up our frontend skills and selected Remix, a metaframework based on React, to do multipage routing. It turned out to be a little overkill, but we learned a lot and are thankful to the mentors. They showed us how to avoid overusing Hooks, troubleshoot API connection problems, and use asynchronous functions. We also learned many more Tailwind CSS classes and how to use gradients.
## What's next for Tailor
It would be cool to have this website as a browser extension, maybe just to make it more accessible, or even to have it scrape websites for AI prompts. It would also be nice to implement speech-to-text, maybe through AssemblyAI.
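The prompt assembly plus image call can be sketched as below, shown with the pre-1.0 openai Python SDK for illustration (Tailor's actual code is Remix/JavaScript); the noun, style, and key values are hypothetical:

```python
# Minimal sketch: assemble the DALL-E prompt from user selections,
# then request one image from OpenAI.
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

def build_prompt(nouns, style, location):
    # Each noun carries the user-chosen color and size.
    parts = [f"a {size} {color} {noun}" for noun, color, size in nouns]
    return f"{', '.join(parts)}, in {location}, {style} style"

prompt = build_prompt(
    nouns=[("lighthouse", "red", "tall"), ("whale", "blue", "huge")],
    style="watercolor",
    location="a stormy sea",
)
response = openai.Image.create(prompt=prompt, n=1, size="512x512")
print(response["data"][0]["url"])  # link to the generated image
```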
winning
## Inspiration
During the AWS chatbot workshop, we came up with the idea of using voice recognition to solve a problem. We landed on creating a more consistent and cost-efficient way to take restaurant orders, along with an easy-to-use platform for restaurant workers to keep track of them.
## What it does
SpeakBites uses cutting-edge speech recognition technology to accurately parse and process customer orders in real time. Once an order is placed, it's instantly displayed on a user-friendly dashboard for kitchen staff, ensuring swift and precise preparation.
## How we built it
We built this application using React as our frontend framework and Python Flask as our backend to handle server requests. We used Auth0 for authentication and the OpenAI API for order processing.
## Challenges we ran into
Our first struggle was finding a way to turn a transcript into clean, consistent JSON for the restaurant staff's order dashboard. We realized that OpenAI's ChatGPT API could handle this easily if given the right context. Without the right context and clear instructions on what to output, the API returned inconsistent JSON that would completely break the application.
## Accomplishments that we're proud of
We are proud that we completed a functioning application based on our initial idea. From whiteboard to code in less than 24 hours, we implemented the base functionality and presented a working application.
## What we learned
Throughout the past 36 hours, we learned that simplicity is key. During our initial whiteboarding session, it was easy to go down rabbit holes and come up with complex yet useful features. There was one problem though: there was simply not enough time to implement them. We took the time to focus on the user and how they would interact with the platform. Which features were the most pertinent? How could we deliver them as simply as possible? What must we have, and what would merely be nice to have? Finding clear answers to these questions early on was important in leading us to a final working product.
## What's next for SpeakBite
After this hackathon, we are going to fix major bugs. The app is not perfect and could use a lot of work. As our goal for the past 36 hours was simply to create a working prototype, perfecting the application was something we could not afford to do.
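A minimal sketch of the transcript-to-JSON step, shown with the pre-1.0 openai Python SDK; the system prompt and schema here are illustrative, but pinning the output shape and setting temperature to zero is the kind of context that keeps the JSON consistent:

```python
# Minimal sketch: force the model to emit a fixed JSON schema for orders.
import json
import openai

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder

SYSTEM = (
    "You convert restaurant order transcripts into JSON. Respond with ONLY "
    'a JSON object shaped like {"items": [{"name": str, "quantity": int, '
    '"notes": str}]}. No prose, no markdown fences.'
)

def parse_order(transcript):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # determinism helps keep the schema stable
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": transcript}],
    )
    return json.loads(resp["choices"][0]["message"]["content"])

print(parse_order("Can I get two burgers, no onions, and a large coke?"))
```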
## Inspiration
Physiotherapy is an important part of rehabilitation from physical injuries. A physiotherapist helps restore a patient's mobility and prevents future injuries. However, physiotherapy can be too expensive and time-consuming to repeatedly attend sessions at a clinic. Also, because of the pandemic, many Canadians could not regularly attend physiotherapy sessions due to limited service. Enter Smart Heal! A low-cost, portable, easy-to-use personal device that acts as your own virtual physiotherapist.
## What it does
Smart Heal gives you instructions and personally guides you through physiotherapy exercises in a fun and intuitive manner from the comfort of your home. The system consists of a Raspberry Pi and an IMU sensor, which together track the movement and orientation of any injured body part. Smart Heal also comes with a desktop app I made that renders the user's current wrist orientation in a 3D virtual environment in real time. The exercises are gamified to enhance the user experience. For example, I provide virtual targets that users try to hit; in doing so, they end up moving their wrist into a proper exercise position. With your wrist, you can pretend you are controlling an airplane through different maneuvers that actually exercise your wrist - all while being fun for all ages!
## How we built it
The Raspberry Pi is the central hub responsible for processing all sensor data. The Pi interfaces with the IMU and runs a Kalman filter. The Kalman filter, which I specifically tuned, makes sure that the raw IMU data is as smooth as can be. With the filter turned on, jerky hand movements are stabilized in the 3D visualizer. The desktop visualizer app displays the IMU orientation feedback in the virtual environment. Once the desktop app is initialized, it automatically seeks out the Smart Heal device (the Raspberry Pi) on the home Wi-Fi network and connects to the IMU data stream. This communication backend is handled by ROS (Robot Operating System). Once the IMU data stream is received, the app runs through each required step of a specific wrist exercise (updating text instructions and colours for the targets).
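The smoothing can be illustrated with a scalar Kalman filter over a single orientation angle. A minimal sketch, assuming degree-valued IMU readings; the noise constants are the tuning knobs mentioned above, and these particular values are illustrative:

```python
# Minimal sketch: 1-D Kalman filter smoothing one IMU angle (e.g. wrist pitch).
class Kalman1D:
    def __init__(self, q=0.01, r=0.5):
        self.q = q          # process noise: how fast the true angle drifts
        self.r = r          # measurement noise: how jittery the IMU is
        self.x = 0.0        # current angle estimate (degrees)
        self.p = 1.0        # estimate uncertainty

    def update(self, measurement):
        self.p += self.q                    # predict: uncertainty grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (measurement - self.x)
        self.p *= (1 - k)
        return self.x

kf = Kalman1D()
for raw in [10.2, 9.7, 35.0, 10.5, 10.1]:    # 35.0 = a jerky spike
    print(f"raw={raw:5.1f} -> smoothed={kf.update(raw):5.1f}")
```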
## Inspiration
Everyone gets tired waiting for large downloads to complete. BitTorrent is awesome, but you may not have a bunch of peers ready to seed your file. Fastify, a download accelerator as a service, solves both of these problems and regularly enables 4x download speeds.
## What it does
The service accepts a URL and spits out a `.torrent` file. This `.torrent` file lets you tap into Fastify's speedy seed servers for your download. We even cache some downloads, so popular files can be pulled from Fastify even faster! Without any cache hits, we saw the following improvements in download speeds with our test files:
```
|                   | 512Mb    | 1Gb    | 2Gb     | 5Gb     |
|-------------------|----------|--------|---------|---------|
| Regular Download  | 3 mins   | 7 mins | 13 mins | 30 mins |
| Fastify           | 1.5 mins | 3 mins | 5 mins  | 9 mins  |
|-------------------|----------|--------|---------|---------|
| Effective Speedup | 2x       | 2.33x  | 2.6x    | 3.3x    |
```
*The test was performed with slices of the Ubuntu 16.04 ISO file, on the eduroam network.*
## How we built it
We created an AWS cluster and began writing Go code to accept requests, plus a front-end to send them. Over time we added more workers to the AWS cluster and improved the front-end. Also, we gratefully received some much-needed Vitamin Water.
## Challenges we ran into
The BitTorrent protocol and architecture were more complicated to seed for than we thought. We were able to create `.torrent` files that enabled downloads in some BitTorrent clients but not others. Also, our "buddy" (*\*cough\** James *\*cough\**) ditched our team, so we were down to only 2 people off the bat.
## Accomplishments that we're proud of
We're able to make large downloads 2-5 times as fast as a regular download - and that's with a cluster of only 4 computers.
## What we learned
BitTorrent is tricky. James can't be trusted.
## What's next for Fastify
More servers in the cluster. Demo soon, too.
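Producing the `.torrent` itself comes down to piece-hashing the payload and bencoding a metainfo dictionary. A minimal Python sketch (our actual service is written in Go, and the tracker URL here is hypothetical):

```python
# Minimal sketch: build a single-file .torrent pointing at our tracker.
import hashlib
import os

def bencode(obj):
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, str):
        return bencode(obj.encode())
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):  # keys must be sorted per the spec
        return b"d" + b"".join(
            bencode(k) + bencode(v) for k, v in sorted(obj.items())) + b"e"
    raise TypeError(type(obj))

def make_torrent(path, piece_len=262144):
    pieces = b""
    with open(path, "rb") as f:
        while chunk := f.read(piece_len):
            pieces += hashlib.sha1(chunk).digest()  # concatenated piece hashes
    info = {"name": os.path.basename(path), "length": os.path.getsize(path),
            "piece length": piece_len, "pieces": pieces}
    return bencode({"announce": "http://tracker.fastify.example/announce",
                    "info": info})

with open("download.torrent", "wb") as out:
    out.write(make_torrent("ubuntu-16.04.iso"))
```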
losing
**Inspiration**
iSho aims to spark interest in the beauty of both the interconnectivity of global financial markets and the field of data visualization. Using the Kensho API and the D3.js library, we wanted to visualize and transform multifaceted data about these markets by displaying connections within the same industry and across multiple industries.
**What it does**
In a force graph, companies are represented as nodes which are linked by industry, by size, or by customer-client or supplier-consumer relationships. These relationships are clearly visible in the graph, and each industry is highlighted to emphasize the cross-industry relationships. The user interacts with the interface by mousing over and clicking on the various entities of the graph, which reveals the number of connections for each entity. The user can also drag the entities around to view the connections better.
**How we built it**
We implemented a Kensho client from which we extracted information about equity relationships by calling the REST API. These requests were handled through the Kensho API client using Python, stored as JSON objects, and visualized through D3.js as an interactive graph.
**Challenges we ran into**
The API was throttled heavily in the middle of the night, which limited the amount of work we were able to do. JavaScript was also new to some of us.
**Accomplishments that we're proud of**
We created a polished JavaScript application and visualized the entire S&P 500.
**What we learned**
In addition to honing our JavaScript and front-end web development skills, we also learned that perseverance is key.
**What's next for iSho**
Extracting more information from the Kensho database, such as timelines related to each of the entities, will allow us to visualize other branches of Kensho's knowledge graphs. We will extrapolate from these visualizations to make predictions about financial markets and to gain insights into relationships that are otherwise difficult to observe.
## Inspiration
Like most people, our team has found it difficult to adapt to e-learning during the pandemic. Among the difficulties students often face in e-learning is dealing with the large quantity of asynchronous lecture content. By providing the ability to download sped-up copies and transcriptions of online lecture content, we hope to alleviate students' e-learning difficulties.
## What it does
The app allows uploads in a range of video formats. Users can download captions as convenient .txt files (eliminating the need to scroll through long videos to find where the prof mentioned which cellular structure is known as the powerhouse of the cell). Users can also download a copy of the video sped up to a comfortable level, for more efficient review sessions.
## How we built it
We used Node.js, Express, React, and Python's SpeechRecognition and MoviePy libraries to build a web application addressing the issues we identified in e-learning.
## Challenges we ran into
Since our backend requires Node.js to interface with Python, we found it difficult to manage the boundary between the two languages. We settled on Node.js's `child_process` API to resolve these issues.
## Accomplishments that we're proud of
We're proud of the application's streamlined design, giving users quick access to its main functionality.
## What we learned
We learned about Python speech recognition APIs and about handling file upload/download APIs.
## What's next for Transcription App
The next step is enabling file sharing between users.
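The two Python jobs that Node spawns through `child_process` can be sketched as follows. File paths are hypothetical, and `recognize_google` is best suited to short clips (real code would chunk the audio):

```python
# Minimal sketch: speed up a lecture video and transcribe its audio.
import speech_recognition as sr
from moviepy.editor import VideoFileClip, vfx

def speed_up(src, dst, factor=1.5):
    clip = VideoFileClip(src).fx(vfx.speedx, factor)
    clip.write_videofile(dst)

def transcribe(src, txt_out):
    # Extract the audio track to WAV, then run speech recognition over it.
    VideoFileClip(src).audio.write_audiofile("temp.wav")
    recognizer = sr.Recognizer()
    with sr.AudioFile("temp.wav") as source:
        audio = recognizer.record(source)
    with open(txt_out, "w") as f:
        f.write(recognizer.recognize_google(audio))

speed_up("lecture.mp4", "lecture_1.5x.mp4")
transcribe("lecture.mp4", "captions.txt")
```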
# Welcome to TrashCam 🚮🌍♻️
## Where the Confusion of Trash Sorting Disappears for Good
### The Problem 🌎
* ❓ Millions of people struggle with knowing how to properly dispose of their trash. Should it go in compost, recycling, or garbage?
* 🗑️ Misplaced waste is a major contributor to environmental pollution and the growing landfill crisis.
* 🌐 Local recycling rules are confusing and inconsistent, making proper waste management a challenge for many.
### Our Solution 🌟
TrashCam simplifies waste sorting through real-time object recognition, turning trash disposal into a fun, interactive experience.
* 🗑️ Instant Sorting: With TrashCam, you never have to guess. Just scan your item, and our app will tell you where it belongs—compost, recycling, or garbage.
* 🌱 Gamified Impact: TrashCam turns eco-friendly habits into a game, encouraging users to reduce their waste through challenges and a leaderboard.
* 🌍 Eco-Friendly: By helping users properly sort their trash, TrashCam reduces contamination in recycling and compost streams, helping protect the environment.
### Experience It All 🎮
* 📸 Snap and Sort: Take a picture of your trash and TrashCam will instantly categorize it using advanced object recognition.
* 🧠 AI-Powered Classification: After detecting objects with Cloud Vision and COCO-SSD, we pass them to Gemini, which accurately classifies the items, ensuring they're sorted into the correct waste category.
* 🏆 Challenge Friends: Compete on leaderboards to see who can make the biggest positive impact on the environment.
* ♻️ Learn as You Play: Discover more about what can be recycled, composted, or thrown away with each interaction.
### Tech Stack 🛠️
* ⚛️ Next.js & TypeScript: Powering our high-performance web application for smooth, efficient user experiences.
* 🛢️ PostgreSQL & Prisma: Storing and managing user data securely, ensuring fast and reliable access to information.
* 🌐 Cloud Vision API & COCO-SSD: Using state-of-the-art object recognition to accurately identify and classify waste in real time.
* 🤖 Gemini AI: Ensuring accurate classification of waste objects to guide users in proper disposal practices.
### Join the Movement 🌿
TrashCam isn't just about proper waste management—it's a movement toward a cleaner, greener future.
* 🌍 Make a Difference: Every time you sort your trash correctly, you help reduce landfill waste and protect the planet.
* 🎯 Engage and Compete: By playing TrashCam, you're not just making eco-friendly choices—you're inspiring others to do the same.
* 🏆 Be a Waste Warrior: Track your progress, climb the leaderboard, and become a leader in sustainable living.
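Under the hood, the snap-and-sort pipeline pairs a vision labeler with an LLM classifier. A minimal sketch of that idea in Python (TrashCam itself is Next.js/TypeScript; the Gemini model name, prompt, and keys are assumptions):

```python
# Minimal sketch: Cloud Vision labels the photo, Gemini maps the labels
# to compost / recycling / garbage. Keys and model name are placeholders.
from google.cloud import vision
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_KEY")

def classify_trash(image_path):
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = vision.ImageAnnotatorClient().label_detection(image=image)
    names = [l.description for l in response.label_annotations[:5]]

    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name
    prompt = (f"An object was detected with labels {names}. Answer with "
              "exactly one word: compost, recycling, or garbage.")
    return model.generate_content(prompt).text.strip().lower()

print(classify_trash("banana_peel.jpg"))  # -> "compost"
```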
losing
## Inspiration
Many people around the world do not know what health insurance is or how it can benefit them. For example, during the COVID-19 pandemic, many people in India did not have access to health insurance, which contributed to many deaths and left many families in debt from expensive medical bills. Our hope is to educate people about health insurance and provide greater access to those resources, so everyone can lead happier and healthier lives.
## What it does
The web application allows users to interact with and learn about health insurance as well as various health topics. The personalized account feature lets users view and edit their health data. Users are also able to search for nearby hospitals. These resources give users the information they need, all in one convenient place.
## How we built it
We implemented a MERN stack for our web application. The front end integrates React with a Node.js and Express server to create a functional UI. The back end uses MongoDB for database management and Google API keys for the hospital-mapping feature of our website. JavaScript and HTML/CSS were also used to style the UI.
## Challenges we ran into
One of the biggest challenges was connecting the backend through a server and querying the data to be used within the application. We had also planned a 24-hour AI chatbot that could answer questions, but due to time constraints it was not fully completed. There were also challenges getting API keys from Google, which is why the backend does not have all of its structural features.
## Accomplishments that we're proud of
Even though there were obstacles, everyone on the team learned something new. The team's spirit and willingness to persevere helped everyone contribute to the project. We are proud of the Figma prototype showcasing our idea. We are also happy to have built a web application starting from a basic understanding of web development. Also, being able to collaborate through Git was a huge achievement for the entire team.
## What we learned
Everyone learned how to create a full-stack website using a MERN stack, which taught us the key differences between frontend and backend development. Some team members learned the basics of large language models and how they could be used to train an AI chatbot. The beginners on the team also learned how to code in JavaScript to create a web application.
## What's next for Assured Health
The main goal is to create a more elegant user interface and make the backend fully functional. We hope to build a website that is useful and accessible to all as we continue our mission of educating people about health insurance.
## Inspiration
An important aspect of health is being able to interact efficiently with healthcare professionals in order to get much-needed help. Our group aimed to create an all-in-one health tracking app that would fulfill three main use cases. Firstly, it is common for people to misremember the details of symptoms they have been experiencing when talking to their doctor. If somebody has fallen ill and wishes to track their symptom progression over time, then when they go to the doctor they can have easy access to real-time health stats describing their symptoms, ensuring efficient and accurate interactions with their doctor. A second use case is when the account owner is a caregiver for someone else (e.g. a child or an elderly person). If someone's child is sick, it would be helpful to track the child's symptoms so that their progression can be described accurately at the doctor's office. We wanted to enable this through multiple health "Profiles" connected to one account. A third use case is the user simply wanting to track their own health over time.
## What it does
The Figma prototype covers all of our use cases. Our implementation covers the third use case.
## How we built it
One of our group-mates made the Figma prototype. It is very thorough and includes essentially all of the functionality we intended to implement. Another group-mate worked on building out as much of the Figma design as possible using React and Tailwind. Two of our group-mates built the back end using Node.js with an Express server.
## Challenges we ran into
* We ran into a lot of back-end and dependency issues, which reduced the functionality we could deliver.
## Accomplishments that we're proud of
We are proud of how nice our front end and Figma prototype look; they give really good insight into how we wanted all of the functionality to work! Even though we weren't able to deliver all the functionality we wanted, we are also proud that we connected the front end to a back-end Express server.
## What we learned
* On the back end, we learned a lot about different database storage techniques and how to set up endpoints on the server side of the code base.
* We also learned how to prototype in Figma using components.
* We also learned how to handle click events.
## What's next for Harmony Health
## Inspiration
Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. Between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall, the process can be overwhelming. We decided that there had to be a better way, and that we would create it - which brings us to EZ-Med!
## What it does
This web app changes the logistics of visiting people in the hospital. Our primary features include a home page with patient updates that give the patient's current status based on recent logs from the doctors and nurses. Next is the resources page, meant to connect the family with the medical team assigned to their loved one and to provide resources for the grief and hardships that come with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are visiting and then back to the parking lot afterwards, since we know these small details can be frustrating during a hospital visit. Lastly, we have patient-update and login screens; both run on a database we set up, which populates the patient-updates screen and validates the login data.
## How we built it
We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component. We also used the MappedIn SDK to integrate our map service seamlessly into the project.
## Challenges we ran into
We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried for hours, but after much confusion around redirect issues, we had to completely change course about halfway in :( In the end, we learned a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left.
## Accomplishments that we're proud of
We're proud that we could pivot after much struggle with React Native. We are also proud that we decided to explore healthcare, an area none of us had ever interacted with in a programming setting.
## What we learned
We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned the value of playing around with different frameworks before committing to one for an entire hackathon, haha!
## What's next for EZ-Med
The next step for EZ-Med is to iron out all the bugs and get it fully functioning.
losing
## Inspiration
Not wanting to keep moving my stuff around all the time while moving between SF and Waterloo, Canada.

## What it does
It calls a Postmate to pick up your items, which are then delivered to our secure storage facility. The Postmate is issued a one-time-use code for the lock on our facility and stores the item. When the user wants their item back, they simply request it and it is there in minutes.

## How I built it
The stack is Node + Express and the app is on Android. It is hosted on Azure. We used the Postmates and HERE APIs. A sketch of how the one-time lock codes could work follows below.

## Challenges I ran into

## Accomplishments that I'm proud of
A really sleek and well-built app! The API is super clean, and the Android interface is sexy.

## What I learned

## What's next for Stockpile
Better integrations with IoT devices and better item management.
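How Stockpile actually provisions the one-time lock codes isn't described, so the following is only one plausible approach, assuming the lock accepts short numeric codes with an expiry.

```
# A minimal sketch, assuming the facility lock accepts short numeric codes;
# illustrative only, not Stockpile's real provisioning logic.
import secrets
import time

issued = {}  # code -> expiry timestamp

def issue_one_time_code(ttl_seconds=3600):
    code = f"{secrets.randbelow(10**6):06d}"  # e.g. "049213"
    issued[code] = time.time() + ttl_seconds
    return code

def redeem(code):
    expiry = issued.pop(code, None)  # pop makes the code single-use
    return expiry is not None and time.time() < expiry
```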
## Inspiration
We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This happens mainly for two reasons: the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and cost reduction. We quickly realized that this idea of coordinating rides could be used not just for ride sharing to the airport, but for transportation to anywhere!

## What it does
Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site creates these pairings using a matching algorithm and automatically connects students with their matches through email and a live chatroom on the site.

## How we built it
We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated the SendGrid API for automatic email notifications for matches.

## Challenges we ran into

## Accomplishments that we're proud of
Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time.

## What we learned

## What's next for Runway
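Runway's actual matching algorithm lives in Wix Code and isn't shown, but the grouping idea can be sketched language-agnostically. Here is a minimal Python version that groups trips by destination and departure-time window; the field names and thresholds are assumptions.

```
# Illustrative matching pass: group trips to the same destination whose
# departure times fall within a shared window. Data shapes are assumed.
from collections import defaultdict

def match_trips(trips, window_minutes=30, max_group=4):
    """trips: dicts like {"user": ..., "dest": "SFO", "depart": minutes_since_midnight}"""
    by_dest = defaultdict(list)
    for t in trips:
        by_dest[t["dest"]].append(t)
    groups = []
    for dest, ts in by_dest.items():
        ts.sort(key=lambda t: t["depart"])
        current = []
        for t in ts:
            # Start a new group if full or departure times drift too far apart
            if current and (len(current) == max_group or
                            t["depart"] - current[0]["depart"] > window_minutes):
                groups.append(current)
                current = []
            current.append(t)
        if current:
            groups.append(current)
    return groups
```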
## Inspiration
We wanted to explore what GCP has to offer in a practical sense, while trying to save money as poor students.

## What it does
The app tracks your location and, using the Google Maps API, calculates a geofence that notifies you of the restaurants you are near and lets you load coupons that are valid.

## How we built it
React Native, Google Maps for pulling the location, Python for the web scraper (<https://www.retailmenot.ca/>), Node.js for the backend, and MongoDB to store authentication, location, and coupons.

## Challenges we ran into
React Native was fairly new to us, linking a Python script to a Node backend, and connecting Node.js to React Native.

## What we learned
New exposure to APIs, and experience linking tools together.

## What's next for Scrappy.io
Improvements to the web scraper, and potentially expanding beyond restaurants.
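A minimal sketch of the geofence test Scrappy.io describes, assuming a simple radius check around each restaurant; the radius and data shapes are illustrative, not the actual implementation.

```
# Sketch of a radius-based geofence check; 200 m is an assumed radius.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two (lat, lon) points
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearby_restaurants(user, restaurants, radius_m=200):
    return [r for r in restaurants
            if haversine_m(user["lat"], user["lon"], r["lat"], r["lon"]) <= radius_m]
```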
partial
## Inspiration We wanted to protect our laptops with the power of rubber bands. ## What it does It shoots rubber bands at aggressive screen lookers. ## How we built it Willpower and bad code. ## Challenges we ran into Ourselves. ## Accomplishments that we're proud of Having something. Honestly. ## What we learned Never use continuous servos. ## What's next for Rubber Security IPO
## Inspiration
We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information available in the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.

## What it does
We have set up a signal that, if performed in front of the camera, a machine learning algorithm can detect and use to notify authorities that they should check out the location, whether to catch a potentially suspicious suspect or simply to be present and keep civilians safe.

## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project (sketched below).

## Challenges we ran into
Using the Innovation Factory API; the fact that the cameras are located very far away; the machine learning algorithms unfortunately being an older version that would not compile with our code; and finally the frame rate on the playback of the footage when running the algorithm through it.

## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project.
Donya: Getting to know the basics of how machine learning works.
Alok: Learning how to deal with unexpected challenges and look at them as a positive change.
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.

## What we learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no or incomplete information.

## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
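The team's pre-trained model isn't named, so the sketch below only illustrates the surrounding frame loop: `detect_pose` stands in for the model, and the "signal held for N frames" rule is an assumed stand-in for their distress signal.

```
# Illustrative frame loop only; detect_pose and the wrists-above-head rule
# are hypothetical stand-ins for the team's pre-trained model and signal.
import cv2  # opencv-python

HOLD_FRAMES = 30  # ~1 second at 30 fps

def watch(stream_url, detect_pose, notify):
    cap = cv2.VideoCapture(stream_url)
    held = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        pose = detect_pose(frame)  # hypothetical: returns keypoints or None
        signalling = pose is not None and pose["wrists_above_head"]
        held = held + 1 if signalling else 0
        if held == HOLD_FRAMES:
            notify()  # alert authorities with the camera's location
    cap.release()
```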
OUR VIDEO IS IN THE COMMENTS!! THANKS FOR UNDERSTANDING (WIFI ISSUES)

## Inspiration
As a group of four students who completed 4 months of online school and are going into our second internship, and our first fully remote one, we were all nervous about how our internships would transition to remote work. When reminiscing about pain points that we faced in the transition to an online work term this past March, the one pain point that we all agreed on was a lack of connectivity and loneliness. Trying to work alone in one's bedroom, after experiencing life in the office where colleagues were a shoulder's tap away for questions about work amid the noise of keyboards clacking and people zoned into their work, is extremely challenging and demotivating, which decreases happiness and energy, and thus productivity (which decreases energy, and so on...). Having a mentor and steady communication with our teams is something that we all valued immensely during our first co-ops.

In addition, some of our workplaces had designated exercise times, or even pre-planned one-on-one activities, such as manager-coop lunches or walk breaks with company walking groups. These activities and rituals bring structure to a sometimes mundane day, which allows the brain to recharge and return to work fresh and motivated. Upon the transition to working from home, we found that some days we'd work through lunch without even realizing it, and some days we would be endlessly scrolling through Reddit, as there was no one there to check in on us and make sure that we were not blocked. Our once much-too-familiar workday structure seemed to completely disintegrate when there was no one there to introduce structure, hold us accountable, and gently build proper breaks into the day. We took these gestures for granted in person, but now they seemed like a luxury, almost impossible to attain.

After doing research, we noticed that we were not alone: a 2019 Buffer survey asked users to rank their biggest struggles working remotely. Unplugging after work and loneliness were the most common (22% and 19% respectively). <https://buffer.com/state-of-remote-work-2019>

We set out to create an application that would facilitate that same type of connection between colleagues and make remote work a little less lonely and socially isolated. We were also inspired by our own recent online term, finding that we had been inspired and motivated when our friends held us accountable through tools like shared Google Calendars and Notion workspaces.

One of the challenges we'd like to enter for the hackathon, the 'RBC: Most Innovative Solution' for addressing a pain point associated with working remotely in an innovative way, truly captured the issue we were trying to solve. Therefore, we decided to develop aibo, a centralized application that helps those working remotely stay connected, stay accountable, and maintain relationships with their co-workers, all of which improve a worker's mental health (which in turn has a direct positive effect on their productivity).

## What it does
Aibo, meaning "buddy" in Japanese, is a suite of features focused on increasing the productivity and mental wellness of employees. We focused on features that enable genuine connections in the workplace and help to motivate employees.
First and foremost, Aibo uses a matching algorithm to pair compatible employees together, focusing on career goals, interests, roles, and time spent at the company, following the completion of a quick survey. These matchings occur multiple times over a customized timeframe selected by the company's host (likely the People Operations team) to ensure that employees receive a wide range of experiences in this process. Once you have been matched with a partner, you are assigned weekly meet-ups with your partner to build that connection. Using Aibo, you can video call with your partner and create a to-do list together; by developing this list together, you can bond over common tasks despite potentially having seemingly very different roles. Partners would have two meetings a day: one in the morning, where they go over to-do lists and goals for the day, and one in the evening, in order to track progress over the course of that day and the tasks that need to be transferred over to the following day.

## How We built it
This application was built with React, JavaScript, and HTML/CSS on the front-end, along with Node.js and Express on the back-end. We used the Twilio chat room API along with Autocode to store our server endpoints and enable a Slack bot notification that POSTs a message in your specific buddy Slack channel when your buddy joins the video calling room. In total, we used **4 APIs/tools** for our project:

* Twilio chat room API
* Autocode API
* Slack API for the Slack bots
* Microsoft Azure to work on the machine learning algorithm

When we were creating our buddy app, we wanted to find an effective way to match partners together. After looking over a variety of algorithms, we decided on the K-means clustering algorithm. This algorithm is simple in its ability to group similar data points together and discover underlying patterns, looking for a set number of clusters within the data set. This was my first time working with machine learning, but luckily, through Microsoft Azure, I was able to create a working training and inference pipeline. The dataset marked the user's role and preferences and created n/2 clusters, where n is the number of people searching for a match. This API was then deployed and tested on a web server. Although we weren't able to actively test this API on incoming data from the back-end, this is something that we are looking forward to implementing in the future. Working with ML was mainly trial and error, as we had to experiment with a variety of algorithms to find the optimal one for our purposes.

Upon working with Azure for a couple of hours, we decided to pivot towards leveraging another clustering algorithm to group employees together based on their answers to the form they fill out when they first sign up on the aibo website. We looked into PuLP, a Python LP modeler, and then looked into hierarchical clustering. This seemed similar to our initial approach with Azure, and after looking into the advantages of this algorithm over others for our purpose, we decided to choose it for clustering the form responders. Some pros of hierarchical clustering include:

1. There is no need to specify the number of clusters required; the algorithm determines this for us, which is useful as it automates sorting through the data to find similarities in the answers.
2. Hierarchical clustering was quite easy to implement in a Spyder notebook.
3. The dendrogram produced was very intuitive and helped me understand the data in a holistic way.
The type of hierarchical clustering used was agglomerative clustering, or AGNES. It's known as a bottom-up algorithm, as it starts from singleton clusters; pairs of clusters are then successively merged until everything has been merged into one big cluster containing all objects. In order to decide which clusters should be combined, we need methods for measuring the similarity between objects. I used Euclidean distance to calculate this (dis)similarity information.

This project was designed solely using Figma, with the illustrations and the product itself created there. These designs required hours of deliberation and research to determine the customer requirements and engineering specifications, and to develop a product that is accessible and could be used by people in all industries.

In terms of determining which features we wanted to include in the web application, we carefully read through the requirements for each of the challenges we wanted to compete in and decided to create an application that satisfied all of them. After presenting our original idea to a mentor at RBC, we learned more about remote work at RBC; having not yet completed an online internship ourselves, we learned about the pain points and problems faced by online workers, such as:

1. Isolation
2. Lack of feedback

From there, we were able to select the features to integrate, including the Task Tracker, Video Chat, Dashboard, and Matching Algorithm, which are explained in further detail later in this post.

Technical implementation for Autocode: Using Autocode, we were able to easily and successfully link popular APIs like Slack and Twilio to ensure the productivity and functionality of our app. The Autocode source code is linked here: <https://autocode.com/src/mathurahravigulan/remotework/>

**Creating the Slack bot**

```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request events
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params)
  // When Twilio reports that a video room was created, ping the buddy channel
  if (context.params.StatusCallbackEvent === 'room-created') {
    await lib.slack.channels['@0.7.2'].messages.create({
      channel: `#buddychannel`,
      text: `Hey! Your buddy started a meeting! Hop on in: https://aibo.netlify.app/ and enter the room code MathurahxAyla`
    });
  }
  let result = {};
  result.message = `Welcome to Autocode! 😊`;
  return result;
};
```

**Connecting Twilio to Autocode**

```
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const twilio = require('twilio');

const AccessToken = twilio.jwt.AccessToken;
const { VideoGrant } = AccessToken;

// Build a bare access token from our Twilio credentials
const generateToken = () => {
  return new AccessToken(
    process.env.TWILIO_ACCOUNT_SID,
    process.env.TWILIO_API_KEY,
    process.env.TWILIO_API_SECRET
  );
};

// Grant access to a specific video room (or any room if none is given)
const videoToken = (identity, room) => {
  let videoGrant;
  if (typeof room !== 'undefined') {
    videoGrant = new VideoGrant({ room });
  } else {
    videoGrant = new VideoGrant();
  }
  const token = generateToken();
  token.addGrant(videoGrant);
  token.identity = identity;
  return token;
};

/**
* An HTTP endpoint that acts as a webhook for HTTP(S) request events
* @returns {object} result Your return value
*/
module.exports = async (context) => {
  console.log(context.params)
  const identity = context.params.identity;
  const room = context.params.room;
  const token = videoToken(identity, room);
  return {
    token: token.toJwt()
  }
};
```

From the product design perspective, it is possible to explain certain design choices: <https://www.figma.com/file/aycIKXUfI0CvJAwQY2akLC/Hack-the-6ix-Project?node-id=42%3A1>

1. As shown in the prototype, the user has full independence to move through the designs as one would on a typical website, which supports the non-sequential flow of the upper navigation bar, as each feature does not need to be viewed in a specific order.
2. As Slack is a common productivity tool in remote work and we were participating in the Autocode challenge, we chose Slack as the alerting channel: sending text messages to a phone could be expensive and could distract the user and break their workflow, which is why Slack is integrated throughout the site.
3. The to-do list shared between the pairing has been designed in a simple and dynamic way that allows both users to work together (building a relationship) to create a list of common tasks, and to duplicate this list to their individual workspace to add tasks that could not be shared with the other (such as confidential information within the company).

In terms of the overall design decisions, I made an effort to create each illustration by hand, simply using Figma and the trackpad on my laptop! Potentially a non-optimal way of doing so, but this allowed us to be very creative in our designs and bring individuality and innovation to them. The website itself relies on consistency in terms of colours, layouts, buttons, and more; by developing these components to be used throughout the site, we've built a modern and coherent website.

## Challenges We ran into
Some challenges that we ran into were:

* Using data science and machine learning for the very first time ever! We were definitely overwhelmed by the different types of algorithms out there, but we persevered and created something amazing.
* React was difficult for most of us to use at the beginning, as only one of our team members had experience with it. But by the end, we all felt a little more confident with this tech stack and front-end development.
* Lack of time: there were a ton of features that we were interested in (like user authentication and a Google Calendar integration), but for the sake of time we had to abandon those and focus on the more pressing features that were integral to our vision for this hack. These, however, are features we hope to complete in the future.
We learned how to successfully scope a project and deliver on the technical implementation.

## Accomplishments that We're proud of
* Created a fully functional end-to-end full-stack application incorporating both the front-end and back-end to enable the to-do lists and the interactive video chat between the two participants. I'm glad I discovered Autocode, which made this process simpler (shoutout to Jacob Lee, mentor from Autocode, for the guidance).
* Solving an important problem that affects an extremely large number of individuals. According to investmentexecutive.com, StatsCan reported that five million workers shifted to home working arrangements in late March. Alongside the 1.8 million employees who already work from home, the combined home-bound employee population represents 39.1% of workers. <https://www.investmentexecutive.com/news/research-and-markets/statscan-reports-numbers-on-working-from-home/>
* From doing user research we learned that people can feel isolated when working from home and miss the social interaction and accountability of a desk buddy. We're solving two problems in one, tackling social isolation and improving worker mental health while also increasing productivity, as their buddy will keep them accountable!
* Creating a working matching algorithm for the first time under a time crunch, and learning more about Microsoft Azure's machine learning capabilities.
* Creating all of our icons/illustrations from scratch using Figma!

## What We learned
* How to create and trigger Slack bots from React
* How to have a live video chat on a web application using Twilio and React hooks
* How to use a hierarchical clustering algorithm (an agglomerative clustering algorithm) to create matches based on inputted criteria (see the sketch below)
* How to work remotely in a virtual hackathon, and which tools help us work remotely!

## What's next for aibo
* We're looking to improve our pairing algorithm. We learned that 36 hours is not enough time to create a new Tinder algorithm, and that over time these pairings can be improved and perfected.
* We're looking to code more screens, add user authentication to the mix, and integrate more test cases into the designs rather than using Figma prototyping to prompt the user.
* It is important to consider the security of the data as well; not all teams can discuss tasks at length due to confidentiality. That is why we encourage users to create a simple to-do list with their partner during their meeting and use their best judgement to keep it vague. In the future, we hope to incorporate machine learning that knows whether the user's project is under NDA and, if so, provides warnings for sensitive information as the user types.
* Add a dashboard! As can be seen in the designs, we'd like to integrate a dashboard per user that pulls data from different components of the website, such as your match information and progress on your task tracker/to-do lists. This feature could be highly effective for optimizing productivity, as the user simply has to open one page to get a high-level view of these two details.
* Create our own Slack bot to deliver individualized kudos to a co-worker, and pull this data onto a kudos board on the website so all employees can see how their coworkers are being recognized for their hard work, which can act as a motivator for all employees.
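A minimal sketch of the agglomerative (AGNES) step the write-up describes, using SciPy with Euclidean distance; Ward linkage and the survey encoding are assumptions, since the real feature set isn't shown.

```
# Minimal AGNES sketch with SciPy; feature encoding and linkage are assumed.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

# One row per employee: numerically encoded survey answers (role, goals, ...)
answers = np.array([
    [1, 3, 0, 2],
    [1, 2, 0, 2],
    [4, 0, 1, 3],
    [4, 1, 1, 3],
])

Z = linkage(answers, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
print(labels)  # e.g. [1 1 2 2] -> pair employees sharing a label
# dendrogram(Z) would draw the intuitive tree the write-up mentions
```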
winning
This app lets the user take photos of animals and determines whether the photos are pleasing enough to show people the animals' cuteness.
## Inspiration
We took inspiration from the multitude of apps that help connect missing people with those searching for their loved ones and others affected by natural disasters, especially flooding. We wanted to design a product that not only helps locate those individuals, but also rescues those in danger. By combining these services, the process of recovering after natural disasters becomes streamlined and much more efficient than other solutions.

## What it does
Spotted uses a drone to capture and send real-time images of flooded areas. Spotted then extracts human shapes from these images, maps the location of each individual, and assigns each victim a volunteer so that everyone in need of help is covered. Volunteers can see the locations of victims in real time through the mobile or web app and are provided with the best routes for the recovery effort.

## How we built it
The backbone of both our mobile and web applications is HERE.com's intelligent mapping API. The two APIs that we used were the Interactive Maps API, to provide a forward-facing client for volunteers to understand how an area is affected by flooding, and the Routing API, to connect volunteers to those in need via the most efficient route possible. We also used machine learning and image recognition to identify victims and where they are in relation to the drone. The app was written in Java, and the mobile site was written with HTML, JS, and CSS.

## Challenges we ran into
All of us had only a little experience with web development, so we had to learn a lot, because we wanted to implement a web app that was similar to the mobile app.

## Accomplishments that we're proud of
We are most proud that our app collects and stores data that is available for flood research, and provides real-time assignments to volunteers to ensure everyone is covered in the shortest time.

## What we learned
We learned a great deal about integrating different technologies, including Xcode. We also learned a lot about web development and the intertwining of different languages and technologies like HTML, CSS, and JavaScript.

## What's next for Spotted
We think the future of Spotted is going to be bright! It is tremendously helpful for its users, and at the same time, the program improves its own functionality as the available data increases. We might implement a machine learning feature to better utilize the data and predict the situation in target areas. What's more, we believe the accuracy of this prediction function will grow as the data size increases. Another important feature is that we will be developing optimization algorithms to provide, in real time, the most efficient assignments for the volunteers (one standard approach is sketched below). Other future development might involve working with specific charity groups and research groups on specific locations outside the US.
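One standard way to implement the "most efficient assignment" step promised above is the Hungarian algorithm; this sketch uses straight-line distances rather than HERE route times, so treat it as illustrative.

```
# Volunteer-to-victim assignment via the Hungarian algorithm (SciPy).
# Straight-line cost here, not real route times; data shapes are assumed.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(volunteers, victims):
    """Both arguments: arrays of shape (n, 2) with (lat, lon) rows."""
    cost = np.linalg.norm(volunteers[:, None, :] - victims[None, :, :], axis=2)
    v_idx, p_idx = linear_sum_assignment(cost)  # minimises total distance
    return list(zip(v_idx, p_idx))  # (volunteer i -> victim j) pairs
```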
## Inspiration
As certified pet owners, we understand that our pets are part of our family. Our companions deserve the best care and attention available. We envisioned the goal of simplifying and modernizing pet care, all in one convenient and elegant app. Our idea began with our longing for our pets: being international students, we constantly miss our pets and wish to see them. We figured that a pet livestream would allow us to check in on our pals, and from there pet boarding and our many other services arose.

## What it does
Playground offers readily available and affordable pet care. Our services include:

Pet Grooming: Flexible grooming services with pet-friendly products.
Pet Walking: Real-time tracking of your pet as it is being walked by our friendly staff.
Pet Boarding: In-home and out-of-home live-streamed pet-sitting services.
Pet Pals™: Adoption services, meet-and-greet areas, and pet-friendly events.
PetU™: Top-tier training with positive reinforcement for all pets.
Pet Protect™: Life insurance and lost-pet recovery.
Pet Supplies: Premium-quality, chew-resistant toys and training aids.

## How we built it
First, we carefully created our logo using Adobe Illustrator. After scrapping several designs, we settled on a final product. Then, we designed the interface using Figma, taking our time to ensure an appealing design. We programmed the front end with Flutter in Visual Studio Code. Finally, we used OpenAI's GPT-3.5 API to implement a personalized chat system that suggests potential services and products to our users.

## Challenges we ran into
We ran into several bugs when making the app and used our debugging skills from previous CS work to eventually solve them. One of the bugs involved an image overflow when coding the front end: our icons were too large for the containers we used, but with adjustments to the containers' dimensions and some cropping, we managed to solve this issue.

## Accomplishments that we're proud of
We're proud of our resilience when encountering bugs in Flutter. Despite not being as experienced with this framework as we are with others, we were able to identify and solve the problems we faced. Furthermore, we're proud of our effort in making our logo: we are not the best artists, but the time we spent on the design paid off. We feel that our logo reflects the quality and niche of our app.

## What we learned
We learned that programming the front end is as important, if not more important, than the back end of an app. Without a user-friendly interface no app could function, seeing as customer retention would be minimal. Our approachable interface allows users of all levels of digital literacy to get the best care for their beloved pets.

## What's next for Playground
After polishing the app, we plan on launching it in Peru and gathering as much feedback as we can. Then we plan to implement our users' suggestions and fix any issues that arise. After finishing the new and improved version of Playground, we plan to launch internationally and bring the best care to all our pets. More importantly, 10% of our earnings will go to animal rescue organizations!
winning
## Inspiration
My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor who didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals.

## What it does
Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about its clients and financial assets, as well as past interactions, to create a deep and objective measure of interaction quality and maximize it through optimal matches.

## How we built it
The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training, which makes analyses more holistic and accurate. Instead of going with a single classification solution or neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category.

## Challenges we ran into
Our group member crucial to building a front-end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, so the software does not yet fully work. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as rating clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate (a sketch of this idea follows below). The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea.

## Accomplishments that we're proud of
Learning a whole new machine learning framework using SageMaker, crafting custom, objective algorithms for measuring interaction quality, and fully utilizing past interaction data during training through an innovative approach to categorical model building.

## What we learned
Coding might not take that long, but making it fully work takes just as much time.

## What's next for Resonance
Finish building the model and possibly try to incubate it.
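The write-up mentions a numerical bonus for clients who invest consistently; here is a toy reading of that idea. The weights and thresholds are assumptions, not Resonance's actual algorithm.

```
# Toy interaction-quality measure; weights and thresholds are assumed.
def interaction_quality(ratings, monthly_deposits, target_rate=500, bonus=0.5):
    base = sum(ratings) / len(ratings)  # mean client-advisor rating
    consistent = all(d >= target_rate for d in monthly_deposits)
    return base + (bonus if consistent else 0.0)

print(interaction_quality([4, 5, 4], [600, 550, 500]))  # -> 4.833...
```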
## Inspiration
We were interested in machine learning and data analytics and decided to pursue a real-world application that could prove to have practical use for society. Many themes of this project were inspired by hip-hop artist Cardi B.

## What it does
Money Moves analyzes data about financial advisors and their attributes and uses unsupervised deep learning algorithms to predict whether certain financial advisors are likely to be beneficial or detrimental to an investor's financial standing.

## How we built it
We partially created a custom deep-learning library in which we built a Self-Organizing Map. The Self-Organizing Map is a neural network that takes data and creates a layer of abstraction, essentially reducing the dimensionality of the data. To make this happen we had to parse several datasets; we used the Beautiful Soup library, pandas, and NumPy to parse the data needed. Once it was parsed, we pre-processed the data to feed it to our neural network (the Self-Organizing Map). After we successfully analyzed the data with the deep learning algorithm, we uploaded the neural network and dataset to our Google server, where we host a Django website. The website shows investors the best possible advisor within their region.

## Challenges we ran into
Due to the nature of this project, we struggled with moving large amounts of data through the internet, cloud computing, and designing a website to display analyzed data, because of the difficulty with WiFi connectivity that many hackers faced at this competition. We mostly overcame this by working late nights and through lots of frustration. We also struggled to find an optimal data structure for storing both raw and output data. We ended up using .csv files organized in a logical manner so that the data is accessible through a simple parser.

## Accomplishments that we're proud of
Successfully parsing the dataset needed for preprocessing and analysis with deep learning, and being able to analyze our data with the Self-Organizing Map neural network. Side note: our team member Mikhail Sorokin placed 3rd in the Yhack Rap Battle.

## What we learned
We learned how to implement a Self-Organizing Map and how to build a good file system and code base with Django. This led us to Google's cloud service, where we host our Django-based website. In order to analyze the data, we had to parse several files and format the data to send through the network.

## What's next for Money Moves
We are looking to expand our Self-Organizing Map to accept data from financial datasets other than stock advisors; this way we can have different models that work together. One idea is to have unsupervised and supervised deep learning systems work in tandem: the unsupervised system finds patterns that would otherwise be challenging to discover, while the supervised algorithm directs it toward a goal that helps investors make the best decision possible for their financial options.
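A stripped-down Self-Organizing Map training step in NumPy, the same idea as the team's custom implementation, though their exact code isn't shown; the grid size and feature count are illustrative.

```
# Minimal SOM training step; map size and feature dimensions are assumed.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((10, 10, 4))  # 10x10 map, 4 advisor features per node

def train_step(x, lr=0.1, radius=2):
    # Best-matching unit: the node whose weights are closest to the sample
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # Pull the BMU's neighbourhood toward the sample
    for i in range(max(0, bi - radius), min(10, bi + radius + 1)):
        for j in range(max(0, bj - radius), min(10, bj + radius + 1)):
            grid[i, j] += lr * (x - grid[i, j])

train_step(rng.random(4))  # one sample, one update
```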
## Inspiration
GET RICH QUICK!

## What it does
Do you want to get rich? Are you tired of the man holding you down? Then WeakLink.Ai is for you! Our app comes equipped with predictive software to suggest the most beneficial stocks to buy based on your preferences. Simply put, a personal stockbroker in your pocket.

## How we built it
The WeakLink.Ai front end is built using the Dash framework for Python. The partnered transactions are performed with the assistance of Standard Library, where our back-end calculation engine uses modern machine learning techniques to decide the best time to buy or sell a specific stock. Confirmation is sent to the user's mobile device via Twilio (see the sketch below). Upon confirmation, the workflow executes the buy or sell transaction. The back-end engine was custom built in Python by one of our engineers.

## Challenges we ran into
It was difficult to scrape the web for precise data in a timely and financially efficient fashion. It was very challenging to integrate Blockstack into a full Python environment. The front-end design was reformatted several times. There were some learning curves adjusting to never-before-seen APIs, and finding financially efficient solutions for some of them.

## Accomplishments that we're proud of
Despite the various challenges, we are proud of our project. The front end was more visually appealing than anticipated. The transition from back-end calculations to visual inspection was relatively seamless. This was our first time working with each other and we had very good synergy; we were able to divide up the work and support one another along the way, each taking part in every aspect of the project.

## What we learned
The various APIs available, as well as some of their limitations. We discovered that an open-source API is often more helpful than a closed-source black box. We also learned a lot about data security via Blockstack. Lastly, we learned about various ways to interpret and analyze stocks in a quantitative fashion.

## What's next for WeakLink.Ai
There is a lot of work left for us. The most immediate priority is to set up trend analysis based on the user's historical data, followed by more customization options: a place for users to describe their goals so that our machine learning algorithm can take that information into account when recommending actions in their best interest.
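The Twilio confirmation leg can be sketched with the real twilio-python client; the message text and environment-variable names are illustrative, not the team's actual code.

```
# Sending the buy/sell confirmation SMS via twilio-python; the variable
# names and message wording are assumptions.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def request_confirmation(user_phone, symbol, action):
    client.messages.create(
        to=user_phone,
        from_=os.environ["TWILIO_FROM_NUMBER"],
        body=f"WeakLink.Ai suggests you {action} {symbol}. Reply YES to execute.",
    )
```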
winning
## Inspiration
We are fitness lovers.

## What it does
The premise of our app is that users can set daily fitness goals that last however long they want. They could be long-term goals that last a year or short-term goals that last weeks or even days. For example, a user can commit to walking at least 5,000 steps per day for the next year or finishing at least 400 pushups in the coming month. When a user sets a goal, they also commit a certain amount of money to a deposit with Incentivate. Then, by completing these self-imposed goals, users can earn back the money they deposited. By using Incentivate daily and building up the habit of reaching their goals, users who excel can even earn extra cash from a community leaderboard. This feature gives users the ability to make collaborative wellness goals with others: friends, family, or online communities can come together to make fitness goals and deposit money into a pool. The money is distributed at the end of the goal period to the top-performing members of the leaderboard.

## How we built it

## Challenges we ran into

## Accomplishments that we're proud of

## What we learned

## What's next for Incentivate fitness
We would like to finish our machine vision and pedometer integration to track progress, and improve the UI.
## Inspiration
Always wanted to build a hack for social good, which is why I wanted to come to DeltaHacks! I think this project in particular is really nice because it encourages people to get out of their comfort zone and meet new people, all while living a healthier lifestyle. More than just rewarding the participants, they get to bolster community events and bring happiness in a time when it may be hard to find. Hence, we chose to tackle mainly the Triangle Challenge of 'Exercise for the Community'.

## What it does
The idea is really simple: build a scalable web application that offers community-based portfolios of events. So that's what we did. People can sign up either as members or as community administrators, and both can create events (physical activities such as sports, yoga, aerobics, etc.). The more events someone attends, the more points they accumulate, which they can then redeem for vouchers. Or rather, we thought it would be better to have a community award of some sort, such as recognizing the person with the most points for going out of their way to get the community and the youth engaged.

## How we built it
DeltaFit (repo name: FitCommunity) is built with Python 3.7.1 on a Django 2.7.1 MVC architecture, with the backend database built on PostgreSQL 11.1! The front end is all custom HTML/CSS/JavaScript.

## Challenges we ran into
A critical challenge was incorporating the Fitbit SDK into our idea. Our original plan was to use the SDK to track each individual's and the overall community's health trends, and provide reasonable analytics for better event planning and personal encouragement. However, the Fitbit we had was not compatible with web applications, so we had to move forward from there. We then figured out a way to use the CSV files that Fitbit allows its users to download and combine them with the power of Plotly to generate very beautiful and accurate analytics on pillars of health such as sleep, steps, and calories burnt (a sketch of this pipeline follows below). All it required was a bit more user-side responsibility, and clever use of HTML to get the graphs into the personal accounts of our users. However, due to the late start, we were not able to merge completely before the submission of this Devpost, but please feel free to check out the only other branch on our repo, where you can see what our prospects were :)

## Accomplishments that we're proud of
As mentioned above, we are really proud of the Plotly integration despite a faulty Fitbit, and we are also proud of developing a relatively robust Django platform in just about 21 hours. Also proud that we managed to put aside our tired eyes and keep on hacking for social change. It's been a really fun time, and we are really glad to have participated.

## What we learned
* No matter how long 24 hours seems, it really is not that long.
* Communication with your team is very important, and so is proper delegation of tasks.
* No matter how bleak a bug may seem, there's always a way out of it or around it!

## What's next for FitCommunity
Hopefully, we can eventually find a way for seamless and proper Fitbit integration. We want to take away as much of the user-side responsibility as possible to ensure that hassles are kept to a minimum. We also want to open-source this software and potentially exhibit this idea to other communities around the world who too can benefit from a healthy dose of socializing... and sweating ;) !

Thank you for your time, and for your consideration!

Best,
Gaurav Karna
Abijith Mani
Rushil Malik
Maariz Almamun :)
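The CSV-to-Plotly pipeline described above might look roughly like this; the column names are assumed from typical Fitbit exports and may differ from the real files.

```
# Sketch of the Fitbit-CSV-to-Plotly pipeline; column names are assumed.
import pandas as pd
import plotly.express as px

df = pd.read_csv("fitbit_export.csv")  # columns assumed: Date, Steps, ...
fig = px.line(df, x="Date", y="Steps", title="Daily steps")
html = fig.to_html(full_html=False)  # embeddable snippet for a Django template
```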
## Inspiration
The inspiration for the application was simple: the truth is that many people, especially young people, simply don't find saving money for their emergency fund interesting. Incrementum aims to change that by adding a gamification element to saving into an emergency fund, so users are not only reminded but incentivized to invest in financial security.

## What it does
Incrementum is a web application that encourages people to support their financial security by gamifying investing money into an emergency fund! The idea is simple: when signing up for the web application, users set a target goal for their emergency fund along with how much money they have already contributed to it. The Incrementum application then provides the user with weekly tasks to ensure that they are looking into different types of savings accounts and investing regularly into their emergency fund. Additionally, Incrementum provides a virtual dashboard with all the data a user needs to track and review their progress in building their financial security. Here is where the gamification element comes in: whenever the user completes a task, they are awarded their own pixelated plant to place in their own virtual money garden! This garden can be shared with friends, so multiple users can contribute to the same garden, all with their unique plants. To further incentivize users to complete their tasks, each plant generated for the user is completely unique and never repeated. How is this possible? In addition to building the web application, our team also created a machine learning and deep learning generative adversarial network (GAN). This GAN has been trained on hundreds of images of pixelated trees and plants and, through machine learning, is able to output images of unique, never-before-seen pixelated plants for the user's virtual garden! This allows all users to have a completely unique and original money garden, fitted with never-before-seen pixelated plants generated by our machine learning model. This will incentivize users to keep following and accomplishing their weekly tasks as a way to keep collecting more plants for their garden and, in turn, support building their financial security and their emergency fund in a safe and enjoyable way!

## How we built it
To build the web application side of Incrementum, we used React and Bootstrap on the front end and created a Python Flask REST API as the backend. React was used for useful features such as React Router and hooks, while Flask was used to keep the application lightweight. When developing the machine learning model (the GAN), Python, PyTorch, NumPy, and scikit-learn were used to create a model with multiple layers in a neural network that generates the unique, never-before-seen plants. After this, the model was deployed behind another Flask REST API, which the React front end calls for the plants, while the previously mentioned Flask REST API is used by the front end to store user information and financial progress.

## Challenges we ran into
The biggest challenges we ran into were simply learning all the machine learning tools, frameworks, and topics quickly and effectively. Only one member of our team had been exposed to machine learning before the hackathon, and he had never built a model as complex as the one needed for Incrementum.
Therefore, it was a challenge for our team to work together and understand complex topics such as neural networks and how a GAN is created. Furthermore, learning how to use scikit-learn and NumPy proved to be a tough challenge that our team persevered through. Learning such topics in a short amount of time also proved to be a very rewarding experience, however, as our team learned how to delegate and prototype quickly.

## Accomplishments that we're proud of
The biggest accomplishment our team is proud of is developing a very complex and effective machine learning GAN model that creates a unique, never-before-seen, high-quality pixelated plant on every single iteration. The training of the model alone took seven hours, so it was a major accomplishment for the team when the model worked so effectively. Additionally, designing and creating an application that gamifies building an emergency fund was another major accomplishment.

## What we learned
Working on such a technically complex product as Incrementum really showed our team what we are capable of when working together. Many of us had not been exposed to the technologies and topics used in Incrementum; being able to create not only a full-stack web application with a complete React front end and Flask backend, but also a GAN, one of the most complex neural network types in machine learning, allowed our team to learn a great deal about software engineering, planning, and teamwork. Specifically, our team gained newfound competence in developing complex machine learning models and in building an eye-catching, user-friendly front end.

## What's next for Incrementum
The goal of Incrementum moving forward is to further develop the application to handle more investing and saving goals. We would like to add tasks that teach and incentivize students and young people to invest in various securities such as stocks and bonds, and to research savings vehicles such as retirement accounts. Using Incrementum's gamification model, we are certain we can make some of the less interesting elements of building wealth and financial security much more engaging and enjoyable for everyone.
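In the spirit of the GAN described above, here is a bare-bones DCGAN-style generator in PyTorch; the layer sizes are illustrative and produce a small 16x16 sprite, not the team's trained network.

```
# Minimal DCGAN-style generator sketch; layer sizes are assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 3-channel sprite
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(1, 100, 1, 1)  # one random latent vector
plant = Generator()(z)         # -> tensor of shape (1, 3, 16, 16)
```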
losing
## Inspiration
In today's fast-paced world, managing finances effectively has become crucial for achieving financial security and independence. We were inspired to create SpendWisely after realizing how overwhelming it can be for individuals to track expenses, manage savings, and stay on top of their financial goals in one place. Our aim is to provide a user-friendly, customizable dashboard that not only displays essential financial information but also helps users make informed decisions by offering financial news and data in a simple yet powerful interface.

## What it does
SpendWisely is a personal finance dashboard that helps users manage their budgets, track expenses, set savings goals, and monitor their overall financial health.

## Key features:
* Account Balance Overview: Displays the current status of checking and savings accounts.
* Expense Breakdown: Engaging data visualizations that categorize user expenses like groceries, rent, utilities, entertainment, and more.
* Financial Wellness Report: Scores a user's transaction history with our statistical algorithm and provides a numerical score and a letter grade.
* Financial News: Provides up-to-date, scrollable news cards displaying the latest trends and developments in finance.
* Recent Transactions: Displays a quick list of recent financial transactions.
* Budget Overview: Monitors user budgets and expenses, offering a quick snapshot of how much is spent vs. the allocated budget.
* Savings Goal Tracker: Allows users to set and track their savings goals, showing progress with a dynamic progress bar.

## How we built it
We built SpendWisely using a combination of technologies:
* Defang: Deployed the project to the cloud for seamless DevOps integration.
* PropelAuth: Powered authorization and enhanced data privacy.
* HTML/CSS: Structured and styled the user interface to make it clean and intuitive.
* Bootstrap: Used Bootstrap components to create a responsive layout with a modern, polished look, including the carousel for financial news and cards for budget/goal tracking.
* JavaScript: Implemented interactivity, including fetching and displaying financial news from NewsAPI.
* Chart.js: Used to visualize user data through dynamic charts (line and pie charts) for expenses and budgeting.
* D3: Created a Sankey diagram to visualize spending and the user's financial flow.
* NewsAPI: Integrated the API to provide users with relevant and updated financial news.

## Challenges we ran into
We encountered several challenges along the way:
* UI Proportions: Ensuring the charts, cards, and other elements were proportionate and visually balanced on different screen sizes was tricky. The pie chart and line chart needed resizing and repositioning to ensure they were both visually appealing and functional.
* News Carousel: Initially, the carousel displayed only one news card at a time. We had to modify the structure to display 4 cards at once and allow users to scroll between them while maintaining a responsive design.
* API Integration: Fetching and correctly displaying dynamic news data from NewsAPI required troubleshooting, particularly when ensuring images and text aligned properly in the news cards.

## Accomplishments that we're proud of
We are proud of creating an interactive, visually appealing financial dashboard that is highly functional and user-centric. Key accomplishments include:
* Successfully integrating dynamic news content via NewsAPI.
* Building a fully responsive design that adapts seamlessly to different screen sizes.
* Implementing visual tools like charts to enhance user understanding of their finances. * Providing users with a simplified yet robust way to track savings goals and expenses. ## What we learned Throughout this project, we learned: * The importance of responsive design and how to use CSS and Bootstrap efficiently to maintain consistency across devices. * How to integrate third-party APIs (like NewsAPI) to provide real-time data and enhance the user experience. * How to work with Chart.js to dynamically visualize data and give users a clearer view of their spending habits and financial goals. * The value of user-centered design in financial applications, ensuring both functionality and ease of use. ## What's next for SpendWisely We have many exciting ideas for the future of SpendWisely: * User Authentication: Implement user accounts to allow for personalized dashboards and data persistence. * Advanced Analytics: Add features like trend analysis and spending predictions based on historical data. * Mobile App: Expand SpendWisely into a mobile app to allow users to manage their finances on the go. * More Integrations: Integrate with banking APIs to automatically update transactions and balances in real-time. * Goal Recommendation System: Suggest personalized financial goals based on user habits and spending history. We’re excited to continue developing SpendWisely into a comprehensive financial management tool.
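The Financial Wellness Report's statistical algorithm isn't published, so this is only one plausible reading of a score-plus-letter-grade scheme; the weights and cutoffs are assumptions.

```
# Toy wellness score: weights, the 20% savings-rate target, and the grade
# cutoffs are all assumptions, not SpendWisely's real algorithm.
def wellness_grade(income, expenses, savings_rate):
    spend_ratio = min(expenses / income, 1.0) if income else 1.0
    # Treat a 20% savings rate as "full marks" on the savings half of the score
    score = round(50 * (1 - spend_ratio) + 50 * min(savings_rate / 0.20, 1.0))
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return score, letter
    return score, "F"

print(wellness_grade(4000, 2800, 0.25))  # -> (65, 'D')
```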
## Inspiration
At this stage of our lives, a lot of students haven't yet developed the financial discipline to save money and tend to be wasteful with their spending. With this app, we hope to design an interface that focuses on minimalism. The app is easy to use and provides users with a visual breakdown of where their money is going to and coming from. This gives users a better idea of what their day-to-day spending habits look like and helps them develop the money-saving skills that will be beneficial in the future.

## What it does
BreadBook enables users to input their expenses and income and categorize them chronologically, from daily, weekly, and monthly to yearly perspectives. BreadBook also helps you visualize these finances across different time periods and assists you in budgeting properly throughout them.

## How we built it
This project was built using a simple web stack of Angular, Node.js, and various Node libraries and packages. The back end is a simple REST API running on a Node.js Express server that handles requests and transmits data to the front end. Our front end was built using Angular and a few visual-effects packages such as Chart.js.

## Accomplishments that we're proud of
Implementing various libraries for Angular and Node greatly helped us better understand our weaknesses and strengths as team members, and expanded our knowledge of these technologies. Implementing Chart.js to graphically show our data was a huge achievement given our limited experience with Angular modules.

## What we learned
Throughout the two-day development process of our application, we all gained experience with Angular and what it allows us to do in the creation of a web application. As a result, we all definitely became more comfortable with this framework, and with web development overall. Our team decided to focus on the app's functionality right off the bat, as we all saw the potential and usefulness of our project idea and believed it should be our primary focus in the app's development. As things progressed, we began to work on a cleaner UI and the presentation aspects of the app as well, which was an entirely different realm of development. As a result, we all developed a better understanding of what to prioritize during development when time is limited, as well as the importance of deciding whether or not to implement certain ideas based on their effort, required work, and value to the project. Finally, one of the greatest parts of participating in this event was the collaboration. We can definitely all say we had an amazing experience simply getting together, being creative, and working as a group. This is especially different for us because, during this event, we created this project not as a school requirement but out of our own interests. It is when we work on projects like this that we are reminded of why we enjoy programming and the process of developing our ideas into something we can all use.

## What's next for BreadBook
The current state of BreadBook tracks all the day-to-day and recurring purchases that the user has made across daily, monthly, or annual time periods. In the future, we would like to implement ways to identify and cut out unneeded spending. We would give estimates of how much money could be saved daily, monthly, or annually if this spending were reduced.
We would also like to add a monthly spending plan that allows you to allocate different amounts of money to different spending categories. When the spending limit of one or more of these categories is approached, the user receives a warning so they realize they are near their limit (a sketch of this check follows below).
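A sketch of the category-limit warning described above; the 90% threshold and data shapes are assumptions, not a BreadBook spec.

```
# Illustrative category-limit check; the 90% threshold is an assumption.
def budget_warnings(spent, limits, threshold=0.9):
    warnings = []
    for category, limit in limits.items():
        used = spent.get(category, 0) / limit
        if used >= threshold:
            warnings.append(f"{category}: {used:.0%} of your ${limit} limit used")
    return warnings

print(budget_warnings({"Groceries": 460}, {"Groceries": 500}))
# ['Groceries: 92% of your $500 limit used']
```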
## Inspiration
We wanted to make financial literacy part of everyday life, while bringing in futuristic applications such as augmented reality to really motivate people to learn about finance and business every day. We were looking for a fintech solution that doesn't make financial information accessible only to bankers and the investment community, but also to the young and curious, who can learn in an interesting way based on the products they use every day.

## What it does
Our mobile app looks at company logos, identifies the company, and grabs the company's financial information, recent news, and financial statements, displaying the data in an augmented reality dashboard. Furthermore, we include speech recognition to help those unfamiliar with financial jargon to better save and invest.

## How we built it
Built using the Wikitude SDK, which handles augmented reality for mobile applications, with a mix of financial data APIs plus Highcharts and other charting/data-visualization libraries for the dashboard.

## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds, and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something none of us had prior experience with, which made it harder.

## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what, to build something that we believe is truly cool and fun to use.

## What we learned
Lots of things about augmented reality, graphics, and Android mobile app development.

## What's next for ARnance
Potential to build more charts, financials, and better speech/chatbot abilities into our application. There is also a direction toward more interactivity, using hands to play around with our dashboard once we figure that part out.
losing
## Inspiration Our project was conceived out of the need to streamline the outdated process of updating IoT devices. Historically, this was an unmanageable task that required manual connections to each device, a daunting challenge when deploying at scale. Moreover, it often incurs significant expenses, particularly when relying on services like AWS IoT Core. ## What it does UpdateMy.Tech addresses these issues head-on. Gone are the days of laborious manual updates and high Amazon costs. With our system, you can effortlessly upload your firmware and your IoT device will flash itself with zero downtime. ## How we built it To realize this vision, we used Taipy for the front and back end, MongoDB Atlas for storing binary images, Google Cloud to host the VM and API, Flask to communicate with Mongo and the IoT devices, and ESP-IDF to build firmware for the ESP32, all to enable OTA (over-the-air) updates: the underlying technology that enables automatic updates, eliminating the need for manual intervention. ## Accomplishments that we're proud of Our proudest achievement is creating a user-friendly and cost-effective IoT device management system that liberates users from the complexities and expenses of previous methods, including reliance on costly services like AWS IoT Core. Technology should be accessible to all, and this project embodies that ethos. ## What we learned This project served as a profound learning experience for our team. We gained insights into IoT device management, cloud technologies, and user-centric interface design. Additionally, we honed our expertise in firmware development, skills that will prove invaluable in future endeavours. ## What's next for UpdateMy.Tech Our goal is to reduce the barrier to entry for hardware hacks by simplifying IoT device management. The future of "UpdateMy.Tech" is to enhance and broaden our solution's capabilities. This includes adding more features on top of updates and extending compatibility to a broader array of IoT devices.
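To make the flow above concrete, here is a minimal sketch of what the firmware-serving side might look like in Flask with MongoDB; the route, collection layout, and version header are our own illustrative assumptions, not the project's actual code:

```python
# Hypothetical OTA endpoint: the ESP32 polls it, compares the version
# header against its running firmware, and flashes itself if newer.
from flask import Flask, Response, abort
from pymongo import MongoClient, DESCENDING

app = Flask(__name__)
# Assumed MongoDB Atlas collection: one document per uploaded firmware,
# holding a version string and the raw binary image.
firmware = MongoClient("mongodb+srv://...")["ota"]["firmware"]

@app.route("/firmware/latest")
def latest_firmware():
    doc = firmware.find_one(sort=[("version", DESCENDING)])  # naive string sort
    if doc is None:
        abort(404)
    return Response(
        doc["image"],
        mimetype="application/octet-stream",
        headers={"X-Firmware-Version": doc["version"]},
    )
```

On the device side, ESP-IDF's built-in OTA support can then download the binary over the network and reboot into the new image with no manual intervention.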
## Inspiration The worst part about getting up in the morning is making coffee while you are half-asleep and wishing you were still in bed. This made me ask: why can't I have coffee already waiting for me right when I get out of bed? So I developed an idea: why don't I modify a coffee maker to have its own Twitter account? That way you can tweet at the coffee maker from your bed, and you will have hot coffee ready as soon as you get up! ## What it does The Electric Bean Coffee is an **IoT coffee maker** that connects to WiFi and has its own Twitter account. Whenever it is tweeted at to make coffee, it begins brewing right away. ## How I built it To build this project we used several hardware and software components. To monitor the tweets we used the **Stdlib Twitter API**, which is accessed using **Python**. As for hardware, we used a **Raspberry Pi**, which runs our Python code and controls an **Arduino**. We used a **DigiKey** servo to control the modded coffee machine. ## Challenges I ran into We were all new to using **Arduino**. This proved challenging because we had to overcome many challenges getting the software to play well with the hardware. ## Accomplishments that I'm proud of Being able to complete our first attempt at a hardware hack and venture into the expanding world of **IoT**. ## What I learned A lot about hardware, and about extracting JSON objects in Python using **Stdlib**. ## What's next for Electric Beans -- IoT Twitter Coffee Machine Creating a more visually appealing product.
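As an illustration of the polling loop, here is a rough Python sketch of the Raspberry Pi side; `fetch_latest_mentions()` is a hypothetical stand-in for the Stdlib Twitter API call the project actually used, and the GPIO pin is an assumption:

```python
import time
from gpiozero import Servo  # runs on the Raspberry Pi

servo = Servo(17)  # assumed GPIO pin wired to press the brew button

def fetch_latest_mentions():
    # Hypothetical placeholder for the Stdlib Twitter API request that
    # returns recent tweets mentioning the coffee maker.
    return []

seen = set()
while True:
    for tweet in fetch_latest_mentions():
        if tweet["id"] not in seen and "make coffee" in tweet["text"].lower():
            seen.add(tweet["id"])
            servo.max()   # press the brew button...
            time.sleep(1)
            servo.min()   # ...and release it
    time.sleep(30)  # poll for new tweets every 30 seconds
```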
## Inspiration Toronto is famous for being tied for the second-longest average commute time of any city (96 minutes, both ways combined). People love to complain about the TTC, and many people have legitimate reasons for avoiding public transit. With our app, we hope to change this. Our aim is to change the public's perspective of transit in Toronto by creating a more engaging and connected experience. ## What it does We built an iOS app that transforms the subway experience. We display important information to subway riders, such as ETA and current/next station, as well as information about events and points of interest in Toronto. In addition, we allow people to connect by participating in a local chat and multiplayer games. We have small web servers running on ESP8266 micro-controllers that would be installed in TTC subway cars. These micro-controllers create a LAN (Local Area Network) intranet and allow commuters to connect with each other on the local network using our app. The ESP8266 micro-controllers also connect to the internet when available and can send data to Microsoft Azure. ## How we built it The front end of our app is built using Swift for iOS devices; however, any device can connect to the network, and an Android app is planned for the future. The live chat section was built with JavaScript. The back end is built using C++ on the ESP8266 micro-controller, while a Python script handles the interactions with Azure. The ESP8266 micro-controller runs in both access point (AP) and station (STA) modes, and is fitted with a button that can push data to Azure. ## Challenges we ran into Getting the WebView to render properly in the iOS app was tricky. There was a good amount of tinkering with configuration due to the page being served over HTTP on a local area network (LAN). Our ESP8266 micro-controller is a very nifty device, but such a low-cost device comes with strict development rules. The RAM and flash size were puny, and special care needed to be taken to ensure a stable foundation. This meant only being able to use vanilla JS (no jQuery, too big) and keeping code as optimized as possible. We built the live chat room with XHR and Ajax, as opposed to using a WebSocket, which would be more ideal. ## Accomplishments that we're proud of We are proud of our UI design. We think that our app looks pretty dope! We're also happy to be able to integrate many different features into our project. We had to learn about communication between many different tech layers. We managed to design a live chat room that can handle multiple users at once and run it on a micro-controller with 80KiB of RAM. All the code on the micro-controller was designed to be as lightweight as possible, as we only had 500KB in total flash storage. ## What we learned We learned how to code as lightly as possible within the tight restrictions of the chip. We also learned how to start and deploy on Azure, as well as how to interface between our micro-controller and the cloud. ## What's next for Commutr There is a lot of additional functionality that we can add, things like Presto integration, geolocation, and an emergency alert system. In order to host and serve larger images, we plan to upgrade the ESP8266's measly 500KB of storage with an SD card module that can increase storage into the gigabytes. Using this, we plan to bring fully-fledged WiFi connectivity to Toronto's underground railway.
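The project's firmware is written in C++, but the dual-mode networking idea is easy to see in MicroPython, which also runs on the ESP8266. This is only a sketch with placeholder credentials, not the project's code:

```python
import network

# Station (STA) interface: join an upstream network when available,
# e.g. to push button data up to Azure.
sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect("UPSTREAM_SSID", "password")  # placeholder credentials

# Access-point (AP) interface: the local network commuters' phones
# join inside the subway car, even with no internet in the tunnel.
ap = network.WLAN(network.AP_IF)
ap.active(True)
ap.config(essid="Commutr-Car-1234")  # hypothetical SSID
```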
partial
## Inspiration More than **2 million** people in the United States are affected by diseases such as ALS, brain or spinal cord injuries, cerebral palsy, muscular dystrophy, multiple sclerosis, and numerous other diseases that impair muscle control. Many of these people are confined to their wheelchairs; some may be lucky enough to be able to control their movement using a joystick. However, there are still many who cannot use a joystick, eye tracking systems, or head movement-based systems. Therefore, a brain-controlled wheelchair can solve this issue and provide freedom of movement for individuals with physical disabilities. ## What it does BrainChair is a neurally controlled headpiece that can control the movement of a motorized wheelchair. There is no need to use the attached joystick: simply think of the wheelchair movement and the wheelchair does the rest! ## How we built it The brain-controlled wheelchair allows the user to control a wheelchair solely using an OpenBCI headset. The headset is an electroencephalography (EEG) device that allows us to read brain signal data that comes from neurons firing in our brain. When we think of specific movements we would like to make, the corresponding neurons in our brain fire. We collect this EEG data through the BrainFlow API in Python, which easily allows us to stream, filter, and preprocess the data, and then finally pass it into a classifier. The control signal from the classifier is sent over WiFi to a Raspberry Pi, which controls the movement of the wheelchair. In our case, since we didn’t have a motorized wheelchair on hand, we used an RC car as a replacement. We simply hacked some transistors onto the remote, which connects to the Raspberry Pi. ## Challenges we ran into * Obtaining clean data for training the neural net took some time. We needed to apply signal processing methods to obtain the data. * Finding the RC car was difficult since most stores didn’t have one or were closed. Since the RC car was cheap, its components had to be adapted in order to place hardware pieces. * Working remotely made designing and working together challenging. Each group member worked on independent sections. ## Accomplishments that we're proud of The most rewarding aspect of the software is that all the components, from the OpenBCI headset to the Raspberry Pi, were effectively communicating with each other. ## What we learned One of the most important lessons we learned is how to effectively communicate technical information to each other across our respective disciplines (computer science, mechatronics engineering, mechanical engineering, and electrical engineering). ## What's next for BrainChair To improve BrainChair in future iterations we would like to: optimize the circuitry to use low power so that the battery lasts months instead of hours, and make the OpenBCI headset less visible by camouflaging it under hair or clothing.
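For a sense of what the acquisition step looks like, here is a minimal BrainFlow sketch; the board choice and serial port are assumptions, and the classifier itself is omitted:

```python
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"  # assumed dongle port; varies by machine
board_id = BoardIds.CYTON_BOARD.value  # assuming an OpenBCI Cyton
board = BoardShim(board_id, params)

board.prepare_session()
board.start_stream()
# ... let a window of EEG accumulate while the user imagines a movement ...
data = board.get_current_board_data(250)  # most recent ~1 s at 250 Hz
board.stop_stream()
board.release_session()

# Keep only the EEG rows; these samples would be filtered, preprocessed,
# and passed to the classifier, whose output (e.g. "forward", "left")
# is sent over WiFi to the Raspberry Pi.
eeg = data[BoardShim.get_eeg_channels(board_id), :]
```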
## Inspiration Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retention of plasticity in a child's limbs. We implemented interactive games in VR using the Oculus Rift and Leap Motion controllers. ## What it does We designed games that target specific hand/elbow/shoulder gestures and used a Leap Motion controller to track the gestures. Our system improves the motor skills, cognitive abilities, emotional growth and social skills of children affected by cerebral palsy. ## How we built it Our games make use of Leap Motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and could be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus. ## Challenges we ran into The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of Windows. We then spent the first 6 hours re-installing Windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games because it was our first time creating a VR game using Unity, Leap Motion controls, and the Oculus Rift. ## Accomplishments that we're proud of We were proud of our end result because it was our first time creating a VR game with an Oculus Rift, and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that they would be able to help others! ## What we learned This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity. ## What's next for Alternative Physical Treatment We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get testing in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis. ## Business Model Canvas <https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO>
## Inspiration Millions of people around the world are either blind or partially sighted. For those whose vision is impaired but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people may be able to see more clearly. ## What it does We developed an AR headset that processes the view in front of it and displays a high-contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert. ## How we built it OpenCV was used to process the image stream from a webcam mounted on the VR headset; the image is processed with a Canny edge detector to find edges and contours. Further, a BFMatcher is used to find objects that resemble a given image file, which are highlighted if found. ## Challenges we ran into We originally hoped to use an Oculus Rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well! ## Accomplishments that we're proud of Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected. ## What we learned Without the proper environment, your code is useless. ## What's next for EyeSee Commercial solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable, and we hope this project inspires other developers to create projects that help other people. ## Links Feel free to read more about visual impairment, and how to help: <https://w3c.github.io/low-vision-a11y-tf/requirements.html>
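The core of the pipeline is compact enough to sketch; the thresholds here are illustrative and would be tuned per lighting conditions:

```python
import cv2

cap = cv2.VideoCapture(0)  # the headset-mounted webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # high-contrast edge view
    cv2.imshow("EyeSee", edges)       # mirrored to the in-headset display
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```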
winning
## Inspiration More creators are coming online to create entertaining content for fans across the globe. On platforms like Twitch and YouTube, creators have amassed billions of dollars in revenue thanks to loyal fans who return to be part of the experiences they create. Most of these experiences feel transactional, however: Twitch creators mostly generate revenue from donations, subscriptions, and currency like "bits," where Twitch often takes a hefty 50% of the revenue from the transaction. Creators need something new in their toolkit. Fans want to feel like they're part of something. ## Purpose Moments enables creators to instantly turn on livestreams that can be captured as NFTs by live fans at any moment, powered by Livepeer's decentralized video infrastructure network. > "That's a moment." During a stream, there often comes a time when fans want to save a "clip" and share it on social media for others to see. When such a moment happens, the creator can press a button and all fans will receive a non-fungible token in their wallet as proof that they were there for it, stamped with their viewer number during the stream. Fans can rewatch video clips of their saved moments on their Inventory page. ## Description Moments is a decentralized streaming service that allows streamers to save and share their greatest moments with their fans as NFTs. Using Livepeer's decentralized streaming platform, anyone can become a creator. After fans connect their wallet to watch streams, creators can mass-send their viewers tokens of appreciation in the form of NFTs (a short highlight clip from the stream, a unique badge, etc.). Viewers can then build their collection of NFTs through their inventory. Many streamers and content creators have short viral moments that get shared amongst their fanbase. With Moments, a bond is formed through the issuance of exclusive NFTs to the viewers that supported creators at their milestones. An integrated chat offers many emotes for viewers to interact with as well.
## Inspiration The project was inspired by looking at the challenges that artists face when dealing with traditional record labels and distributors. Artists often have to give up ownership of their music, lose creative control, and receive only a small fraction of the revenue generated from streams. Record labels and intermediaries take the bulk of the earnings, leaving the artists with limited financial security. Being a music producer and a DJ myself, I really wanted to make a product with the potential to shake up this entire industry for the better. Music artists spend a lot of time creating high quality music and they deserve to be paid for it much more than they are right now. ## What it does Blockify lets artists harness the power of smart contracts by attaching them to their music while uploading it, automating the process of royalty payments, which is currently very time-consuming. Our primary goal is to remove the record labels and distributors from the industry, since they take a majority of the revenue which the artists generate from their streams. By using a decentralized network to manage royalties and payments, there won't be any disputes regarding missed or delayed payments, and artists will have a clear understanding of how much money they are making from their streams since they will be dealing with the streaming services directly. This would allow artists to have full ownership over their work and receive fair compensation from streams, which is currently far from the reality. ## How we built it Blockchain: We used the Sui blockchain for its scalability and low transaction costs. Smart contracts were written in Move, the programming language of Sui, to automate royalty distribution. Spotify API: We integrated Spotify's API to track streams in real time and trigger royalty payments. Wallet Integration: Sui wallets were integrated to enable direct payments to artists, with real-time updates on royalties as songs are streamed. Frontend: A user-friendly web interface was built using React to allow artists to connect their wallets and track their earnings. The frontend interacts with the smart contracts via the Sui SDK. ## Challenges we ran into The most difficult challenge we faced was smart contract development using the Move language. Unlike more established ecosystems such as Ethereum's, Move is relatively new and specifically designed to handle asset management. Another challenge was connecting the smart wallets in the application and transferring money to the artist whenever a song was streamed, but thankfully the mentors from the Sui team were really helpful and guided us down the right path. ## Accomplishments that we're proud of This was our first time working with blockchain, and my teammate and I are really proud of what we were able to achieve over the two days. We worked on creating smart contracts, and even though getting started was the hardest part, we were able to complete it and learned some great stuff along the way. My teammate had previously worked with React, but I had zero experience with JavaScript, since I mostly work with other languages; still, we did the entire project in Node and React, and I was able to learn a lot of the concepts in such a short time, which I am very proud of. ## What we learned We learned a lot about blockchain technology and how we can apply it to real-world problems.
One of the most significant lessons we learned was how smart contracts can be used to automate complex processes like royalty payments. We saw how blockchain provides an immutable and auditable record of every transaction, ensuring that every stream, payment, and contract interaction is permanently recorded and visible to all parties involved. Learning more and more about this technology every day makes me realize how much potential it holds; it is certainly an area of technology that will shape the future. It is already being used in so many aspects of life, and we are still only scratching the surface. ## What's next for Blockify We plan to add more features, such as NFTs for exclusive content or fan engagement, allowing artists to create new revenue streams beyond streaming. There have been some real-life examples of artists selling NFTs to their fans and earning millions from them, so we would like to tap into that industry as well. Our next step would be to collaborate with other streaming services like Apple Music and eliminate record labels to the best of our abilities.
## Inspiration Productivity is hard to harness, especially at hackathons with many distractions, but a trick we software-developing students found to stay productive while studying was the “Pomodoro Technique”. The laptop is our workstation and could be a source of distraction, so what better place to implement the Pomodoro Timer as a constant reminder? Since our primary audience is going to be aspiring young tech students, we chose to further incentivize them to focus until their “breaks” by rewarding them with a random custom-generated and minted NFT to their name every time they succeed. This unique inspiration provided an interesting way to solve a practical problem while learning about current blockchain technology and implementing it with modern web development tools. ## What it does An innovative, modern “Pomodoro Timer” running in your browser enables users to sign in and link their MetaMask crypto account addresses. Users are incentivized to stay focused with the running Pomodoro Timer because, upon reaching their “break times” undisrupted, our website rewards them with a random custom-generated NFT minted to their name. This Ethereum-based NFT can then be viewed either on OpenSea or on a dashboard of the website, as both store the user’s NFT collection. ## How we built it TimeToken's back-end is built with Django and SQLite, and for our front-end we created a beautiful and modern platform using React and Tailwind to give our users a dynamic webpage. A benefit of using React is that it works smoothly with our Django back-end, making it easy for both our front-end and back-end teams to work together. ## Challenges we ran into We had originally set up the website as a MERN stack (MongoDB/Express.js/React/Node.js); however, while trying to import dependencies for the Verbwire API, to mint our images into NFTs in the users' wallets, we ran into problems. After solving dependency issues, a "git merge" produced many conflicts, and on the way to resolving the conflicts we discovered some difficult compatibility issues between the API SDK and the JS option for our server. At this point we had to pivot our plan, so we decided to implement Verbwire's Python API solution, and it worked out very well. We intended here to just pass the Python script and its functions straight to our front-end, but learned that direct front-end to Python back-end communication is very challenging. It involved Ajax/XML file formatting and solutions heavily lacking in documentation, so we were forced to keep searching for a solution. We realized that we needed an effective way to make a Python and SQLite back-end communicate with front-end JS, and discovered that the Django framework was the perfect fit. So we were forced to quickly learn serialization and the Django framework in order to meet our needs. ## Accomplishments that we're proud of We have accomplished many things during the development of TimeToken that we are very proud of. One of our proudest moments was when we pulled an all-nighter to code and get everything just right. This experience helped us gain a deeper understanding of technologies such as Axios, Django, and React, which helped us to build a more efficient and user-friendly platform. We were able to implement the third-party Verbwire API, which was a great accomplishment, and we were able to understand it and use it effectively.
We also had the opportunity to talk with Verbwire professionals to resolve bugs that we encountered, which allowed us to improve the overall user experience. Another proud accomplishment was being able to mint NFTs and understanding how crypto and blockchains work; this was a great opportunity to learn more about the technology. Finally, we were able to integrate crypto APIs, which allowed us to provide our users with a complete and seamless experience. ## What we learned When we first started working on the back-end, we decided to give MongoDB, Express, and NodeJS a try. At first, it all seemed to be going smoothly, but we soon hit a roadblock with some dependencies and configurations between a third-party API and NodeJS. We talked to our mentor and decided it would be best to switch gears and give the Django framework a try. We learned that it's always good to have some knowledge of different frameworks and languages, so you can pick the best one for the job. Even though we had a little setback with the back-end and we were new to Django, we learned that it's important to keep pushing forward. ## What's next for TimeToken TimeToken has come a long way and we are excited about the future of our application. To ensure that our application continues to be successful, we are focusing on several key areas. Firstly, we recognize that storing NFT images locally is not scalable, so we are working to improve scalability. Secondly, we are making security a top priority and working to improve the security of wallets and crypto-related information to protect our users' data. To enhance user experience, we are also planning to implement a media hosting website, possibly using AWS, to host NFT images. To help users track the value of their NFTs, we are working on implementing an API earnings report with different time spans. Lastly, we are working on adding more unique images to our NFT collection to keep our users engaged and excited.
winning
## Inspiration We wanted to explore new technologies and have those of us comfortable with hardware experience software, and vice versa. We all explored new areas, whether it was software, hardware, or server architecture. We all learned a lot, and solved a great problem too! ## What it does Our project provides a touch screen interface with a doorbell, video camera, and intelligent lock. By using facial recognition we can determine who is at the door and act appropriately. We have the ability to unlock the door, as well as alert via multiple methods who is at the door. The Google Assistant is also able to give information as to recent door unlocks and visitors. ## How we built it The primary driver for the project is a Kotlin app running on the Android Things MX7D board. The board sends images to a Flask (Python) script hosted on a DigitalOcean droplet, which then communicates with both AWS Lambda facial recognition and Actions on Google and returns this information to the board. We also used some custom-cut components for our demo door. ## Challenges we ran into A lot of challenges involved the sheer number of technologies involved, mainly with connecting Android Things to Lambda to Actions on Google. Eventually we decided to use a Flask script to tie it all together. ## Accomplishments that we're proud of We're very proud of this whole thing, how much we learned, and getting something working on Android Things. The Android Things kit is such an incredible piece of hardware and software that we'll definitely be working with again. ## What we learned We all learned a lot. Some of us learned more about hardware, while others learned more about software and backend development. We all explored and mainly worked in areas we weren't necessarily used to. ## What's next for A-door-able There's a lot of room for improvement and improved features: adding more notifications, more touch screen features and UI elements, along with more ways to get information about who is at your door!
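The writeup doesn't spell out what runs inside the Lambda, but assuming it uses Amazon Rekognition, the face comparison step might look roughly like this sketch:

```python
import boto3

rekognition = boto3.client("rekognition")

def is_known_visitor(door_image: bytes, resident_photo: bytes) -> bool:
    """Compare the doorbell snapshot against a stored resident photo."""
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": resident_photo},
        TargetImage={"Bytes": door_image},
        SimilarityThreshold=90,  # assumed threshold; tune for your door
    )
    return len(resp["FaceMatches"]) > 0  # unlock only on a confident match
```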
## Inspiration We wanted to learn about machine learning. There are thousands of sliding doors made by Black & Decker and they're all capable of sending data about the door. With this much data, the natural thing to consider is a machine learning algorithm that can figure out ahead of time when a door is broken, and how it can be fixed. This way, we can use an app to send a technician a notification when a door is predicted to be broken. Since technicians are very expensive for large corporations, something like this can save a lot of time, and money that would otherwise be spent with the technician figuring out if a door is broken, and what's wrong with it. ## What it does DoorHero takes attributes (e.g. motor speed) from sliding doors and determines if there is a problem with the door. If it detects a problem, DoorHero will suggest a fix for the problem. ## How we built it DoorHero uses a TensorFlow classification neural network to determine fixes for doors. Since we didn't have actual sliding doors at the hackathon, we simulated data and fixes. For example, we'd assign a high motor speed to one row of data and label it as a door with a problem with the motor, or we'd assign normal attributes to a row of data and label it as a working door. The server is built using Flask and runs on [Floydhub](https://floydhub.com). It has a TensorFlow neural network that was trained with the simulated data. The data is simulated in an Android app. The app generates the mock data, then sends it to the server. The server evaluates the data based on what it was trained with, adds the new data to its logs and training data, then responds with the fix it has predicted. The Android app takes the response and displays it, along with the mock data it sent. In short, an Android app simulates the opening and closing of a door and generates mock data about the door, which it sends every time the door "opens", to a server using a Flask REST API. The server has a trained TensorFlow neural network, which evaluates the data and responds with either "No Problems" if it finds the data to be normal, or a fix suggestion if it finds that the door has an issue. ## Challenges we ran into The hardest parts were: * Simulating data (with no background in sliding doors, the concept of sliding doors sending data was pretty abstract). * Learning how to use machine learning (turns out this isn't so easy) and implement TensorFlow. * Running TensorFlow on a live server. ## Accomplishments that we're proud of ## What we learned * A lot about modern day sliding doors * The basics of machine learning with TensorFlow * Discovered Floydhub ## What we could have improved on There are several things we could've done (and wanted to do) but either didn't have time or didn't have enough data for, i.e.: * Instead of predicting a fix and returning it, the server could predict a set of potential fixes in order of likelihood, then send them to the technician, who can look into each suggestion and select the suggestion that worked. This way, the neural network could've learned a lot faster over time.
(Currently, it adds the predicted fix to its training data, which would make for bad results.) * Instead of having a fixed set of door "problems", we could have built the app so that in the beginning, when the neural network hasn't learned yet, it asks the technician for input every time they fix the door (so it can learn without the data we simulated, as this is what would have to happen in a real environment). * We could have made a much better interface for the app. * We could have added support for a wider variety of doors (e.g. different models of sliding doors). * We could have had a more secure (encrypted) data transfer method. * We could have had a larger set of attributes for the door. * We could have factored more into decisions (for example, detecting a problem if a door opens but never closes).
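To make the classification approach described in "How we built it" concrete, here is a minimal sketch of that kind of classifier, written in today's Keras style with made-up features and labels standing in for the simulated door data:

```python
import numpy as np
import tensorflow as tf

# Placeholder simulated data: rows of door attributes (e.g. motor speed,
# open/close time), labels indexing a fixed list of suggested fixes.
FIXES = ["No Problems", "replace motor", "lubricate track", "check sensor"]
X = np.random.rand(1000, 4).astype("float32")    # stand-in features
y = np.random.randint(0, len(FIXES), size=1000)  # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(len(FIXES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, verbose=0)

# A new reading from a door is classified into a suggested fix.
reading = np.array([[0.9, 0.2, 0.5, 0.1]], dtype="float32")
print(FIXES[int(model.predict(reading).argmax())])
```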
## Inspiration With the increase in popularity of road trips, determining the most cost-efficient route is important. It usually requires multiple websites and resources to gather all the information needed to calculate the approximate cost of a trip: the gas mileage for the vehicle, distance, fastest route, and gas prices all have to be known. It is an entirely inefficient process, and thus it deters many people from becoming economically aware of their transportation. We wanted to simplify this process and integrate it with something everybody already uses to plan their trips: Google Maps. ## What it does FuelPlan is a Chrome extension that allows the user to learn about the fuel economy of their vehicle and use that information to plan trips on Google Maps. Once the user selects their vehicle, it is saved to their Google account. When they search for directions on Google Maps (with the extension enabled), they will be seamlessly presented with the fuel cost for all of the possible routes of their trip. The gas price is automatically retrieved but can be adjusted by the user. Additionally, they can switch between vehicles to compare their fuel economies/trip costs and make more environmentally conscious decisions. ## How we built it To retrieve fuel economy data for the vehicles, we used an API by **FuelEconomy.gov**. We used **jQuery** in our Chrome extension to simplify the **AJAX** requests, which we used to obtain the **JSON** data from the API. The extension becomes enabled when the user is on **Google Maps**, actively waits for the user to search for directions, then injects a new element into the route info. The form was designed using **HTML/CSS** to have a material, minimalist design. ## Challenges we ran into Throughout the hackathon, there were various challenges we ran into, especially in regard to brainstorming an idea. Thankfully, we were able to work around our challenges and successfully complete a project we are proud of. Prior to coming up with FuelPlan, we came up with many ideas that were either too complex or had already been done. We also encountered challenges building the extension. Making a Chrome extension was new to both of us, so there was a bit of a learning curve. In addition, using Chrome's data storage was a challenge that we faced, as we did not want the user to re-enter values constantly. We were unsure of how to approach this in the beginning; however, after researching we were able to implement this functionality in our Chrome extension. ## Accomplishments that we're proud of We're definitely proud that we were able to successfully develop a project that provides relevant information to the user in a convenient manner. Our favorite part of the extension is how seamlessly it displays the information on Google Maps. ## What we learned Through our experience making this project, we learned how to work under a strict time constraint and create a relevant and applicable project. In addition, we learned how to work together with individuals at different skill levels and create an interdisciplinary application. Through this, we also gained exposure to Chrome extensions, a relatively new concept for us. ## What's next for FuelPlan Going forth, we’d like to continue adding new features to FuelPlan. Under the time constraints, we were not able to use Google Maps' API to determine highway and city distances separately, but we plan to implement a feature that uses the vehicle's separate highway and city mileage, rather than the combined gas mileage.
This would yield a more accurate result. We also plan to make other minor improvements, such as conversion between metric and imperial units. We hope to publish this extension to the Chrome Web Store in the near future!
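The underlying cost estimate itself is simple; it is shown here in Python for clarity (the extension computes it in JavaScript), with made-up example numbers:

```python
def trip_fuel_cost(distance_miles: float, mpg: float,
                   price_per_gallon: float) -> float:
    """Estimated fuel cost for one route using combined gas mileage."""
    gallons_used = distance_miles / mpg
    return gallons_used * price_per_gallon

# e.g. a 300-mile route in a car rated 30 mpg, with gas at $3.50/gallon:
print(f"${trip_fuel_cost(300, 30, 3.50):.2f}")  # -> $35.00
```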
partial
# Flash Computer Vision® ### Computer Vision for the World GitHub: <https://github.com/AidanAbd/MA-3> Try it Out: <http://flash-cv.com> ## Inspiration Over the last century, computers have gained superhuman capabilities in computer vision. Unfortunately, these capabilities are not yet empowering everyday people because building an image classifier is still a fairly complicated task. The easiest tools that currently exist still require a good amount of computing knowledge. A good example is [Google AutoML: Vision](https://cloud.google.com/vision/automl/docs/quickstart), which is regarded as the "simplest" solution and yet requires an extensive knowledge of web skills and some coding ability. We are determined to change that. We were inspired by talking to farmers in Mexico who wanted to identify ready / diseased crops easily without having to train many workers. Despite the technology existing, their walk of life had not lent them the opportunity to use it. We were also inspired by people in developing countries who want access to the frontier of technology but lack the education to unlock it. While we explored one aspect of computer vision, we are interested in giving individuals the power to use all sorts of ML technologies and believe similar frameworks could be set up for natural language processing as well. ## The product: Flash Computer Vision ### Easy to use Image Classification Builder - The Front-end Flash Magic is a website with an extremely simple interface. It has one functionality: it takes in a variable number of image folders and gives back an image classification interface. Once the user uploads the image folders, they simply click the Magic Flash™ button. There is a short training process during which the website displays a progress bar. The website then returns an image classifier and a simplistic interface for using it. The user can use the interface (which is built directly into the website) to upload and classify new images. The user never has to worry about any of the “magic” that goes on in the backend. ### Magic Flash™ - The Backend The front end’s connection to the backend sets up a Train image folder on the server. The name of each folder is the category that the pictures inside of it belong to. The backend takes the folder and transfers it into a CSV file. From this CSV file it creates a [torch.utils.data.Dataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html) class by inheriting Dataset and overriding three of its methods. When the dataset is ready, the data is augmented using a combination of augmentations: CenterCrop, ColorJitter, RandomAffine, RandomRotation, and Normalize. By doing those transformations, we ~10x the amount of data that we have for training. Once the data is in a CSV file and has been augmented, we are ready for training. We import [SqueezeNet](https://pytorch.org/hub/pytorch_vision_squeezenet/) and use transfer learning to adjust it to our categories. What this means is that we erase the last layer of the original net, which was originally trained for ImageNet (1000 categories), and initialize a layer of size equal to the number of categories that the user defined, making sure to accurately match dimensions. We then run back-propagation on the network with all the weights “frozen” in place except for the ones in the last layer. As the model is training, it informs the front-end that progress is being made, allowing us to display a progress bar.
Once the model converges, the final model is saved into a file that the API can easily call for inference (classification) when the user asks for a prediction on new data. ## How we built it The website was built using Node.js and JavaScript, and it has three main functionalities: allowing the user to upload pictures into folders, sending the picture folders to the backend, and making API calls to classify new images once the model is ready. The backend is built in Python and PyTorch. On top of Python we used PyTorch (torch, torch.nn, torch.optim, and its lr_scheduler), NumPy, torchvision (datasets, models, and transforms), and matplotlib, along with the standard time, os, copy, json, and sys modules. ## Accomplishments that we're proud of Going into this, we were not sure if 36 hours were enough to build this product. The proudest moment of this project was the successful testing round at 1am on Sunday. While we had some machine learning experience on our team, none of us had experience with transfer learning or this sort of web application. At the beginning of our project, we sat down, learned about each task, and then drew a diagram of our project. We are especially proud of this step because coming into the coding portion with a clear understanding of the API functionality and delegated tasks saved us a lot of time and helped us integrate the final product. ## Obstacles we overcame and what we learned Machine learning models are often finicky in their training patterns. Because our application targets users with little experience, we had to come up with a robust training pipeline. Designing this pipeline took a lot of thought and a few failures before we converged on a few data augmentation techniques that do the trick. After this hurdle, integrating a deep learning backend with the website interface was quite challenging, as the training pipeline requires very specific labels and file structure. Iterating the website to reflect the rigid protocol without overcomplicating the interface was thus a challenge. We learned a ton over this weekend. Firstly, getting the transfer learning to work was enlightening, as freezing parts of the network and writing a functional training loop for specific layers required diving deep into the PyTorch API. Secondly, the human-design aspect was a really interesting learning opportunity as we had to come up with the right line between abstraction and functionality. Finally, and perhaps most importantly, constantly meeting and syncing code taught us the importance of keeping a team on the same page at all times. ## What's next for Flash Computer Vision ### Application companion + Machine Learning on the Edge We want to build a companion app with the same functionality as the website. The companion app would be even more powerful than the website because it would have the ability to quantize models (compression for ML) and to transfer them into TensorFlow Lite so that **models can be stored and used within the device.** This would especially benefit people in developing countries, where they sometimes cannot depend on having a cellular connection. ### Charge to use We want to make a payment system within the website so that we can scale without worrying about computational cost. We do not want to make a business model out of charging per API call, as we do not believe this will pave a path forward for rapid adoption.
**We want users to own their models; this will be our competitive differentiator.** We intentionally used a smaller neural network to reduce hosting costs and decrease inference time. Once we compress our already small models, this vision can be fully realized as we will not have to host anything, but rather return a mobile application.
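For reference, the transfer-learning step described in "Magic Flash™ - The Backend" boils down to a few lines of PyTorch; this is a minimal sketch of that idea, not our exact training code:

```python
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes: int) -> nn.Module:
    """SqueezeNet with its head swapped for the user's categories."""
    model = models.squeezenet1_1(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False  # keep the ImageNet features frozen
    # SqueezeNet classifies with a 1x1 conv over 512 channels, not a
    # Linear layer, so the new head must match those dimensions.
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model  # back-propagation only updates classifier[1]
```

Back-propagation then runs as usual, but only the new layer's weights receive gradients.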
## Inspiration I like looking at things. I do not enjoy bad-quality videos. I do not enjoy waiting. My CPU is a lazy fool. He just lies there like a drunkard on New Year's Eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or as the kids these days call it, a 'vlog' post. The CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application to enhance resolution while developing our models in such a way that they can effectively run without 10,000 GPUs. ## What it does We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first-world medical resources to developing nations. We turn an unrecognizable figure in a cop's body cam into a human being. We improve video resolution. ## How I built it Wow. So lots of stuff. Web scraping YouTube videos for datasets of 144, 240, 360, and 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most important part of machine learning, and no one cares in the slightest. So I'll move on. ## ML stuff now. Where the challenges begin We tried research papers. Super Resolution Generative Adversarial Network (SRGAN) [link](https://arxiv.org/abs/1609.04802). SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These were so bad. The models were too large to hold on our laptop, much less run in real time. The model's weights alone consisted of over 16GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar to these papers, so we did not continue down this path. We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high-dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px. However, we quickly hit a wall, as this pixel-based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the generator in a Generative Adversarial Network. However, our generator was too powerful, and the discriminator was lost. We then decided to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error. Now, model built. We then worked at 4 AM to build an application that can convert videos into high resolution. ## Accomplishments that I'm proud of Building it good. ## What I learned Balanced approaches and leveraging past learning. ## What's next for Crystallize A real-time stream-enhancement app.
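The pretrained-VGG trick is the interesting part. A common way to write that idea down is a perceptual loss, sketched below; the exact layer cut and loss wiring are illustrative, not our production setup:

```python
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Compare VGG feature maps instead of raw pixels; this penalizes the
    blurry edges that a plain pixel-wise MSE loss lets through."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False  # VGG is a fixed feature extractor
        self.vgg = vgg
        self.mse = nn.MSELoss()

    def forward(self, upscaled, target):
        return self.mse(self.vgg(upscaled), self.vgg(target))
```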
## Inspiration Color vision deficiency is the reduced ability to identify or differentiate colors. People with color vision deficiency face several difficulties performing day-to-day activities, from basic ones like reading traffic lights and signals or purchasing groceries or clothes, to advanced ones like accessing technology and education. It leads to undesirable outcomes, for instance color confusion, unintended purchases, incorrect interpretation of color-encoded information, and difficulty with learning and technology use. People with color vision deficiency can experience frustration and stress as a result. ## What it does Color Tag's goal is to alleviate this color confusion by labelling the colors in an image. When a user hovers over the image, Color Tag labels the color of the area of the image that the user is currently pointing to. This enables individuals experiencing color vision deficiency to make more informed decisions, access technology and education more easily, and interpret color-encoded information (maps, charts, graphs, etc.) more accurately. ## How we built it This application is primarily built in Python using the computer vision library OpenCV. Pandas, NumPy and Matplotlib are the other libraries used. ## Challenges we ran into I have been programming with Python, but this was my first time using OpenCV and Flask. The challenge was identifying and applying the specific APIs the project needed within the timespan of the hackathon event. ## Accomplishments that we're proud of The greatest accomplishment was identifying a problem statement and developing a fully functional solution within the timeframe of the hackathon. The application is highly responsive and easy to use, which were the primary goals. ## What we learned The positive impact this solution could have on a color vision deficient person's life. ## What's next for Color Tag As the next level, * I am planning to integrate this application with cameras, scanners and spectacles. * This feature could be enabled/disabled on an as-needed basis. * Fine-tune color classification depending on specific types of color vision deficiencies, including deutan, protan, tritan, and achromatopsia. * Develop ML and deep learning models for dynamic pictures.
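The hover-and-label mechanic reduces to a mouse callback plus a nearest-color lookup; here is a minimal sketch with a tiny hard-coded palette standing in for the full color list the app loads:

```python
import cv2
import numpy as np

# Tiny illustrative palette; the real app uses a much larger color list.
PALETTE = {"red": (0, 0, 255), "green": (0, 255, 0), "blue": (255, 0, 0),
           "white": (255, 255, 255), "black": (0, 0, 0)}

img = cv2.imread("photo.jpg")  # placeholder image path

def nearest_color(bgr):
    """Name of the palette color closest to the hovered pixel (BGR)."""
    return min(PALETTE,
               key=lambda n: np.linalg.norm(np.array(PALETTE[n], float) - bgr))

def on_mouse(event, x, y, flags, param):
    if event == cv2.EVENT_MOUSEMOVE:
        label = nearest_color(img[y, x].astype(float))
        shown = img.copy()
        cv2.putText(shown, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (255, 255, 255), 2)
        cv2.imshow("Color Tag", shown)

cv2.imshow("Color Tag", img)
cv2.setMouseCallback("Color Tag", on_mouse)
cv2.waitKey(0)
```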
winning
## Inspiration It’s insane how the majority of the U.S. still looks like **endless freeways and suburban sprawl.** The majority of Americans can’t get a cup of coffee or go to the grocery store without a car. What would America look like with cleaner air, walkable cities, green spaces, and effective routing from place to place that builds on infrastructure for active transport and micro mobility? This is the question we answer, and show, to urban planners, at an extremely granular street level. Compared to most of Europe and Asia (where there are public transportation options, human-scale streets, and dense neighborhoods) the United States is light-years away ... but urban planners don’t have the tools right now to easily assess a specific area’s walkability. **Here's why this is an urgent problem:** Current tools for urban planners don’t provide *location-specific information*—they only provide arbitrary, high-level data overviews over a 50-mile or so radius about population density. Even though consumers see new tools, like Google Maps’ area busyness bars, these are only bits and pieces of all the data an urban planner needs, like data on bike paths, bus stops, and the relationships between traffic and pedestrians. As a result, there are very few actionable improvements that urban planners can make. Moreover, because cities are physical spaces, planners cannot easily visualize what an improvement (e.g. adding a bike/bus lane, parks, open spaces) would look like in the existing context of that specific road. Many urban planners don’t have the resources or capability to fully immerse themselves in a new city, live like a resident, and understand the problems that residents face on a daily basis that prevent them from accessing public transport, active commutes, or even safe outdoor spaces. There’s also been a significant rise in micro-mobility—usage of e-bikes (e.g. CityBike rental services) and scooters (especially on college campuses) is growing. Research studies have shown that access to public transport, safe walking areas, and micro mobility all contribute to greater access to opportunity and successful mixed-income neighborhoods, which can raise entire generations out of poverty. To continue this movement and translate it into economic mobility, we have to ensure urban developers are **making space for car alternatives** in their city planning. This means bike lanes, bus stops, plazas, well-lit sidewalks, and green space in the city. These reasons are why our team created CityGO—a tool that helps urban planners understand their region’s walkability scores down to the **granular street intersection level** and **instantly visualize what a street would look like if it was actually walkable** using OpenAI's CLIP and DALL-E image generation tools (e.g. “What would the street in front of the Painted Ladies look like if there were 2 bike lanes installed?”). We are extremely intentional about the unseen effects of walkability on social structures, the environment, and public health, and we are ecstatic to see the results: 1. Car alternatives provide economic mobility as they give Americans alternatives to purchasing and maintaining cars that are cumbersome, unreliable, and extremely costly to maintain in dense urban areas. Having lower upfront costs also enables handicapped people, people that can’t drive, and extremely young/old people to have the same access to opportunity and continue living high-quality lives.
This disproportionately benefits people in poverty, as children with access to public transport or farther walking routes also gain access to better education, food sources, and can meet friends/share the resources of other neighborhoods which can have the **huge** impact of pulling communities out of poverty. Placing bicycle lanes and barriers that protect cyclists from side traffic will encourage people to utilize micro mobility and active transport options. This is not possible if urban planners don’t know where existing transport is or even recognize the outsized impact of increased bike lanes. Finally, it’s no surprise that transportation as a sector alone leads to 27% of carbon emissions (US EPA) and is a massive safety issue that all citizens face everyday. Our country’s dependence on cars has been leading to deeper issues that affect basic safety, climate change, and economic mobility. The faster that we take steps to mitigate this dependence, the more sustainable our lifestyles and Earth can be. ## What it does TLDR: 1) Map that pulls together data on car traffic and congestion, pedestrian foot traffic, and bike parking opportunities. Heat maps that represent the density of foot traffic and location-specific interactive markers. 2) Google Map Street View API enables urban planners to see and move through live imagery of their site. 3) OpenAI CLIP and DALL-E are used to incorporate an uploaded image (taken from StreetView) and descriptor text embeddings to accurately provide a **hyper location-specific augmented image**. The exact street venues that are unwalkable in a city are extremely difficult to pinpoint. There’s an insane amount of data that you have to consolidate to get a cohesive image of a city’s walkability state at every point—from car traffic congestion, pedestrian foot traffic, bike parking, and more. Because cohesive data collection is extremely important to produce a well-nuanced understanding of a place’s walkability, our team incorporated a mix of geoJSON data formats and vector tiles (specific to MapBox API). There was a significant amount of unexpected “data wrangling” that came from this project since multiple formats from various sources had to be integrated with existing mapping software—however, it was a great exposure to real issues data analysts and urban planners have when trying to work with data. There are three primary layers to our mapping software: traffic congestion, pedestrian traffic, and bicycle parking. In order to get the exact traffic congestion per street, avenue, and boulevard in San Francisco, we utilized a data layer in MapBox API. We specified all possible locations within SF and made requests for geoJSON data that is represented through each Marker. Green stands for low congestion, yellow stands for average congestion, and red stands for high congestion. This data layer was classified as a vector tile in MapBox API. Consolidating pedestrian foot traffic data was an interesting task to handle since this data is heavily locked in enterprise software tools. There are existing open source data sets that are posted by regional governments, but none of them are specific enough that they can produce 20+ or so heat maps of high foot traffic areas in a 15 mile radius. Thus, we utilized Best Time API to index for a diverse range of locations (e.g. restaurants, bars, activities, tourist spots, etc.) so our heat maps would not be biased towards a certain style of venue to capture information relevant to all audiences. 
We then cross-validated that data with Walk Score (the most trusted site for obtaining walkability scores for specific addresses). We ranked these areas and rendered heat maps on MapBox to showcase density. San Francisco’s government open-sources extremely useful data on all of the locations where bike parking has been installed in the past few years. We ensured that the data had been well maintained and preserved its quality over the past few years so we don’t over/underrepresent certain areas more than others. This was confirmed by recent updates in the past 2 months that deemed the data accurate, so we added the geographic data as a new layer in our app. Each bike parking spot installed by the SF government is represented by a little bike icon on the map! **The most valuable feature** is that the user can navigate to any location and prompt CityGO to produce a hyper-realistic augmented image resembling that location with added infrastructure improvements to make the area more walkable. Seeing the Street View of that location, which you can move around in and see real-time information, and being able to envision the end product is the final bridge in an urban developer’s planning process, ensuring that walkability is within our near future.
**Here's how we were able to get true image generation to work:** When the user prompts the website for a more walkable version of the current location, we grab an image of the Google Street View and implement our own architecture using OpenAI's contrastive language-image pre-training (CLIP) image and text encoders to encode both the image and a variety of potential descriptions of the traffic, use of public transport, and pedestrian walkways present within the image. The output embeddings for the image and the bodies of text are then compared using scaled cosine similarity to produce similarity scores. We then tag the image with the relevant descriptors, like classifiers; this is our way of making the system understand the semantic meaning behind the image and propose changes based on very specific street views (e.g. the Painted Ladies in San Francisco might have a high walkability score via the Walk Score API, but could still need larger sidewalks to further improve transport and the ability to travel in that region of SF). This is significantly more accurate than simply using DALL-E's default image generation parameters with an unspecific prompt based purely on the walkability score, because we incorporate both the uploaded image for context and the descriptor text embeddings to produce a hyper location-specific augmented image (a minimal sketch of this scoring step appears after the accomplishments section below). A descriptive prompt is constructed from this semantic image analysis and fed into DALL-E, a diffusion-based image generation model conditioned on textual descriptors. The resulting images are higher quality, as they preserve structural integrity to resemble the real world, and they effectively implement the changes needed to make specific locations optimal for travel. We used Tailwind CSS to style our components.
## Challenges we ran into
There were existing data bottlenecks, especially in getting accurate, granular pedestrian foot traffic data. The main challenge we ran into was integrating the necessary OpenAI models and API routes. Creating a fast, seamless pipeline that gives the user as much mobility and autonomy as possible required that we make use of not just the Walk Score API, but also map and geographical information from MapBox and Google Street View. Processing both image and textual information pushed us to explore using the pre-trained CLIP text and image encoders to create semantically rich embeddings that relate ideas and objects present within the image to textual descriptions.
## Accomplishments that we're proud of
We could have done plain image generation, but instead we were able to detect the car, people, and public transit concentrations in an image, assign them numerical scores, and then match those scores with a hyper-specific prompt that generated an image based on that information. This let us build our own metrics for a given scene; we wonder how this model could be used in the real world to speed up or completely automate the data collection pipeline for local governments.
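Below is a minimal sketch of the CLIP scoring step described above, using the Hugging Face CLIP wrapper; the checkpoint and descriptor strings are illustrative assumptions rather than our exact configuration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Walkability descriptors compared against the Street View frame.
descriptors = [
    "a street crowded with car traffic",
    "a street with a dedicated bike lane",
    "a wide sidewalk full of pedestrians",
    "a street served by public transit",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("street_view.jpg")  # frame grabbed from Street View
inputs = processor(text=descriptors, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled cosine similarities between the image
# embedding and each text embedding, the comparison described above.
scores = outputs.logits_per_image.softmax(dim=-1).squeeze()
for text, score in zip(descriptors, scores.tolist()):
    print(f"{score:.2f}  {text}")

# The top-scoring descriptors are then folded into a DALL-E prompt, e.g.
# "the same street, redesigned with a protected bike lane and wider sidewalks".
```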
## What we learned and what's next for CityGO
Utilizing multiple data formats and sources cohesively in the map and providing accurate suggestions for walkability improvement mattered to us because data is the backbone of this idea. Properly processing the right pieces of data at the right step in the pipeline, and presenting the proper results to the user, was of utmost importance. We definitely learned a lot about keeping data lightweight, transferring it easily between third-party software, and finding relationships between different types of data to synthesize a proper output. We also learned quite a bit by implementing OpenAI's CLIP image and text encoders for semantic tagging of images with specific textual descriptions of car, public transit, and pedestrian/crosswalk concentrations. It was important for us to plan a system architecture that effectively utilized advanced technologies in a seamless end-to-end pipeline. We learned how information abstraction (i.e. converting between images and text and finding relationships between them via embeddings) can play to our advantage, and how to utilize different AI models for intermediate processing.
In the future, we plan to integrate a better visualization tool to produce more realistic renders, and to introduce an inpainting feature so that users have the freedom to select a specific view in Street View, receive recommendations, and implement very specific changes incrementally. We hope this will allow urban planners to implement design changes to urban spaces more effectively by receiving an immediate visual and seeing how a specific change integrates seamlessly with the rest of the environment. Additionally, we hope to integrate neural radiance fields (NeRF) with the produced "optimal" scenes to give the user the freedom to navigate through the environment within the NeRF and visualize the change (e.g. adding a bike lane, expanding a sidewalk, or shifting the build site for a building). A potential virtual reality platform would provide an immersive experience for urban planners to receive AI-powered layout recommendations and instantly visualize them. Our ultimate goal is to integrate an asset library and use NeRF-based 3D asset generation to let planners generate realistic and interactive 3D renders of locations with AI-assisted changes that improve walkability: one end-to-end pipeline for visualizing an area, identifying potential changes, visualizing those changes using image generation and 3D scene editing/construction, and quickly iterating through design cycles to reach an optimal solution for a specific location's walkability as efficiently as possible!
# DriveWise: Building a Safer Future in Route Planning Motor vehicle crashes are the leading cause of death among teens, with over a third of teen fatalities resulting from traffic accidents. This represents one of the most pressing public safety issues today. While many route-planning algorithms exist, most prioritize speed over safety, often neglecting the inherent risks associated with certain routes. We set out to create a route-planning app that leverages past accident data to help users navigate safer routes. ## Inspiration The inexperience of young drivers contributes to the sharp rise in accidents and deaths as can be seen in the figure below. ![Injuries and Deaths in Motor Vehicle Crashes](https://raw.githubusercontent.com/pranavponnusamy/Drivewise/refs/heads/main/AccidentsByAge.webp) This issue is further intensified by challenging driving conditions, road hazards, and the lack of real-time risk assessment tools. With limited access to information about accident-prone areas and little experience on the road, new drivers often unknowingly enter high-risk zones—something traditional route planners like Waze or Google Maps fail to address. However, new drivers are often willing to sacrifice speed for safer, less-traveled routes. Addressing this gap requires providing insights that promote safer driving choices. ## What It Does We developed **DriveWise**, a route-planning app that empowers users to make informed decisions about the safest routes. The app analyzes 22 years of historical accident data and utilizes a modified A\* heuristic for personalized planning. Based on this data, it suggests alternative routes that are statistically safer, tailoring recommendations to the driver’s skill level. By factoring in variables such as driver skill, accident density, and turn complexity, we aim to create a comprehensive tool that prioritizes road safety above all else. ### How It Works Our route-planning algorithm is novel in its incorporation of historical accident data directly into the routing process. Traditional algorithms like those used by Google Maps or Waze prioritize the shortest or fastest routes, often overlooking safety considerations. **DriveWise** integrates safety metrics into the edge weights of the routing graph, allowing the A\* algorithm to favor routes with lower accident risk. **Key components of our algorithm include:** * **Accident Density Mapping**: We map over 3.1 million historical accident data points to the road network using spatial queries. Each road segment is assigned an accident count based on nearby accidents. * **Turn Penalties**: Sharp turns are more challenging for new drivers and have been shown to contribute to unsafe routes. We calculate turn angles between road segments and apply penalties for turns exceeding a certain threshold. * **Skillfulness Metric**: We introduce a driver skill level parameter that adjusts the influence of accident risk and turn penalties on route selection. New drivers are guided through safer, simpler routes, while experienced drivers receive more direct paths. * **Risk-Aware Heuristic**: Unlike traditional A\* implementations that use distance-based heuristics, we modify the heuristic to account for accident density, further steering the route away from high-risk areas. By integrating these elements, **DriveWise** offers personalized route recommendations that adapt as the driver's skill level increases, ultimately aiming to reduce the likelihood of accidents for new drivers. 
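To make the weighting idea concrete, here is a toy sketch of accident- and turn-aware edge costs fed to A\* via networkx. The graph, constants, and skill value are invented for illustration; the production version builds its graph from OSMnx and real crash records.

```python
import networkx as nx

# Toy road graph; DriveWise builds this from OSMnx and assigns
# accident counts via spatial joins against 3.1M crash records.
G = nx.Graph()
edges = [
    # (u, v, length_m, accident_count, sharp_turn_penalty)
    ("A", "B", 500, 12, 0.0),
    ("B", "C", 400, 1, 1.0),
    ("A", "C", 1200, 0, 0.0),  # longer but accident-free
]

SKILL = 0.3  # 0 = brand-new driver, 1 = experienced (illustrative scale)

for u, v, length, accidents, turn in edges:
    # Less skilled drivers weight accident risk and turn complexity more.
    risk = 1.0 - SKILL
    cost = length * (1.0 + risk * accidents) + 200 * risk * turn
    G.add_edge(u, v, weight=cost)

def heuristic(n, goal):
    # Stubbed to 0 here (reduces to Dijkstra); DriveWise's version also
    # folds local accident density into this estimate.
    return 0

print(nx.astar_path(G, "A", "C", heuristic=heuristic, weight="weight"))
# -> ['A', 'C']: the longer, accident-free road wins for a low-skill driver
```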
## Accomplishments We're Proud Of
We are proud of developing an algorithm that not only works effectively but also has the potential to make a real difference in road safety. Creating a route-planning tool that factors in historical accident data is, to our knowledge, a novel approach in this domain. We successfully combined complex data analysis with an intuitive user interface, resulting in an app that is both powerful and user-friendly. We are also kinda proud of our website. Learn more about us at [idontwannadie.lol](https://idontwannadie.lol/)
## Challenges We Faced
This was one of our first hackathons, and we faced several challenges. Having never deployed anything before, we spent a significant amount of time learning, debugging, and fixing deployment issues. Designing the algorithm to analyze accident patterns while keeping the route planning relatively simple added considerable complexity. We had to balance predictive analytics with real-world usability, ensuring that the app remained intuitive while delivering sophisticated results. Another challenge was creating a user interface that encourages engagement without overwhelming the driver. We wanted users to trust the app's recommendations without feeling burdened by excessive information. Striking the right balance between simplicity and effectiveness through gamified metrics proved to be an elegant solution.
## What We Learned
We learned a great deal about integrating large datasets into real-time applications, the complexities of route optimization algorithms, and the importance of user-centric design. Working with the OpenStreetMap and OSMnx libraries required a deep dive into geospatial analysis, which was both challenging and rewarding. We also discovered the joys and pains of deploying an application, from server configurations to domain name setups.
## Future Plans
In the future, we see the potential for **DriveWise** to go beyond individual drivers and benefit broader communities. Urban planners, law enforcement agencies, and policymakers could use aggregated data to identify high-risk areas and make informed decisions about where to invest in road safety improvements. By expanding our dataset and refining our algorithms, we aim to make **DriveWise** functional in more regions and for a wider audience.
## Links
* **Paper**: [Mathematical Background](https://drive.google.com/drive/folders/1Q9MRjBWQtXKwtlzObdAxtfBpXgLR7yfQ?usp=sharing)
* **GitHub**: [DriveWise Repository](https://github.com/pranavponnusamy/Drivewise)
* **Website**: [idontwannadie.lol](https://idontwannadie.lol/)
* **Video Demo**: [DriveWise Demo](https://www.veed.io/view/81d727bc-ed6b-4bba-95c1-97ed48b1738d?panel=share)
## Inspiration
Just Dance was probably my favorite game as a kid. Not only because I got to listen to Psy a million times a day, but because I loved to run and hop and dance around. At this hackathon, it inspired us to try to build something similar for everyone: a pose-guiding app that helps people get active.
## What it does
Yoga Yogi is the everyman's yoga guide. Using pose recognition, Yoga Yogi recognizes your yoga poses and gives you individualized feedback on how to improve! Not only does it feature a login; you can also design your very own yoga routine tailored to your pose and duration goals. Once you are ready to start, Yoga Yogi will walk you through a guided yoga session, where you will receive live feedback and helpful tips.
## How we built it
Our project connects multiple moving parts, including a React frontend, a pose recognition model, and a Flask backend. Flask was central to our project, as it connected our Voiceflow and AI backend to our frontend React elements. Voiceflow was fun to use, as it made it very easy to create user help agents as well as AI responses specific to programmed API calls. Collecting our data and training our model took several steps. To collect our data, we used MediaPipe and cv2 to identify the major body landmarks of people in yoga poses and transformed these into graphs (a minimal sketch of this step follows below). We then used this dataset to train a graph neural network, which allowed us to classify pose images into distinct classes. Lastly, we used Voiceflow to generate customized feedback based on real-time video.
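Here is a minimal sketch of that landmark-extraction step with MediaPipe; the image file name is an invented example, and the downstream graph construction and GNN classifier are omitted.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def pose_landmarks(image_path):
    """Extract the 33 MediaPipe body landmarks from a yoga-pose image;
    these (x, y) nodes become the graph fed to the pose classifier."""
    image = cv2.imread(image_path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.visibility)
            for lm in results.pose_landmarks.landmark]

nodes = pose_landmarks("warrior_two.jpg")  # hypothetical sample image
print(len(nodes) if nodes else "no person detected")  # 33 landmarks
```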
## Challenges we ran into
Since this was our first time working together collaboratively on GitHub, we struggled quite a lot with figuring out what merge meant, what branches were, and whatever rebase meant. After we got the hang of that, the next obstacle was running the frontend and backend together smoothly, not because of technical difficulties, but because of our difficulties communicating with each other. All of us had unique skill sets, and we often struggled to communicate what our programs needed and outputted. This slowed us down considerably, but despite it, we were able to make a project that was greater than our sum.
## Accomplishments that we're proud of
Sean: As a returning hacker, I am very happy that this project turned out quite functional, and I was impressed with everything we were able to fit into these few days with the right planning, skills, and even a bit of luck. I'm also glad that I played a crucial part in the project and was able to help the team in countless ways through my experience. I feel that we worked very well as a team, and I really appreciate HTN for giving me the chance to learn and develop another amazing project!
Chris: I'm really happy with how simple and streamlined our UI is! It feels like a real app you would find online. This was my first time working in a large group at a sprint pace, and although at times I felt really tired, I really enjoyed my experience and would come again.
Steven: I am really happy with the learning progress I made during the 36 hours. I came in with almost no experience in web design or in using React and other frameworks. I was able to learn so much on the spot, and reflecting on the past two days of work, I feel accomplished and thankful for this opportunity. I also want to thank my team members for helping me with so much and teaching me new pieces of coding knowledge that I would not have known otherwise.
Nirvan: I am very proud of the team effort and collaboration that we had the entire time. We worked really hard and supported each other whenever we hit an error, particularly those between the React frontend and Flask backend. I am especially proud of overcoming the issue of having multiple setIntervals in different components by passing props to a parent component and having one common useEffect run all the setIntervals.
## What we learned
A lot of frontend and APIs! While the more technical parts are usually documented thoroughly, making a clean and satisfying frontend that we liked was much more creatively difficult than we expected. We each had our own thoughts and ideas about what we liked and what we wanted to include, but making a project we were all proud of meant compromising on things we didn't want to give up.
## What's next for Yoga Yogi
We want to put Yoga Yogi on phones! This would let Yoga Yogi guide you through his practices, anytime and anywhere. We also want to train Yoga Yogi on more poses, allowing him to give you step-by-step feedback rather than broader statements.
winning
## Inspiration
Waiting at Berkeley crosswalks.
## What it does
Plays "wait" whenever the pedestrian crossing button is clicked.
## How I built it
HTML, CSS, JavaScript.
## Challenges I ran into
Configuring the DNS, background music.
## Accomplishments that I'm proud of
It works really well!
## What I learned
DOM, OOP in JavaScript, setting up domain names.
## What's next for Berkeley Wait
More images of Berkeley and its pedestrian crossings, different background music.
## Inspiration
- After coming across a number of news articles in which people shared their assault stories years after the event, we had the idea to create a safe space where anonymous (and named) stories could be shared to help raise awareness.
## What it does
- AREA is essentially a location flagger for any place. People can post their stories, with their name or without, and attach a location that will then show up on the map. When someone is in a new place, they can look on AREA to see where the safe spots are.
## How we built it
- Our web app is made up of HTML, CSS, JavaScript, and Bootstrap. HTML was used to create all the divs that hold the different text boxes and images, and CSS was used to style everything. We used JavaScript for the parallax scrolling feature, the button animations, and the Google Maps API embedded in the site. Bootstrap preserved the contents of the page in an orderly way when the website was opened on a phone.
## Challenges we ran into
- Creating buttons that linked to different parts of the same webpage was a challenge, since we had first-time HTML coders in the group. Implementing Bootstrap properly was also a challenge, since it was a new concept for us. Another challenge was facing the hard truth that our project could be seen as an attack on companies; with a disclaimer on the site and the message of just wanting to help, we hope that corporations aren't negatively affected.
## Accomplishments that we're proud of
- We're proud of being able to create and complete a project in the few hours we had, since this is our first time working as a group and we are surrounded by college kids with a lot more experience than us.
## What we learned
- We learned that overambitiousness can lead to disappointment, but having high hopes is the best source of motivation. We learned how good a team we are, especially when we were well into the night on almost no sleep and kept encouraging each other to keep working and make the most of the opportunity we were given.
## What's next for Area
- Even though we created a web app, it is nowhere close to finished or perfect. Improving it with Bootstrap and adding more features to make our website even better is the first course of action. Our second goal is to create a mobile app, for both iOS and Android, where the stories people post can be seen on a map, along with the features usually found in a map app, such as directions, reviews, etc. Furthering our ambitious idea, our third goal is to integrate location flagging into commonly used social media apps so that we reach the widest audience and help raise awareness for an issue that is unfortunately so prevalent in our society.
## Inspiration
Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair-accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair-accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT's Accessible Routes interactive map to see where the wheelchair-friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations. We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax in the coordinate data, and stored the values in separate text files (a rough sketch of this filtering step appears later in this write-up). We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio. After confirming that the method would work, we uploaded these files into the raw folder in Android Studio and wrote Java code that iterates through both the latitude and longitude lists simultaneously and plots the points onto the map. The next step was learning how to change the color and image associated with each marker, which was very time-intensive but led to our custom logo for each of the markers. Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map. Finally, we combined both of the apps atop the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app down to basic, reasonable features was something we had to tackle in the beginning, but the scope kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add. A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe Illustrator, and programming with the Google Maps API, it was most difficult to integrate all our work together. We are proud of how effectively we were able to split up our team's roles based on everyone's unique skills. In this way, we were able to be maximally productive and play to our strengths. We were also able to add Boston University's accessible entrances in addition to MIT's, which proved that we could adapt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much the Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
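Here is a rough sketch of the coordinate-filtering step mentioned under How we built it; the input file name and the regex are assumptions about how such a script might look, since the original script isn't reproduced here.

```python
import re

# Hypothetical input: the JavaScript scraped from MIT's Accessible
# Routes map, containing coordinate pairs like [42.3592, -71.0935].
with open("accessible_routes.js") as f:
    source = f.read()

# Pull every "lat, lng" float pair out of the surrounding JS syntax.
pairs = re.findall(r"(-?\d{1,3}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)", source)

# Write latitudes and longitudes to separate files, one value per line,
# ready to drop into Android Studio's res/raw folder.
with open("latitudes.txt", "w") as lat_f, open("longitudes.txt", "w") as lng_f:
    for lat, lng in pairs:
        lat_f.write(lat + "\n")
        lng_f.write(lng + "\n")

print(f"extracted {len(pairs)} coordinate pairs")
```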
## What's next for HandyMap If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world.
losing
Starting a conversation can be tricky. Let us help. Introducing RizzBot, the innovative conversation-starter bot that helps you break the ice and meet new people. Whether you are at a party, a networking event, or just looking to make new friends, RizzBot makes it easy to initiate conversations and connect with others. Simply input a person's hobbies or characteristics and RizzBot will generate a personalized conversation opener tailored to their interests. Say goodbye to awkward silences and hello to meaningful connections with RizzBot.
RizzBot was created using the OpenAI API, which is also used by ChatGPT. The model understands words by converting them into tokens, the chunks of characters that the AI processes as input and produces as output. Using fine-tuning, we trained the AI on a custom dataset designed to enhance outputs and create optimal conversation starters from a variety of variables such as interests, personalities, and passions. We used the Davinci model to train the AI, as it is generally the most capable and can produce reasonable outputs from a short prompt.
Our frontend is a website covering the product and its inspiration, with a link to RizzBot itself, where a user inputs a hobby or characteristic of the person they would like to talk to, for example: "A person who is adventurous and athletic". Clicking "Submit" makes the AI generate the personalized conversation starter. During the hackathon, we were able to develop the backend, train the bot, and make the frontend; however, we were unable to connect the two through the interface that takes input and returns output.
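For illustration, here is a minimal sketch of the fine-tuning data format and a completion call using the legacy (pre-1.0) openai Python library; the example pairs, model id, and key are placeholders, not our actual dataset.

```python
import json
import openai  # legacy (pre-1.0) openai-python interface

openai.api_key = "YOUR_API_KEY"  # placeholder

# Prompt/completion JSONL was the format the legacy fine-tuning
# endpoint expected; these example pairs are invented.
examples = [
    {"prompt": "A person who loves hiking and photography ->",
     "completion": " Golden hour on a summit or in the darkroom: which shot are you prouder of?"},
    {"prompt": "A person who is adventurous and athletic ->",
     "completion": " What's the most spontaneous adventure you've ever said yes to?"},
]
with open("rizz_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# After fine-tuning (e.g. `openai api fine_tunes.create -t rizz_dataset.jsonl
# -m davinci`), the tuned model is queried like any completion model.
response = openai.Completion.create(
    model="davinci",  # placeholder; a real run would use the ft model id
    prompt="A person who is adventurous and athletic ->",
    max_tokens=40,
    temperature=0.8,
)
print(response.choices[0].text.strip())
```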
## Inspiration
What is our first thought when we hear "healthcare"? Is it an illness? Cancer? Disease? That is where we lose our focus on an exponentially increasing crisis, especially in this post-COVID era: MENTAL HEALTH! Studying at university, I have seen my friends suffer from depression and anxiety, looking for someone to hear them out for once. Statistically, an estimated 792 million individuals worldwide suffer from mental health disorders and concerns. That's roughly one out of every ten people on the planet. In India, where I am from, the problem is even worse: close to 14 percent of India's population requires active mental health interventions, and every year about 200,000 Indians take their own lives. The statistics are even higher once you include attempted suicides. The thought of being able to save even a fraction of this number is powerful enough to get me working this hard for it.
## What it does
Noor TALKs, because that's all it takes. She provides a comfortable environment where users can share their thoughts in complete privacy and let those feelings out once and for all.
## How we built it
I built this app in three steps:
1. Converting all the conversational intents into a machine learning model (PyTorch).
2. Building a framework where users can provide input and the model outputs the response that makes the most sense. Here, the confidence threshold is set to 90% (a toy sketch of this thresholding step appears at the end of this write-up).
3. Building an elegant GUI.
## Challenges we ran into
Building chatbots from scratch is extremely difficult. However, I divided the 36 hours into stages so I could build a decent hack out of everything I had. Enhancing the bot's intelligence was challenging too: in the initial stages I experimented with fewer intents, but as more intents were added, keeping track of them became difficult.
## Accomplishments that we're proud of
First, I built my own chatbot for the first time!!! YAYYYYY! This is a really special project because it deals with such a major issue in the world right now. Also, this was my first time making an entire hackathon project using only Python and its frameworks. An extremely new experience. I am proud of myself for pushing through the frustrating times when I felt like giving up.
## What we learned
Everything I made during this hackathon was something I had never done before. Legit, EVERYTHING! Whether it was NLP, PyTorch, or even Tkinter for the graphical user interface (GUI)! Honestly, it may not be my best work ever, but it is definitely the project that taught me the most!
## What's next for Noor
Switch from Tkinter to deploying my script as an application or web app. The only reason I went with Tkinter was to try learning something new. I'll be using Flutter for app development and TensorFlow.js for a web-based application. Discord: keivalya#8856
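As a postscript, here is a toy sketch of the 90% confidence threshold from step 2 of How we built it; the intent names and replies are invented for illustration.

```python
import torch

INTENTS = ["greeting", "anxiety", "loneliness", "gratitude"]
THRESHOLD = 0.90  # respond only when the model is at least 90% confident

def pick_response(intent: str) -> str:
    # Placeholder; Noor draws from a bank of responses per intent.
    return f"(a supportive reply for the '{intent}' intent)"

def respond(logits: torch.Tensor) -> str:
    """Map the intent model's raw logits to a reply, falling back to a
    gentle prompt for more detail when confidence is below threshold."""
    probs = torch.softmax(logits, dim=-1)
    confidence, idx = torch.max(probs, dim=-1)
    if confidence.item() < THRESHOLD:
        return "I'm not sure I understood. Could you tell me a bit more?"
    return pick_response(INTENTS[idx.item()])

print(respond(torch.tensor([4.0, 0.1, 0.2, 0.1])))  # confident "greeting"
```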
## Inspiration
We are CS majors with no game who struggle to think of pickup lines for dating apps.
## What it does
Rizzer is a Chrome extension that generates 3 different pickup lines based on a match's Tinder bio. To use Rizzer, open a DM with a match on Chrome, click the Rizzer Chrome extension, and then press "Generate". 3 AI-generated pickup lines will then be shown to the user. The user can continue to generate more pickup lines until they find the perfect one.
## How we built it
We used OpenAI's GPT-3 API to generate the pickup lines. We used JavaScript and HTML to build the Chrome extension and Tinder web scraper.
## Challenges we ran into
We ran into many JavaScript debugging problems because this was our first time creating a Chrome extension. We also had trouble differentiating the different parts of Tinder bios when web scraping, since they are not labelled with IDs.
## Accomplishments that we're proud of
We are proud that we created a working Chrome extension that recommends 3 pickup lines for Tinder. We are very happy with how it turned out and with the quality of the pickup lines.
## What we learned
We learned how to create a Chrome extension and build a web scraper. JavaScript was also very new to us all, so we greatly improved our JavaScript skills.
## What's next for Rizzer
The next step for Rizzer is to publish the Chrome extension.
## Further progress
We have implemented the ability to continue an ongoing conversation in addition to creating pickup lines. We are currently working on an app using Flutter that can process text from screenshots. After talking to potential customers, we realized that more people would use the product as an app rather than as a Chrome extension.
losing
## Inspiration The inspiration behind HumanFT comes from the desire to revolutionize the way people receive feedback and approach personal development. The project aims to harness the power of advanced technology to provide individuals, educational institutions, and organizations with a comprehensive feedback system that can drive positive change and improvement in various aspects of life. ## What it does HumanFT serves as a multifaceted platform that collects, analyzes, and delivers feedback to users across different domains. It offers a central hub for personal development, empowers educators and students to enhance the learning experience, and enables organizations to optimize workplace performance. By leveraging data-driven insights and gamification, HumanFT engages users in a meaningful journey of self-improvement. ## How we built it HumanFT is built upon a foundation of cutting-edge technology, including machine learning and AI algorithms. It combines a user-friendly interface with robust data analysis to ensure efficient feedback delivery. Privacy and security are fundamental aspects of its construction, ensuring that user data remains confidential and protected. ## Challenges we ran into Developing HumanFT presented several challenges, including the integration of gamification elements, the development of secure data handling processes, and the creation of a dynamic and engaging user experience. Overcoming these obstacles required a dedicated team effort and continuous innovation. ## Accomplishments that we're proud of One of our proudest accomplishments with HumanFT is the creation of a thriving community of individuals who are passionate about personal development and feedback. We've also successfully integrated gamification elements to keep users engaged and motivated on their journey towards self-improvement. ## What we learned Throughout the development of HumanFT, we've learned the significance of personalized feedback in driving positive change. We've also gained valuable insights into the power of data-driven recommendations and the importance of maintaining user privacy and security. ## What's next for HumanFT The future of HumanFT holds exciting possibilities. We aim to expand its reach and impact, incorporating more domains, refining the user experience, and continuously improving the AI algorithms that drive feedback and recommendations. Additionally, we plan to further strengthen the HumanFT community, fostering connections and support among like-minded individuals on their journey of self-improvement.
## Inspiration🧠
Even with today's cutting-edge technology and leading scientific research helping us develop, advance, and improve in everyday life, those with rare genetic diseases are still left behind. Living with a life-threatening condition with little to no cure, considering that "less than 5% of more than 7,000 rare diseases believed to affect humans currently have an effective treatment", is already frustrating; but when doctors aren't knowledgeable or experienced enough to treat such cases, or when patients have only themselves to rely on to search for experimental drugs, the everyday struggle becomes a nightmare. What's even more tragic is that despite there being "300 million people worldwide [suffering a rare disease], [where] approximately 4% of the total world population is affected by [one] at any given time", people still have to go through the exhausting trial-and-error process of finding a cure or treatment, EVEN when, in several cases, they share exactly the same disease! Shockingly, there isn't ANY shared collection of data or analysis on which medications and treatments work for different people, and which ones help or harm them!
**Citation**
Kaufmann, P., Pariser, A.R. & Austin, C. From scientific discovery to treatments for rare diseases – the view from the National Center for Advancing Translational Sciences – Office of Rare Diseases Research. Orphanet J Rare Dis 13, 196 (2018). <https://doi.org/10.1186/s13023-018-0936-x>
Nguengang Wakap S, Lambert DM, Olry A, et al. Estimating cumulative point prevalence of rare diseases: analysis of the Orphanet database [published online September 16, 2019]. Eur J Hum Genet. doi: 10.1038/s41431-019-0508-0.
## What it does 💻
For our project, we have tried our best to match Varient's goal of helping develop a diagnosis assistance tool for the rare disease population (with genetic mutations), so that it becomes a crucial gadget for finding appropriate drug treatments, providing accurate and up-to-date information, and facilitating support in decision making. Our My Heroes gene assistant web app's specific features include:
* Ability to select images that indicate a relevant gene in the report.
* Generating and displaying relevant keywords, such as names of related diseases and mutated genes.
* Providing insights on how the related disease can be treated.
* Supporting patients in understanding key information from their reports.
The user interface includes: user registration/login (for authorization and account information), a Dropbox/file attachment (for images), a catalog of uploads (for modifying/deleting items), a display of the labeled/annotated report, and a summary page.
## How we built it 🔧
1. Used Python for the backend and machine learning components of the app.
2. Implemented pytesseract OCR to extract text/keywords (mutation names) from the images supplied by the report, and labeled them with OpenCV (image labeling).
3. Used spacy's en\_ner\_bionlp13cg\_md (a pretrained NLP model for medical report text processing) to extract relevant keywords from the text (a minimal sketch of steps 2 and 3 follows below).
4. Used the streamlit library to deploy the machine learning web app.
5. Worked with React.js for the frontend (login, signup, the navbar, settings), Firebase for user authentication with Google authentication integration, and Firestore (a NoSQL database) for data and storage.
6. Utilized Google Docs/Discord for brainstorming, and Trello for distributing tasks and keeping track of time.
7. Utilized Figma for designing and prototyping.
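Here is a minimal sketch of the OCR and biomedical NER steps (2 and 3) above; the image path and the example output are illustrative.

```python
import pytesseract
import spacy
from PIL import Image

# en_ner_bionlp13cg_md is scispaCy's biomedical NER model, installed
# per the instructions at https://allenai.github.io/scispacy/
nlp = spacy.load("en_ner_bionlp13cg_md")

def extract_keywords(report_image_path):
    """OCR a genetic-report image, then pull gene/disease entities
    out of the recognized text with biomedical NER."""
    text = pytesseract.image_to_string(Image.open(report_image_path))
    doc = nlp(text)
    return [(ent.text, ent.label_) for ent in doc.ents]

# e.g. [('BRCA1', 'GENE_OR_GENE_PRODUCT'), ('carcinoma', 'CANCER'), ...]
print(extract_keywords("genetic_report.png"))  # hypothetical sample file
```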
## Challenges we ran into 🔥
1. Familiarizing ourselves with Figma to build a complex but easy-to-use medical record health app.
2. We had trouble integrating the NLP model with the frontend and ended up using streamlit to make the backend functional.
3. Even though we knew the machine learning part would take a significant chunk of our time, we didn't realize just how much it actually would. It required all hands on deck, which kept us from other tasks.
4. With one of our members being a novice programmer who was also committed to another large-scale event taking place at the same time, we were effectively short one team member.
5. Another team member's lack of significant experience in machine learning and related technologies resulted in a lack of cohesiveness throughout the process.
6. None of us was familiar with Flask, and only one of us was familiar with REST APIs. We also had several issues integrating with the frontend (connecting APIs, sending POST requests, getting data back), and had to find an alternative solution: using Streamlit to display images, modify them with Python functions, and display the new image and extracted keywords. We also had issues deploying the streamlit app, as we kept getting errors.
## Accomplishments that we're proud of 💪
We are proud of being able to collaborate and work together despite our overall lack of experience in machine learning and the differences in prior experience among teammates. We are also proud to have built a functional ML app and made it usable, given that we spent most of our time getting the NLP to work.
**How to run the app**
* Pytesseract for Windows: via <https://github.com/UB-Mannheim/tesseract/wiki>
* Pytesseract for Mac:
* Download and install the spacy model: download en\_ner\_bionlp13cg\_md via <https://allenai.github.io/scispacy/>
* `pip install spacy`
* `pip install`
## What we learned ✍️
1. Restoring patients' health by streamlining the process and helping doctors provide the best treatment for such specific and rare diseases. (Our app could be used as an assistant (perhaps an AI assistant?) or a personal record tracker.)
2. It facilitates universal information sharing and keeps all the data in one place (some people might get private treatments which don't require the use of a health card, so they can input their info into this central platform for an easier, quicker, and more efficient process).
## What's next for My Heroes ✨
Integrating the machine learning app with the frontend so that the app can have actual users and a smooth, simple UI design. Improving the accessibility features of the app. We would love to see our app in the hands of our patiently waiting users as soon as possible! We hope that, with its improvements, it provides some peace of mind and makes life easier for them.
## Inspiration
The inspiration for this project came from our personal situations at home, that is, the fact that we are now always at home. We came to realize that since we spend all day at home as students in remote learning, it has become more difficult to keep our regular routines, and as such we find ourselves taking care of our physical and mental wellbeing a lot less. In addition, we have seen our productivity plummet outside of a dedicated learning environment, making online school all the more difficult. As such, we decided to make an application that remote students or workers like us (or anybody else for that matter) could use to help them stay on top of their personal wellbeing and productivity.
## What it does
Our application promotes physical and mental wellbeing, as well as productivity, by giving the user new challenges every day (tailored to the user) to encourage them to take care of what needs to be cared for. Every day, the user receives several challenges in each category (physical wellbeing, mental wellbeing, and productivity), and each challenge carries a certain number of points that are rewarded to the user upon completion. These points fill each category's point bar (much like an experience bar in many video games), which in turn levels the user up (a toy sketch of this mechanic appears at the end of this write-up). This gamification of daily tasks is how we hope to keep the user engaged, and just in case our users forget about our daily challenges, we send gentle (and perhaps lightly annoying) reminders in the form of notifications to keep the user focused and engaged. At the end of the day, we serve to help *them*.
## How we built it
We built this application on the Android platform using Android Studio, since it was a technology none of us had really used before, making the experience an incredibly interesting challenge. Android Studio uses Java or Kotlin, and we chose Java. We also used Google's Firebase for our authentication and database (all in the cloud).
## Challenges we ran into
Many challenges arose, mostly stemming from our lack of experience with the platform. Some of the biggest included database integration (Firebase's Realtime Database turned out to be not as beginner-friendly as we had hoped) and designing the actual application (Android Studio's built-in XML visualizer is great, but not nearly as intuitive as web programming, for instance).
## Accomplishments that we're proud of
Our proudest accomplishment would have to be the integration between authentication and the database through Firebase. While Firebase has a lot of built-in features that make Android development easier, connecting them together and ensuring user data continuously gets stored and updated session after session was extremely gratifying once we finally got it to work.
## What we learned
Aside from the obvious learning about Java and Android development, we learned quite a bit about cloud authentication and databases, and, perhaps most importantly, the importance of quality version control. Git is a national treasure, and learning how to use it efficiently may have been the greatest lesson of this entire experience for us.
## What's next for Oliver - Your Personal Wellbeing Coach
Since we are a group of friends who enjoy coding in our spare time, the plan is to polish Ollie up and put him on the Google Play Store.
We will, of course, continue using it on our own, but we want to make sure our friends and everyone else in the world who could benefit can have access as well :)
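As a closing illustration, here is a toy sketch of the XP-bar mechanic described under What it does; the bar size and challenge rewards are invented values, not the app's actual tuning.

```python
# Toy sketch of Oliver's XP-bar mechanic: challenge points fill a
# category's bar, and a full bar levels the category up (values assumed).
BAR_SIZE = 100  # points needed per level; illustrative

class CategoryProgress:
    def __init__(self):
        self.level = 1
        self.points = 0

    def complete_challenge(self, reward: int):
        """Add a challenge's reward and roll over into new levels,
        carrying any surplus points into the next bar."""
        self.points += reward
        while self.points >= BAR_SIZE:
            self.points -= BAR_SIZE
            self.level += 1

physical = CategoryProgress()
physical.complete_challenge(40)   # e.g. go for a 20-minute walk
physical.complete_challenge(75)   # e.g. stretch break every hour
print(physical.level, physical.points)  # -> 2 15
```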
partial