## Inspiration
While talking with my sister about common problems she faces in the operating room, she mentioned she has trouble approximating how much blood a patient has lost during surgery. Blood can be suctioned using a suction tool, in which case it is easy to measure volume by observing liquid in the collecting container. However, when blood is absorbed using surgical gauze sponges, it is much harder to estimate the volume of blood lost. This information is important for doctors, nurses, and anesthesiologists to know. It helps them prepare for post-operative preventative measures, and indicates if an intra-operative blood transfusion is required to prevent complications. Currently, the standard is a crude estimate from weight/volume calculations done by hand, which is time-consuming during a life-or-death operation.
## What it does
Our code receives input in the form of pictures of surgical sponges, and outputs to the screen the number of sponges processed and an estimation of the total blood volume lost.
## How I built it
We used Java to create three components: a GUI, a "Pic" class and a "Sponge" class. The Pic class analyzes an image file pixel by pixel and computes the proportion of red pixels among red and white pixels to gauge how soaked the sponge is, also taking the saturation level of the red colour into account. The proportion of gauze saturated is then passed to a Sponge object, which multiplies the maximum absorption capacity of a fixed sponge size by this proportion to estimate the volume absorbed. The values used were obtained from a study in a journal article (<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5003499/>).
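For illustration, here is a minimal sketch of that pixel-counting approach, written in Python rather than the team's Java; the RGB thresholds and the `SPONGE_CAPACITY_ML` constant are assumptions for the example, not values from the project or the cited study:

```python
import numpy as np
from PIL import Image

# Assumed capacity of one sponge (mL); the real value would come
# from the journal study cited above.
SPONGE_CAPACITY_ML = 100.0

def estimate_blood_volume(image_path: str) -> float:
    """Estimate absorbed blood volume from a photo of a single gauze sponge."""
    pixels = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    r, g, b = pixels[..., 0], pixels[..., 1], pixels[..., 2]

    # "Red" pixels: red channel clearly dominates (soaked gauze).
    red_mask = (r > 100) & (r > 1.4 * g) & (r > 1.4 * b)
    # "White" pixels: all channels high and close together (unsoaked gauze).
    white_mask = (r > 180) & (g > 180) & (b > 180) & ~red_mask

    gauze_pixels = red_mask.sum() + white_mask.sum()
    if gauze_pixels == 0:
        return 0.0

    # Weight each red pixel by how saturated the red is (0..1),
    # mirroring the writeup's note that saturation level is considered.
    saturation = np.clip((r - np.maximum(g, b)) / 255.0, 0.0, 1.0)
    soaked_fraction = saturation[red_mask].sum() / gauze_pixels

    return SPONGE_CAPACITY_ML * soaked_fraction
```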
## Challenges I ran into
Initially, we wanted to receive input as a continuous video stream and use object recognition/AI. However, this proved to be difficult, especially because we did not have all the hardware we needed to build the kind of device we imagined. We decided instead to simplify our concept and use photos as input for now to demonstrate our idea.
## Accomplishments that I'm proud of
We're proud of how we initially attempted to solve our problem using our "big ideas," and of how, when we ran into obstacles, we adjusted our plan to still prove our concept in a simpler, earlier-stage form.
## What I learned
We learned how to analyze pictures and extract important information from them to be used elsewhere, i.e. analyzing a photo pixel by pixel. We also learned how to use external resources to supplement our prior knowledge when working on new challenges.
## What's next for The Blood Bot
We would love to have real-time video footage as the input for our program, making use of AI and object recognition. It would also be beneficial to have settings to change hospital standards of sponge sizes and corresponding volume values. Finally, we want our device to also consider values from other blood volume sources, like the suction collection container, so that the number outputted is a comprehensive total of blood volume lost.
---
## Inspiration
Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med!
## What it does
This web app changes the logistics involved when visiting people in the hospital. Our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which connects the family with the medical team assigned to their loved one and provides resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit, since we know these small details can be frustrating during a hospital visit. Lastly, we have a patient update screen and a login screen; both run on a database we set up, which populates the information on the patient updates screen and validates the login data.
## How we built it
We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project.
## Challenges we ran into
We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion around redirect challenges, we had to completely change course about halfway in :( In the end, we learned a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left.
## Accomplishments that we're proud of
We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting.
## What we learned
We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha!
## What's next for EZ-Med
The next step for EZ-Med is to iron out all the bugs and have it fully functioning.
---
# Check out our [slides](https://docs.google.com/presentation/d/1K41ArhGy6HgdhWuWSoGtBkhscxycKVTnzTSsnapsv9o/edit#slide=id.g30ccbcf1a6f_0_150) and come over for a demo!
## Inspiration
The inspiration for EYEdentity came from the need to enhance patient care through technology. Observing the challenges healthcare professionals face in quickly accessing patient information, we envisioned a solution that combines facial recognition and augmented reality to streamline interactions and improve efficiency.
## What it does
EYEdentity is an innovative AR interface that scans patient faces to display their names and critical medical data in real-time. This technology allows healthcare providers to access essential information instantly, enhancing the patient experience and enabling informed decision-making on the spot.
## How we built it
We built EYEdentity using a combination of advanced facial recognition and facial tracking algorithms and the new Snap Spectacles. The facial recognition component was developed using machine learning techniques to ensure high accuracy, while the AR interface was created using cutting-edge software tools that allow for seamless integration of data visualization in a spatial format. Building on the Snap Spectacles provided us with a unique opportunity to leverage their advanced AR capabilities, resulting in a truly immersive user experience.
## Challenges we ran into
One of the main challenges we faced was ensuring the accuracy and speed of the facial recognition system in various lighting conditions and angles. Additionally, integrating real-time data updates into the AR interface required overcoming technical hurdles related to data synchronization and display.
## Accomplishments that we're proud of
We are proud of successfully developing a prototype that demonstrates the potential of our technology in a real-world healthcare setting. The experience of building on the Snap Spectacles allowed us to create a user experience that feels natural and intuitive, making it easier for healthcare professionals to engage with patient data.
## What we learned
Throughout the development process, we learned the importance of user-centered design in healthcare technology. Communicating with healthcare professionals helped us understand their needs and refine our solution to better serve them. We also gained valuable insights into the technical challenges of integrating AR with real-time data.
## What's next for EYEdentity
Moving forward, we plan to start testing in clinical environments to gather more feedback and refine our technology. Our goal is to enhance the system's capabilities, expand its features, and ultimately deploy EYEdentity in healthcare facilities to revolutionize patient care.
---
## Inspiration 🌎
One of the challenges in our multicultural country is providing healthcare to non-verbal or non-English speaking patients.
Health care professionals often face a challenge when communicating with these patients. Bedside workers are not provided a translator and often rely on the patient's family for communication. This leads to miscommunication with the patient and misinformation from the family that may not be entirely truthful when answering questions involving medical history.
This also creates a problem when nurses are trying to fully inform their patients before receiving their consent for a treatment. Nurses are liable if they don't receive informed consent, and communicating with non-verbal or non-English speaking patients can become increasingly stressful for this reason.
Our team wanted to address all of these problems while improving the efficiency and quality of life for these hard-working, understaffed professionals.
## What does it do 🤔
Our application provides a system to help healthcare workers communicate with non-verbal and non-English speaking patients in a user-friendly manner, using real-time transcription and translation, graphical pain indicators, and visual symptom assessment surveys.
## How we built it 🔨
Using AssemblyAI + Google Translate API we created a real-time transcription and translating system that can cross the language barrier between the patient and the healthcare worker.
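As a sketch of how the translation half of that pipeline can be wired up, the snippet below translates each finalized transcript chunk with the Google Cloud Translate client; the `on_transcript` hook is an assumed stand-in for the callback that a real-time transcriber such as AssemblyAI's would invoke:

```python
from google.cloud import translate_v2 as translate

translate_client = translate.Client()  # needs GOOGLE_APPLICATION_CREDENTIALS set

def on_transcript(text: str, target_language: str = "es") -> str:
    """Assumed hook: called with each finalized transcript chunk
    from the real-time transcriber (AssemblyAI in the actual project)."""
    result = translate_client.translate(text, target_language=target_language)
    subtitle = result["translatedText"]
    # In the real app this would be pushed to the browser as a subtitle.
    print(subtitle)
    return subtitle
```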
Interactive healthcare diagrams made using HTML and JavaScript designed to be simple and user-friendly were constructed to help patients visually communicate with professionals quickly.
All of the data is stored for later use in a secure hospital database to help keep track of patients' progress and make for easy data sharing between shifts of nurses.
## Challenges we ran into 🔎
Configuring the translations API and incorporating it into a browser environment proved to be very difficult when developing the backend. After hours of reading documentation, consulting mentors, and trying different approaches we finally got these tools to work seamlessly for our subtitle generator.
## Accomplishments that we're proud of 💪
We were able to implement real-time captioning using AssemblyAI and translation using the Google Translate API. We are also proud that we made a fully functioning web application in only 36 hours without the use of a framework. We think our program can provide real benefits to the healthcare industry.
## What we learned 🧠
We all learned how to use AssemblyAI and some of us learned JavaScript for the first time. We got to build on our UI development skills and refine our knowledge of databases.
## Looking Forward ⏩
We plan to implement foreign-to-English translation to improve communication between the patient and nurse. With more time, we would have added functionality for nurses to customize the symptoms questionnaire and patient needs menu to further improve the user experience.
---
## Inspiration
Amid the fast-paced rhythm of university life at Waterloo, one universal experience ties us all together: the geese. Whether you've encountered them on your way to class, been woken up by honking at 7 am, or spent your days trying to bypass flocks of geese during nesting season, the geese have established themselves as a central fixture of the Waterloo campus. How can we turn the staple bird of the university into an asset? Inspired by the quintessential role the geese play in campus life, we built an app to integrate our feathered friends into our academic lives. Our app, Goose on the Loose, allows you to take pictures of geese around the campus and turn them into your study buddies! Instead of being intimidated by the foul fowl, we can now all be friends!
## What it does
Goose on the Loose allows the user to "capture" geese across the Waterloo campus and beyond by snapping a photo using their phone camera. If there is a goose in the image, it is uniquely converted into a sprite added to the player's collection. Each goose has its own student profile and midterm grade. The more geese in a player's collection, the higher each goose's final grade becomes, as they are all study buddies who help one another. The home page also contains a map where the player can see their own location, as well as locations of nearby goose sightings.
## How we built it
This project is made using Next.js with TypeScript and TailwindCSS. The frontend was designed using TypeScript React components and styled with TailwindCSS. MongoDB Atlas was used to store various data across our app, such as goose data and map data. We used the React Google Maps library to integrate the Google Maps display into our app. The player's location data is retrieved from the browser. Cohere was used to help generate names and quotations assigned to each goose. OpenAI was used for goose identification as well as for converting the physical geese into sprites. All in all, we used a variety of different technologies to power our app, many of which we were beginners to.
## Challenges we ran into
We were very unfamiliar with Cohere and found ourselves struggling to use some of its generative AI technologies at first. After playing around with it for a bit, we were able to get it to do what we wanted, and this saved us a lot of head pain.
Another major challenge was getting the camera window to display properly on a smartphone. While it worked completely fine on a computer, only a fraction of the window would display on the phone, and this really harmed the user experience of our app. After hours of struggle, debugging, and thinking, we were able to fix this problem, and now our camera window is very functional and polished.
One completely unexpected challenge we went through was the files on one of our computers corrupting. This caused us HOURS of headache, and we spent a lot of effort identifying and rectifying the problem. What made it worse was that we were using Microsoft VS Code Live Share, with that computer happening to be the host. This was a major setback in our initial development timeline, and we were absolutely relieved to finally figure out and solve this problem.
A last-minute issue that we discovered had to do with our Cohere API usage. Since the prompt did not always generate a response within the required bounds, we looped it until the output landed within the requirements. We fixed this by setting a max limit on the number of tokens that could be used per response (see the sketch below).
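A minimal sketch of that bounded retry loop might look like the following, using the `generate` endpoint of Cohere's Python SDK; the prompt, the validity check, and the fallback name are assumptions for illustration:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def generate_goose_name(max_attempts: int = 5) -> str:
    """Retry generation until the response fits our bounds, with hard caps
    on attempts and on tokens per response so a bad prompt can't loop forever."""
    prompt = "Give a funny one-or-two-word name for a campus goose:"
    for _ in range(max_attempts):
        response = co.generate(prompt=prompt, max_tokens=8)  # token cap
        name = response.generations[0].text.strip()
        if 0 < len(name) <= 24:  # the "required bounds" (assumed here)
            return name
    return "Goose"  # safe fallback
```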
One final issue that we ran into was the Google Maps API. For some reason, we kept running into a problem where the map would force its centre to be where the user was located, effectively prohibiting the user from being able to view other areas of the map.
## Accomplishments that we're proud of
During this hacking period, we built long-lasting relationships and an even more amazing project. There were many things throughout this event that were completely new to us: various APIs, frameworks, libraries, experiences, and most importantly, the sleep deprivation. We are extremely proud to have been able to construct, for the very first time, a mobile-friendly website developed using Next.js, TypeScript, and Tailwind. These were all entirely new to many of our team, and we learned a lot about full-stack development throughout this weekend. We are also proud of our beautiful user interface. We were able to design extremely funny, punny, and visually appealing UIs, despite this being, for most of us, our first time working on such things. Most importantly of all, we are proud of our perseverance; we never gave up throughout the entire hacking period, despite all of the challenges we faced, especially the stomach aches from staying up for two nights straight. This whole weekend has been an eye-opening experience, one that will always live in our hearts and remind us why we should be proud of ourselves whenever we are working hard.
## What we learned
1. We learned how to use many new technologies that we had never laid eyes on before.
2. We learned of a new study spot in E7 that is open to any students of UWaterloo.
3. We learned how to problem solve and deal with problems that affected the workflow; namely those that caused our program to be unable to run properly.
4. We learned that the W Store is open on weekends.
5. We learned one another's stories!
## What's next for GooseOnTheLoose
In the future, we hope to implement more visually captivating transitional animations which will really enhance the UX of our app. Furthermore, we would like to add more features surrounding the geese, such as having a "playground" where the geese can interact with one another in a funny and entertaining way.
---
# CareSync
## Inspiration
The idea for **CareSync** was born out of the need to address the healthcare challenges faced by underserved communities, especially in third-world countries. We were inspired by the potential of **technology to bridge gaps** in healthcare access, improve patient understanding of their conditions, and assist overworked doctors by automating the report generation process. We wanted to create a solution that would empower doctors and make medical information more accessible for patients.
## What it does
**CareSync** is a virtual doctor’s assistant that uses **voice recognition technology** to generate medical reports based on a doctor's spoken input. The system then automatically **simplifies complex medical jargon** into patient-friendly language. It also supports **multi-language translations** and cultural sensitivity to ensure the information is clear and easily understood by patients, regardless of their background. In addition, CareSync provides **customizable medical report templates** and highlights key details such as dosages and lifestyle recommendations.
## How we built it
We built CareSync using a combination of **Speech-to-Text APIs** for voice recognition and **Natural Language Processing (NLP)** to simplify medical terms. Our system was trained on **medical vocabulary datasets** to ensure it accurately captures and transcribes clinical information. Additionally, we integrated **multi-language support** to cater to non-English speaking regions. We created **custom templates** for medical reports to ensure flexibility and ease of use for healthcare providers.
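As a toy illustration of the jargon-simplification step, the sketch below does a plain glossary lookup; the real system presumably relies on a trained NLP model rather than a hand-written dictionary, and the glossary entries here are invented examples:

```python
import re

# Tiny illustrative glossary; the real system would use an NLP model
# trained on medical vocabulary, not a hand-written dictionary.
GLOSSARY = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "bid": "twice a day",
}

def simplify_report(text: str) -> str:
    """Replace known jargon terms with patient-friendly wording."""
    for term, plain in GLOSSARY.items():
        text = re.sub(rf"\b{re.escape(term)}\b", plain, text, flags=re.IGNORECASE)
    return text

print(simplify_report("Hypertension noted; take lisinopril BID."))
# -> "high blood pressure noted; take lisinopril twice a day."
```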
## Challenges we ran into
* **Accurate Voice Recognition**: Ensuring that the voice recognition system accurately captures medical terms, especially in various accents or local dialects, was a key challenge.
* **Medical Term Simplification**: Striking the right balance between simplifying medical jargon and retaining its accuracy was difficult. We needed to ensure that the simplifications were still medically sound.
* **Cultural Sensitivity**: Adjusting the language and phrasing to be culturally relevant, while maintaining clarity, required ongoing refinement.
* **Multi-language Support**: Incorporating a variety of languages, especially those with limited technical resources, posed a significant challenge.
## Accomplishments that we're proud of
* Successfully integrating **medical-specific voice recognition** that accurately captures complex terms.
* Building an **NLP-based system** that simplifies complex medical jargon while maintaining clarity and accuracy.
* Implementing **multi-language support** for local communities, making CareSync accessible to a broader audience.
* Developing **customizable report templates** that streamline the process for doctors while improving the patient experience.
## What we learned
We learned that building technology for healthcare requires **precision, sensitivity, and user accessibility**. We gained a deeper understanding of **NLP** and how it can be leveraged to simplify complex terminology. Additionally, we learned how to integrate **cultural and language considerations** into technology to make it truly accessible for all.
## What's next for CareSync
Next, we plan to expand CareSync by adding **real-time doctor-patient interaction** where doctors can directly dictate patient notes during consultations. We also aim to further improve the **accuracy of medical term recognition** and add **more languages** to our platform. Our goal is to partner with healthcare organizations in developing countries to deploy CareSync and start making an impact in underserved communities.
---
## The Gist
We combine state-of-the-art LLM/GPT detection methods with image diffusion models to detect AI-generated video with 92% accuracy.
## Inspiration
As image and video generation models become more powerful, they pose a strong threat to traditional media norms of trust and truth. OpenAI's SORA model, released in the last week, produces extremely realistic video to fit any prompt and opens up pathways for malicious actors to spread unprecedented misinformation regarding elections, war, etc.
## What it does
BinoSoRAs is a novel system designed to authenticate the origin of videos through advanced frame interpolation and deep learning techniques. This methodology is an extension of the state-of-the-art Binoculars framework by Hans et al. (January 2024), which employs dual LLMs to differentiate human-generated text from machine-generated counterparts based on the concept of textual "surprise".
BinoSoRAs extends this idea to the video domain by utilizing **Fréchet Inception Distance (FID)** to compare the original input video against a model-generated video. FID is a common metric which measures the quality and diversity of images using an Inception v3 convolutional neural network. We create the model-generated video by feeding the suspect input video into a **FLAVR** fast frame interpolation model, which interpolates every 8 frames given start and end reference frames. We show that this interpolated video is more similar (i.e. "less surprising") to authentic video than to artificial content when compared using FID.
The resulting FID + FLAVR two-model combination is an effective framework for detecting video generation such as that from OpenAI's SoRA. This innovative application enables a root-level analysis of video content, offering a robust mechanism for distinguishing between human-generated and machine-generated videos. Specifically, by using the Inception v3 and FLAVR models, we are able to look deeper into shared training data commonalities present in generated video.
## How we built it
Rather than simply analyzing the outputs of generative models, a common approach for detecting AI content, our methodology leverages patterns and weaknesses that are inherent to the common training data necessary to make these models in the first place. Our approach builds on the **Binoculars** framework developed by Hans et al. (Jan 2024), which is a highly accurate method of detecting LLM-generated tokens. Their state-of-the-art LLM text detector makes use of two assumptions: first, simply "looking" at text of unknown origin is not enough to classify it as human- or machine-generated, because a generator aims to make differences undetectable. Second, *models are more similar to each other than they are to any human*, in part because they are trained on extremely similar massive datasets. The natural conclusion is that an observer model will find human text very *perplexing* and surprising, while an observer model will find generated text to be exactly what it expects.
We used the Fréchet Inception Distance between the unknown video and the interpolated generated video as a metric to determine if video is generated or real. FID embeds each frame using Inception v3, the top-performing classifier trained to recognize 1,000 object classes, and fits a Gaussian distribution to the features of each video. After computing these features for every frame in the unknown video and the interpolated video, FID calculates the Fréchet distance between the two Gaussian distributions, a high-dimensional measure of similarity. FID has been previously shown to correlate extremely well with human recognition of images, as well as to increase as expected with visual degradation of images.
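For reference, the standard closed-form FID between the two Gaussians fitted to Inception v3 features can be computed as in the sketch below; feature extraction is assumed to have happened already, one row per frame:

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Closed-form FID between Gaussians fitted to two sets of
    Inception-v3 feature vectors (one row per video frame)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny
        covmean = covmean.real     # imaginary parts; drop them

    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```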
We also used the open-source model **FLAVR** (Flow-Agnostic Video Representations for Fast Frame Interpolation), which is capable of single shot multi-frame prediction and reasoning about non-linear motion trajectories. With fine-tuning, this effectively served as our generator model, which created the comparison video necessary to the final FID metric.
With a FID-threshold-distance of 52.87, the true negative rate (Real videos correctly identified as real) was found to be 78.5%, and the false positive rate (Real videos incorrectly identified as fake) was found to be 21.4%. This computes to an accuracy of 91.67%.
## Challenges we ran into
One significant challenge was developing a framework for translating the Binoculars metric (Hans et al.), designed for detecting tokens generated by large-language models, into a practical score for judging AI-generated video content. Ultimately, we settled on our current framework of utilizing an observer and generator model to get an FID-based score; this method allows us to effectively determine the quality of movement between consecutive video frames through leveraging the distance between image feature vectors to classify suspect images.
## Accomplishments that we're proud of
We're extremely proud of our final product: BinoSoRAs is a framework that is not only effective, but also highly adaptive to the difficult challenge of detecting AI-generated videos. This type of content will only continue to proliferate across the internet as text-to-video models such as OpenAI's SoRA are released to the public: in a time when anyone can fake videos effectively with minimal effort, these kinds of detection solutions and tools are more important than ever, *especially in an election year*.
BinoSoRAs represents a significant advancement in video authenticity analysis, combining the strengths of FLAVR's flow-free frame interpolation with the analytical precision of FID. By adapting the Binoculars framework's methodology to the visual domain, it sets a new standard for detecting machine-generated content, offering valuable insights for content verification and digital forensics. The system's efficiency, scalability, and effectiveness underscore its potential to address the evolving challenges of digital content authentication in an increasingly automated world.
## What we learned
This was the first-ever hackathon for all of us, and we all learned many valuable lessons about generative AI models and detection metrics such as Binoculars and Fréchet Inception Distance. Some team members also got new exposure to data mining and analysis (through data-handling libraries like NumPy, PyTorch, and Tensorflow), in addition to general knowledge about processing video data via OpenCV.
Arguably more importantly, we got to experience what it's like working in a team and iterating quickly on new research ideas. The process of vectoring and understanding how to de-risk our most uncertain research questions was invaluable, and we are proud of our teamwork and determination that ultimately culminated in a successful project.
## What's next for BinoSoRAs
BinoSoRAs is an exciting framework that has obvious and immediate real-world applications, in addition to more potential research avenues to explore. The aim is to create a highly-accurate model that can eventually be integrated into web applications and news articles to give immediate and accurate warnings/feedback of AI-generated content. This can mitigate the risk of misinformation in a time where anyone with basic computer skills can spread malicious content, and our hope is that we can build on this idea to prove our belief that despite its misuse, AI is a fundamental force for good.
---
## Inspiration
We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which is an integral part of their education. With such learning disabilities, learning a new language would be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered toward these individuals.
## What it does
ReadRight offers interactive language lessons but with a unique twist. It reads the prompt out to the user, as opposed to displaying it on the screen for the user to read and process themselves. Then, once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way, individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, JavaScript, HTML, and CSS.
For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM.
Finally, for user authentication, we made use of Firebase.
## Challenges we faced + What we learned
When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge. Third-party integration took us significant time to test and debug.
Secondly, we struggled with generating prompts for the user to repeat, and with using AI to implement that.
## Accomplishments that we're proud of
This was the first time many of our members had integrated AI into an application we were developing, so it was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay.
We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
---
## Inspiration
Viral content, particularly copyrighted material and deepfakes, has huge potential to be widely proliferated with Generative AI. This impacts artists, creators and businesses, as for example, copyright infringement causes $11.5 billion in lost profits within the film industry annually.
As students who regularly come across copyrighted material on social media, we know that manual reporting by users is clearly ineffective, and this problem lends itself well to the abilities of AI agents. A current solution by companies is to employ people to search for and remove content, which is time consuming and expensive. We are keen to leverage automatic detection through our software, and also serve individuals and businesses.
## What it does
PirateShield is a SaaS solution that automatically detects videos that infringe a copyright owned by a user. We deploy AI agents to search online and flag content using semantic search. We also build agents to scrape this content and classify whether it is pirated, using comparisons to copyright licenses on YouTube. Our prototype focuses on the TikTok platform.
## How we built it
Our platform includes AI agents built on Fetch.ai to perform automatic search and classification. This is split into a retrieval feature with semantic search, and a video classification feature. Our database is built with MongoDB to store videos and search queries.
Our frontend uses data visualisation to provide an analytics dashboard for the rate of True Positive classifications over time, as well as rates of video removal.
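A minimal sketch of the retrieval step might look like the following; the `embed` function here is a toy stand-in (hashed character trigrams) for whatever semantic embedding model the agents actually call, and the threshold is an assumption:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for the real embedding model: hashed character trigrams.
    The actual agents would call a proper semantic embedding model."""
    vec = np.zeros(dim)
    t = text.lower()
    for i in range(len(t) - 2):
        vec[hash(t[i:i + 3]) % dim] += 1.0
    return vec

def flag_candidates(query: str, captions: dict[str, str], threshold: float = 0.25):
    """Return (video_id, score) pairs whose captions look semantically
    close to the copyright owner's query, highest score first."""
    q = embed(query)
    q = q / (np.linalg.norm(q) + 1e-9)
    flagged = []
    for video_id, caption in captions.items():
        v = embed(caption)
        score = float(q @ (v / (np.linalg.norm(v) + 1e-9)))  # cosine similarity
        if score >= threshold:
            flagged.append((video_id, score))
    return sorted(flagged, key=lambda pair: -pair[1])
```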
## Challenges we ran into
We initially considered many features for our platform, and had to distill this into a set of core prototype features. We were also initially unsure how we would implement the classification feature before deciding on using Youtube's database. Moreover, testing our agents end-to-end on queries involved much debugging!
## Accomplishments that we're proud of
As a team, we are proud of identifying this impactful problem to work on, and coordinating to implement a solution while meeting for the first time! In particular, we are proud of successfully building AI agents to search for and download videos, as well as classify them. We're excited to get our first users and deploy the remaining features of the platform.
## What we learned
The tools we used were Fetch.AI, Google APIs, fast RAG, and MongoDB. We upskilled quickly in these frameworks, and also gained a lot from the advice of mentors and workshop speakers.
---
## Inspiration
The post-COVID era has increased the number of in-person events and the need for public speaking. However, more individuals are anxious about publicly articulating their ideas, whether through a presentation for a class, a technical workshop, or preparation for their next interview. It is often difficult for audience members to catch the true intent of the presenter, hence key factors including tone of voice, verbal excitement and engagement, and physical body language can make or break a presentation.
A few weeks ago during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of the presenter's intentions, this can leave a bad taste in the audience's mouth.
As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while receiving real-time feedback.
## Purpose
Speech Master aims to provide a platform for practice presentations, with real-time feedback that captures details regarding your body language and verbal expressions. In addition, presenters can invite real audience members to practice sessions, where the audience members can provide real-time feedback that the presenter can use to improve.
Presentations are recorded and saved for later reference, so presenters can go back and review feedback from the ML models as well as from live audiences. Presenters get a user-friendly dashboard to cleanly organize their presentations and review them for upcoming events.
After each practice presentation, the data aggregated during the recording is processed to generate a final report. The final report includes the most common emotions expressed verbally, as well as times when the presenter's physical body language could be improved. The timestamps are also saved to show the presenter, with video playback, when the alerts arose and what might have caused them in the first place.
## Tech Stack
We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development. We deployed the application on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website using [Figma](https://www.figma.com/) and later styled it with [TailwindCSS](https://tailwindcss.com) to streamline the styling, allowing developers to put styling directly into the markup without the need for extra files. We maintained code formatting and linting via [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/). These tools were run on every commit by pre-commit hooks configured with [Husky](https://typicode.github.io/husky/).
[Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets allowing us to provide emotional analysis in near real-time to a presenter. The analysis would aid the presenter in depicting the various emotions with regard to tune, rhythm, and timbre.
Google and [Tensorflow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model, a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model, which allows for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of detecting 17 keypoints of a body at 30+ FPS on modern devices.
To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with Websockets to enable the messaging feature to allow audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and Github to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1).
## Challenges
It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs which are used for the Hume AI batch API calls, but the streaming API was more advantageous to provide real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API while also syncing audio with the webcam input.
We also worked with Tensorflow for the first time, an end-to-end machine learning platform. As a result, we faced many hurdles when trying to set up Tensorflow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS, which were not options for us. Attempting to convert the vanilla JS to React proved to be more difficult due to the complexities of execution order and React's useEffect and useState hooks. Eventually, a working solution was found; however, it can still be improved to boost performance and reduce bugs.
We originally wanted to use the Youtube API for video management where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, but once resolved, more issues arose. We were unable to find documentation regarding a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube as it did not provide a feasible solution and found Cloudinary.
## Accomplishments
We are proud of being able to incorporate machine learning into our application for a meaningful purpose. We did not want to reinvent the wheel by creating our own models, but rather use existing and incredibly powerful models to create new solutions. Although we did not hit all the milestones that we were hoping to achieve, we are still proud of the application that we were able to make in such a short amount of time, and of being able to deploy the project as well.
Most notably, we are proud of our Hume AI and Tensorflow integrations that took our application to the next level. Those 2 features took the most time, but they were also the most rewarding as in the end, we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real-time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely.
## What we learned
Each of the developers learned something valuable as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB and its ability to make sessions and general usage simple and user-friendly. Interfacing with CockroachDB barely had problems and was a powerful tool to work with.
We also expanded our knowledge with WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us new powers that WebSockets have both when used internally with the application and with external APIs and how they can introduce real-time analysis.
## Future of Speech Master
The first step for Speech Master will be to shrink the codebase. Currently, there is tons of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness: due to the lack of time, many components render strangely on different devices, throwing off the UI and potentially making the application unusable.
Once the current codebase is restructured, we would be able to focus on optimization, primarily of the machine learning models and audio/visual handling. Currently, multiple audio and video streams are being used to show webcam footage, stream footage to other viewers, and send data to Hume AI for analysis. By reducing the number of streams, we should see significant performance improvements, at which point we can upgrade our audio/visual streaming to something more appropriate and robust.
In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis, such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech. Given some initial parameters, Speech Master should be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and speech, ensuring that appropriate language is used for a given target audience, would drastically improve the way a presenter could prepare for a presentation.
---
## Inspiration
At companies that want to introduce automation into their pipeline, finding the right robot, the cost of a specialized robotics system, and the time it takes to program a specialized robot are all very expensive. We looked for solutions in general-purpose robotics, imagining how these types of systems could be "trained" for certain tasks and "learn" to become specialized robots.
## What it does
The Simon System consists of Simon, our robot that learns to perform the human's input actions. There are two "play" fields, one for the human to perform actions and the other for Simon to reproduce actions.
Everything starts with a human action. The Simon System detects human motion and records what happens. Then those actions are interpreted into actions that Simon can take. Then Simon performs those actions in the second play field, making sure to plan efficient paths taking into consideration that it is a robot in the field.
## How we built it
### Hardware
The hardware was really built from the ground up. We CADded the entire model of the two play fields, as well as the arches that hold the smartphone cameras, here at PennApps. The assembly of the two play fields consists of 100 individual CAD models and took over three hours to fully assemble, making full use of lap joints and mechanical advantage to create a structurally sound system. The LEDs in the enclosure communicate with the offboard field controllers using Unix domain sockets that simulate a serial port, allowing color changes that give a user info on what the state of the fields is.
Simon, the robot, was also constructed completely from scratch. At its core, Simon is an Arduino Nano. It utilizes a dual H-bridge motor driver for controlling its two powered wheels and an IMU for its feedback control system. It uses a MOSFET for controlling the onboard electromagnet for "grabbing" and "releasing" the cubes that it manipulates. On top of all that, the entire motion planning library for Simon was written entirely from scratch. Simon uses a Bluetooth module for communicating offboard with the path planning server.
### Software
There are four major software systems in this project. The path planning system uses a modified BFS algorithm that incorporates path smoothing, with realtime updates from the low-level controls to calibrate the path plan throughout execution. The computer vision system intelligently detects when updates are made to the human control field and acquires a normalized grid size for the play field using QR boundaries to create a virtual enclosure. The CV system also determines the orientation of Simon on the field as it travels around. Servers and clients are also instantiated on every part of the stack for communicating with low latency.
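As a sketch of the core planning idea, a plain BFS over an occupancy grid looks like the following; the real planner layers path smoothing and realtime recalibration on top of this, and the grid representation here is an assumption:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (True = blocked).
    The real planner adds path smoothing and live re-planning on top."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk parents back to the start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no path exists

print(bfs_path([[False, True], [False, False]], (0, 0), (1, 1)))
# -> [(0, 0), (1, 0), (1, 1)]
```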
## Challenges we ran into
* A lack of acrylic for completing the system, so we had to refactor a lot of our hardware designs to accommodate.
* Robot rotation calibration and path planning, due to very small inconsistencies in the low-level controllers.
* Building many things from scratch without using public libraries, because they aren't specialized enough.
* Dealing with smartphone cameras for CV, and figuring out how to coordinate across phones with similar aspect ratios but dissimilar resolutions.
* Some of the tools we used, such as Unix domain sockets, don't run on Windows, so we had to switch to using a Mac as our main system.
## Accomplishments that we're proud of
This thing works, somehow. We wrote modular code this hackathon and kept a solid, working GitHub repo that was actually utilized.
## What we learned
We got better at computer vision; this was our first real CV hackathon.
## What's next for The Simon System
More robustness.
---
## Inspiration
We were inspired by hard-working teachers and students. Although everyone was working hard, there was still a disconnect, with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and providing a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a Python script that bridges the gap between opening an audio stream and doing operations on it, while still serving the student a live version of the rest of the site.
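A sketch of what such a bridge can look like with Google Cloud Speech-to-Text's streaming API is below; the audio-chunk generator and parameter choices are assumptions, and the exact client call may differ by library version:

```python
from google.cloud import speech

def transcribe_stream(audio_chunks, sample_rate=16000):
    """Stream raw LINEAR16 audio chunks to Google Cloud Speech-to-Text
    and yield transcripts as they arrive. `audio_chunks` is assumed to be
    a generator over bytes from the lecture-hall microphone."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(
        config=config, interim_results=True  # partial results for live captions
    )
    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in audio_chunks
    )
    for response in client.streaming_recognize(streaming_config, requests):
        for result in response.results:
            yield result.alternatives[0].transcript
```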
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone will be on the same page about what is going on, and all that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around the constraints of those APIs to create a working product. We also learned more about the other technologies we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform, so that clicker data and other quiz material can be instantly graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well, so that people never miss a beat thanks to the live transcription.
---
## Inspiration
The inspiration for this project came from UofTHacks' Restoration theme and Varient's project challenge. The initial idea was to detect a given gene mutation in a given genetic testing report. This is an extremely valuable asset for the medical community, given the current global situation with the COVID-19 pandemic. As we can already see, misinformation and distrust in the medical community continue to grow, so we must try to leverage technology to solve this ever-expanding problem. One way Geneticheck can restore public trust in the medical community is by bridging the gap between confusing medical reports and the average person's medical understanding.
## What it does
Geneticheck is a smart software tool that allows patients, or parents of patients, with rare diseases to gather more information about their specific conditions and genetic mutations. The reports are scanned to find the gene mutation, and the software shows where the gene mutation is located on the original report.
Geneticheck also provides the patient with more information regarding their gene mutation; specifically, the associated diseases and phenotypes (related symptoms) they may now have. Given a gene mutation, the software searches through the Human Phenotype Ontology database and auto-generates a PDF report that lists all the necessary information a patient will need following a genetic test. The descriptions for each phenotype are given in layman's language, which allows the patient to understand the symptoms associated with the gene mutation, resulting in patients and loved ones being more observant of their status.
## How we built it
Geneticheck was built using Python and Google Cloud's Vision API. Other libraries were also explored, such as PyTesseract; however, these yielded lower gene-detection accuracy.
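As an illustration of the OCR step, the sketch below runs Google Cloud Vision text detection on a scanned report and pulls out an HGVS-style variant; the regex is an assumed guess at how mutations appear in the reports, not the project's actual pattern:

```python
import re
from google.cloud import vision

def find_mutation(report_path: str):
    """OCR a scanned genetic-test report and return the first HGVS-style
    variant found (e.g. "c.123A>G"), or None if nothing matches."""
    client = vision.ImageAnnotatorClient()
    with open(report_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    full_text = annotations[0].description if annotations else ""
    # Illustrative pattern for a coding-DNA substitution like c.123A>G.
    match = re.search(r"c\.\d+[ACGT]>[ACGT]", full_text)
    return match.group(0) if match else None
```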
## Challenges we ran into
One major challenge was that the project was initially designed in Python. Development in Python was chosen for its rapid R&D capabilities and the potential need to do image processing in OpenCV. As the project developed and the Google Cloud Vision API was deemed acceptable for use, moving to a web-based Python framework was judged too time-consuming. In the interest of time, the Python-based command-line tool had to be selected as the current basis of interaction.
## Accomplishments that we're proud of
One proud accomplishment of this project is the success rate of the overall algorithm: it successfully detects all 47 gene mutations in their related images. The other great accomplishment was the quick development of the PDF generation software, which expanded the capabilities and scope of the project to provide the end-user/patient with more information about their condition, ultimately restoring their faith in the medical field through better understanding and knowledge.
## What we learned
Topics learned include OCR in Python, optimizing images for text OCR with PyTesseract, PDF generation in Python, setting up Flask servers, and a lot about genetic data!
## What's next for Geneticheck
The next steps include porting the working algorithms to a web-based framework, such as React. Running the algorithms in JavaScript would allow web-based interaction, which is the best interactive format for the everyday person. Other steps are to gather more genetic test results and to provide treatment options in the reports as well.
---
## Inspiration
This project was inspired by Notion. As a person who gets distracted a lot, I decided to create a chrome extension that promotes productivity in a way that is easily accessible, and visually pleasing.
## What it does
The Chrome extension is named after Notion, a popular productivity website. In the extension, users can create to-do lists and look at weather and news stories in the area. Additionally, the user can customize the way the extension looks on their screen to ensure optimal usage.
## Challenges I ran into
An interesting new fact I learned about Chrome extensions this weekend is that by default they do not store any data. This became difficult when programming the to-do list: what's the point of a to-do list if it's blank every time you open it? (seems counter-intuitive) The same applied to the settings that change the visuals, as well as the location.
## Accomplishments that I'm proud of
While I'd only used one API before, I decided to use two different APIs to bring this project to life. Both APIs were significantly harder to use than my first experience, but after figuring it out once, it became almost like muscle memory.
## What I learned
lots:
* chrome.storage
* git
* make sure the script you want is actually sourced before spending 3 hours debugging why it isn't working
## What's next for Pocket Notion
SO MUCH!! With more time, I would've loved to implement more ways to customize the extension to each user's taste, and possibly add more productivity tabs such as calendars or reminders. Productivity could be further encouraged by linking the to-do list with Notion itself, or linking to things like Google Calendar. Additionally, the location input could've been made more fool-proof by using another API to auto-fill locations in the right format, or by using real-time location to default to the user's location (this would require more permissions).
---
View presentation at the following link: <https://youtu.be/Iw4qVYG9r40>
## Inspiration
During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group take medication for health-related reasons and, as a result, have certain external medications that cause negative drug interactions. More often than not, one of us is unable to take certain other medications (e.g. Advil, Tylenol) and even certain foods.
Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of the adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take.
## What it does
Our mobile application scans barcodes of medication and outputs to the user what the medication is, and any negative interactions that follow it to ensure that users don't experience negative side effects of drug mixing.
## How we built it
Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done in Java and stored in a CSV file for the API to access when requests were made. This API was then integrated with a Python backend and Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which then scans the barcode and sends the drug information back to the Flutter mobile application.
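As a rough sketch of that POST flow, the endpoint below assumes Flask plus the `pyzbar` library for barcode decoding; the route name, CSV columns, and scanner library are all assumptions, since the writeup doesn't specify them.

```python
# Hypothetical sketch of the scan endpoint, not the project's actual code.
import csv
from flask import Flask, request, jsonify
from PIL import Image
from pyzbar.pyzbar import decode

app = Flask(__name__)

# CSV produced by the Java preprocessing step (assumed column names).
with open("drugs.csv") as f:
    DRUGS = {row["code"]: row for row in csv.DictReader(f)}

@app.route("/scan", methods=["POST"])
def scan():
    image = Image.open(request.files["photo"].stream)
    codes = decode(image)  # read the barcode(s) in the photo
    if not codes:
        return jsonify(error="no barcode found"), 404
    drug = DRUGS.get(codes[0].data.decode())
    if drug is None:
        return jsonify(error="unknown drug"), 404
    return jsonify(name=drug["name"], interactions=drug["interactions"])
```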
## Challenges we ran into
The consistent challenge that we seemed to run into was the integration between our parts.
Another challenge we ran into was that one group member's laptop just imploded (and stopped working) halfway through the competition. Windows recovery did not pull through, and the member had to grab a backup laptop and set up their entire environment again for smooth coding.
## Accomplishments that we're proud of
During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end.
Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative.
## What we learned
Throughout the project, we gained valuable experience working with various skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. All of these were things we had only seen or heard of elsewhere, but learning and subsequently applying them was a new experience altogether. Additionally, we encountered various challenges throughout the project, and each one taught us a new outlook on software development. Overall, it was a great learning experience for us and we are grateful for the opportunity to work with such a diverse set of technologies.
## What's next for Medisafe
Being the baby app that it is, Medisafe has all three dimensions to expand on. Our main focus would be to integrate the features into the normal camera application or Google Lens. We realize that a standalone app for a seemingly minuscule function is disadvantageous, so having it as part of a bigger application would boost its usage. Additionally, we'd like to add the option to take an image from the gallery instead of fresh from the camera. Lastly, we hope to implement settings like a default drug to compare against, dosage dependency, etc.
|
losing
|
## Inspiration
The purchase of goods has changed drastically over the past decade, especially over the period of the pandemic. These online purchases come with a drawback, though: the buyer cannot see the product in front of them before buying it. This adds an element of uncertainty and undesirability to online shopping, costing the consumer time and the seller money in processing returns. In fact, a study showed that up to 40% of all online purchases are returned, and of those returned items just 30% are resold to customers, with the rest going to landfills or other warehouses.
With this app, we hope to reduce the number of returns by putting the object the user wants to buy in front of them before they buy it so that they know exactly what they are getting.
## What it does
Say you are looking to buy a TV but are not sure if it will fit or how it will look in your home. You would be able to open the Ecommerce ARena Android app and browse the TVs on Amazon (since that's where you were planning to buy the TV from anyway). You can see all the info that Amazon has on the TV, but then also use AR mode to view the TV in real life.
## How we built it
To build the app we used Unity, coding everything within the engine using C#. We used the native AR Foundation functions provided and then built upon them to get the app to work just right. We also incorporated echoAR into the app to manage all 3D models and keep the app lean and small in size.
## Challenges we ran into
Augmented reality development was new to all of us, as was the Unity engine; having to learn and harness these tools was difficult, and we ran into a lot of problems building toward the desired outcome. Another problem was how to get the models for each different product; we decided for this hackathon to limit our scope to two types of products, with the ability to easily keep adding more in the future.
## Accomplishments that we're proud of
We are really proud of the final product for being able to detect surfaces and use augmented reality capabilities super well. We are also really happy that we were able to incorporate web scraping to get live data from Amazon, as well as the echoAR cloud integration.
## What we learned
We learned a great deal about how much work it takes, and how truly amazing it is, to build augmented reality applications, even ones that look simple on the surface. A lot changed quickly, as this is still a new, bleeding-edge technology.
## What's next for Ecommerce ARena
We hope to expand its functionality to cover a greater variety of products, as well as support other vendors aside from Amazon, such as Best Buy and Newegg. We can also start looking into the process of releasing the app on the Google Play Store, and might even look into porting it to Apple products.
|
## Inspiration
We got our inspiration from the idea provided by Stanley Black & Decker: showing users what a product would look like in their real space, and at real size, using AR techniques. We chose to solve this problem because we also encounter it in our daily lives. When browsing websites to buy furniture or other space-taking products, the first questions we come up with are always these two: how much room would it take, and would it suit the overall arrangement?
## What it does
It provides customers with 3D models of products they might be interested in and enables them to place, arrange (move and rotate), and interact with these models at their exact size in real space, helping them decide whether or not to buy.
## How we built it
We used Apple's ARKit for iOS.
## Challenges we ran into
Plane detection; figuring out how to open and close the drawer; and building a 3D model ourselves from scratch.
## Accomplishments that we're proud of
We are able to open and close the drawer.
## What we learned
How to make AR animations.
## What's next for Y.Cabinet
We want to enable changing the size and color of a series/set of products directly in the AR view, without needing to go back and re-select. We also want to make the products look more realistic by finding a way to add light and shadow to them.
|
## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D API. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations, and getting the blocks to move and generate took some trouble. We also had to cut back on how many details we added to the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and worked efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldy gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to how Unity is used to render 3D environments, how assets and textures are edited in Blender, and what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as you're switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibilities are endless
|
partial
|
## Inspiration
We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.
## What it does
We set up a signal that, if performed in front of the camera, a machine learning algorithm can detect; it then notifies authorities that they should check out this location, whether to catch a potentially suspicious person or simply to be present and keep civilians safe.
## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Using the Innovation Factory API; the cameras being located very far away; the machine learning algorithms unfortunately being an older version that would not compile with our code; and finally the frame rate of the playback footage when running the algorithm through it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What I learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.
## What's next for Smart City SOS
Hopefully working with Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
|
## Inspiration
Natural disasters do more than just destroy property—they disrupt lives, tear apart communities, and hinder our progress toward a sustainable future. One of our team members from Rice University experienced this firsthand during a recent hurricane in Houston. Trees were uprooted, infrastructure was destroyed, and delayed response times put countless lives at risk.
* **Emotional Impact**: The chaos and helplessness during such events are overwhelming.
* **Urgency for Change**: We recognized the need for swift damage assessment to aid authorities in locating those in need and deploying appropriate services.
* **Sustainability Concerns**: Rebuilding efforts often use non-eco-friendly methods, leading to significant carbon footprints.
Inspired by these challenges, we aim to leverage AI, computer vision, and peer networks to provide rapid, actionable damage assessments. Our AI assistant can detect people in distress and deliver crucial information swiftly, bridging the gap between disaster and recovery.
## What it Does
The Garuda Dashboard offers a comprehensive view of current, upcoming, and past disasters across the country:
* **Live Dashboard**: Displays a heatmap of affected areas updated via a peer-to-peer network.
* **Drones Damage Analysis**: Deploy drones to survey and mark damaged neighborhoods using the Llava Vision-Language Model and generate reports for the Recovery Team.
* **Detailed Reporting**: Reports have annotations to classify damage types [tree, road, roof, water], human rescue needs, site accessibility [can the response team get to the site by land?], and suggested equipment dispatch [Cranes, Ambulance, Fire Control].
* **Drowning Alert**: The system can detect a drowning subject in the drone footage and immediately call rescue teams.
* **AI-Generated Summary**: Reports on past disasters include recovery costs, carbon footprint, and total asset/life damage.
## How We Built It
* **Front End**: Developed with Next.js for an intuitive user interface tailored for emergency use.
* **Data Integration**: Utilized Google Maps API for heatmaps and energy-efficient routing.
* **Real-Time Updates**: Custom Flask API records hot zones when users upload disaster videos.
* **AI Models**: Employed MSNet for real-time damage assessment on GPUs and Llava VLM for detailed video analysis.
* **Secure Storage**: Images and videos are stored in a Firebase database.
## Challenges We Faced
* **Model Integration**: Adapting MSNet with outdated dependencies required deep understanding of technical papers.
* **VLM Setup**: Implementing Llava VLM was challenging due to lack of prior experience.
* **Efficiency Issues**: Running models on personal computers led to inefficiencies.
## Accomplishments We're Proud Of
* **Technical Skills**: Mastered API integration, technical paper analysis, and new technologies like VLMs.
* **Innovative Impact**: Combined emerging technologies for disaster detection and recovery measures.
* **Complex Integration**: Successfully merged backend, frontend, and GPU components under time constraints.
## What We Learned
* Expanded full-stack development skills and explored new AI models.
* Realized the potential of coding experience in tackling real-world problems with interdisciplinary solutions.
* Balanced MVP features with user needs throughout development.
## What's Next for Garuda
* **Drone Integration**: Enable drones to autonomously call EMS services and deploy life-saving equipment.
* **Collaboration with EMS**: Partner with emergency services for widespread national and global adoption.
* **Broader Impact**: Expand software capabilities to address various natural disasters beyond hurricanes.
|
# RiskWatch
## Inspiration
## What it does
Our project allows users to report fire hazards with images to a central database. False images can be identified using machine learning (image classification). We also implemented methods for people to find fire stations near them, along with a way to contact law enforcement and fire departments for a speedy resolution. In return, users get compensation from insurance companies. The idea is relevant because of the large wildfires in California and other states.
## How we built it
We built the site from the ground up using ReactJS, HTML, CSS, and JavaScript. We also created a MongoDB database to hold some location data and retrieve it on the website. Python was used to connect the frontend to the database.
## Challenges we ran into
We initially wanted to create a physical hardware device using a Raspberry Pi 2 and a RaspiCamera. Our plan was to create a device that could utilize object recognition to classify general safety issues. We understood going in that performance would suffer greatly, but we thought 1-2 FPS would be enough. After spending hours compiling OpenCV, TensorFlow, and Protobuf on the Pi, it was worth it: it was surprising to achieve 2-3 FPS for object recognition using Google's SSDLite-MobileNetV2 (COCO) model. But unfortunately, the Raspberry Pi camera would disconnect often and eventually failed due to a manufacturing defect. Another challenge we faced in the final hours was that our original domain choice was mistakenly marked available by the registry when it was really taken, but we eventually resolved it by talking to a customer support representative.
## Accomplishments that we're proud of
We are proud of being able to quickly get back on track after we had issues with our initial hardware idea and repurpose it into a website. We were all relatively new to React, having quickly transitioned from Materialize CSS, which we had used at all the other hackathons we attended.
### Try it out (production)!
* clone the repository: `git clone https://github.com/dwang/RiskWatch.git`
* run `./run.sh`
### Try it out (development)!
* clone the repository: `git clone https://github.com/dwang/RiskWatch.git`
* `cd frontend` then `npm install`
* run `npm start` to run the app - the application should now open in your browser
* start the backend with `./run.sh`
## What we learned
Our group learned how to construct and manage databases with MongoDB, along with seamlessly integrating them into our website. We also learned how to make a website with React, making a chatbot, using image recognition and even more!
## What's next for us?
We would like to make it so that everyone uses our application to stay safe. Right now it is missing a few important features, but once we add those, RiskWatch could be the next big thing in information consumption.
Check out our GitHub repository at: <https://github.com/dwang/RiskWatch>
|
winning
|
## Inspiration
In a world where education has become increasingly remote and reliant on online platforms, we need human connection **more than ever**. Many students often find it difficult to express their feelings without unmuting themselves and drawing unwanted attention. As a result, teachers are unaware of how their students are feeling and if the material is engaging. This situation is especially challenging for students who struggle with communicating their feelings–such as individuals with autism, selective mutism, social anxiety, and more.
We want to help **bridge this gap** by creating a tool that will both enable students to express themselves with less effort and enable teachers to understand and respond to their overall needs.
We strongly believe in the importance of **accessibility in education** and supplementing human connection, because at the end of the day, humans are all social beings.
## What it does
Our application helps measure the general emotions of participants in a video meeting, displaying a stream of emojis representing up to **80 different emotions**. We periodically sample video frames from all participants with their cameras on at 10-second intervals, feeding this data into **Hume’s Expression Measurement API** to identify the most prominent expressions. From this, we generate a composite view of the general sentiment using a custom weighted algorithm.
Using this aggregated sentiment data, our frontend displays the most frequent emotions with their corresponding emojis on the screen. This way, hosts can adapt their teaching to the general sentiment of the classroom, while students can share how they’re feeling without having to experience the social anxiety that comes with typing a message in the chat or sharing a thought out loud.
## How we built it
We leveraged **LiveKit** to create our video conference infrastructure and **Vercel** to deploy our application. We also utilized **Supabase Realtime** as our communication protocol, forwarding livestream data from clients per room and saving that data to Supabase Storage.
Our backend, implemented with **FastAPI**, interfaces with the frontend to pull this data from Supabase and feed the captured facial data into Hume AI to detect human emotions.
The results are then aggregated and stored back into our Supabase table. Our frontend, built with **Next.js** and styled with **Tailwind CSS**, listens to real-time event triggers from Supabase to detect changes in the table.
From this, we’re able to display the stream of emotions in **near real-time**, finally delivering aggregated emotion data as a light-hearted fun animation to keep everyone engaged!
## Challenges we ran into
* Livekit Egress has limited documentation
* Coordination of different parts using Supabase Realtime
* Hume AI API
* First-time Frontenders
* Hosting our backend through Vercel (lots of config)
## Accomplishments that we're proud of
* Livekit real time streaming video conference
* Streaming video data to Hume via Supabase Realtime
* Emoji animation using Framer Motion
* Efficient scoring algorithm using heaps (see the sketch below)
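As referenced above, here is a minimal sketch of a heap-based scoring step, assuming per-frame emotion-score dictionaries (placeholder names, not Hume's actual response schema); `heapq.nlargest` keeps a k-sized heap instead of sorting all ~80 emotions.

```python
import heapq
from collections import defaultdict

def top_emotions(frames, k=3):
    """Aggregate per-frame emotion scores and return the k strongest."""
    totals = defaultdict(float)
    for frame in frames:            # frame: {"joy": 0.8, "confusion": 0.3, ...}
        for emotion, score in frame.items():
            totals[emotion] += score
    return heapq.nlargest(k, totals.items(), key=lambda kv: kv[1])

frames = [{"joy": 0.8, "confusion": 0.3}, {"joy": 0.5, "boredom": 0.6}]
print(top_emotions(frames, k=2))  # [('joy', 1.3), ('boredom', 0.6)]
```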
## What we learned
We learned how to use a lot of new tools and frameworks such as Next.js and Supabase as it was some of our members' first time doing full-stack software engineering. From our members all the way from SoCal and the East Coast, we learned how to ride the BART, and we all learned LiveKit for live streaming and video conferencing.
## What's next for Moji
We see the potential of this tool in a **wide variety of industries** and have other features in mind that we want to implement. For example, we can focus on enhancing this tool to help streamers with any kind of virtual audience by:
* Implementing a dynamic **checklist** that generates to-dos based on questions or requests from viewers.
This can benefit teachers in providing efficient learning to their students, or large entertainment streamers in managing a fast-moving chat. This can also be extended to eCommerce, as livestream shopping requires sellers to efficiently navigate their chat interactions.
* Using Whisper for **real-time audio speech recognition** to automatically check off answered questions.
This provides a hands-free way for streamers to meet their viewers’ requests without having to look extensively through chat. This is especially beneficial for the livestream shopping industry, as sellers are typically displaying items while reading messages.
* Using **RAG** to store answers to previously asked questions and using this data to answer any future questions.
This can be a great way to save time for streamers from answering repeated questions.
* Enhancing video recognition capabilities to identify more complex interactions and objects in real-time.
With video recognition, we can lean even heavier into the eCommerce industry, identifying what type of products sellers are displaying and providing a hands-free and AI enhanced way of managing their checklist of requests.
* Adding **integrations** with other streaming platforms to broaden its applicability and improve the user experience.
The possibilities are endless and we’re excited to see where Moji can go! We hope that Moji can bring a touch of humanity and help us all stay connected and engaged in the digital world.
|
## Inspiration
After experiencing multiple online meetings and courses, a constant ending question that arose from the meeting hosts was always a simple "How are we feeling?" or "Does everybody understand?" These questions are often followed by a simple nod from the audience, regardless of their true comprehension of the information presented. Ultimately, the hosts move on from the content since, as far as they know, the audience has understood it. However, for many of us, this is not the case because of the intense Zoom fatigue that overcomes us and ends up hindering our full comprehension of the material. It is extremely important to give teachers a more realistic sense of the "vibe" of the audience in the meeting, and thus improve the overall presentation method of future meetings.
## What it does
Our website plays a role in allowing meeting hosts to analyze the audience's receptiveness to the content. The host uploads any meeting recording as a .mp4 file on our website. Our application then outputs a table with each individual’s name and the most frequently occurring “emotion” for that individual during the meeting. Based on the results, the host knows how to acknowledge the group's concerns in the next meeting.
## How we built it
We utilized the Hume AI API to analyze the emotions of the individuals in the meeting. Using the data Hume AI provided, we ran an analysis on the average emotions each meeting participant felt throughout the meeting. That data was processed in Python and sent to our frontend using Flask. Our frontend was built using React.js. We stored the uploaded video in Google Cloud.
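A bare-bones sketch of that aggregation step might look like the following, assuming each analyzed frame has already been reduced to a (participant, emotion) pair; the real Hume AI response is far richer than this.

```python
from collections import Counter, defaultdict

def dominant_emotions(frames):
    """Map each participant to their most frequent emotion in the meeting."""
    counts = defaultdict(Counter)
    for participant, emotion in frames:   # e.g. ("Alice", "confusion")
        counts[participant][emotion] += 1
    return {p: c.most_common(1)[0][0] for p, c in counts.items()}

frames = [("Alice", "calm"), ("Alice", "confusion"), ("Alice", "confusion"),
          ("Bob", "joy")]
print(dominant_emotions(frames))  # {'Alice': 'confusion', 'Bob': 'joy'}
```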
## Challenges we ran into
Two of our team members had no experience in HTML, CSS, and JavaScript, so they spent a lot of time practicing web development. They faced issues along the way with the logic and implementation of the code for the user interface of our website. This was also our first time using the Hume AI API and our first time playing with Google Cloud.
## Accomplishments that we're proud of
Every team member learned from each other and took a great deal from this hackathon. We were a team that had fun hacking together and built a reasonable MVP. The highlight was definitely the learning, since for half the team it was their first hackathon and they had very little prior coding exposure.
## What we learned
Two of our members had very minimal experience with web development. By the end of the hackathon, they had learned how to develop a website and eventually built our final website using ReactJS. The other two team members, relatively new to AI, explored and applied the Hume AI API for the first time, and learned how it can be used to detect individual facial expressions and emotions from a video recording. We also successfully connected the frontend to the backend for the first time using Flask, and used Google Cloud Storage for the first time. This hackathon marked a lot of firsts for the team!
## What's next for BearVibeCheck
We hope to further improve upon the UI and make our algorithm faster and more scalable. Due to the nature of the hackathon a lot of pieces were hard-coded. Our goal is to provide this resource to Cal Professors who teach in a hybrid or fully online setting to allow them to gauge how students are feeling about certain content material.
|
## Inspiration
Everyone can relate to the scene of staring at messages on your phone and wondering, "Was what I said toxic?", or "Did I seem offensive?". While we originally intended to create an app to help neurodivergent people better understand both others and themselves, we quickly realized that emotional intelligence support is a universally applicable concept.
After some research, we learned that neurodivergent individuals find it most helpful to have plain positive/negative annotations on sentences in a conversation. We also think this format leaves the most room for all users to reflect and interpret based on the context and their experiences. This way, we hope that our app provides both guidance and gentle mentorship for developing the users' social skills. Playing around with Co:here's sentiment classification demo, we immediately saw that it was the perfect tool for implementing our vision.
## What it does
IntelliVerse offers insight into the emotions of whomever you're texting. Users can enter their conversations either manually or by taking a screenshot. Our app automatically extracts the text from the image, allowing fast and easy access. Then, IntelliVerse presents the type of connotation that the messages convey. Currently, it shows either a positive, negative or neutral connotation to the messages. The interface is organized similarly to a texting app, ensuring that the user effortlessly understands the sentiment.
## How we built it
We used a microservice architecture to implement this idea.
The technology stack includes React Native, while users' information is stored with MongoDB and queried using GraphQL. Apollo-server and Apollo-client are used to connect both the frontend and the backend.
The sentiment estimates are powered by custom Co:here finetunes, trained using a public chatbot dataset found on Kaggle.
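For illustration, a classification call against a custom finetune might look roughly like this; the finetune ID is a placeholder, and the exact surface of the `cohere` Python SDK varies by version, so treat this as a sketch rather than the project's actual code.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

def label_messages(messages):
    """Label each chat message as positive / negative / neutral."""
    response = co.classify(
        model="sentiment-finetune-id",  # placeholder finetune ID
        inputs=messages,
    )
    return [c.prediction for c in response.classifications]

print(label_messages(["That sounds great!", "Why would you even say that?"]))
```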
Text extraction from images is done using npm's text-from-image package.
## Challenges we ran into
We were unfamiliar with many of the APIs and dependencies that we used, and it took a long time to understand how to get the different components to come together.
When working with images in the backend, we had to do a lot of parsing to convert between image files and strings.
When training the sentiment model, finding a good dataset to represent everyday conversations was difficult. We tried numerous options and eventually settled with a chatbot dataset.
## Accomplishments that we're proud of
We are very proud that we managed to build all the features that we wanted within the 36-hour time frame, given that many of the technologies that we used were completely new to us.
## What we learned
We learned a lot about working with React Native and how to connect it to a MongoDB backend. When assembling everyone's components together, we solved many problems regarding dependency conflicts and converting between data types/structures.
## What's next for IntelliVerse
In the short term, we would like to expand our app's accessibility by adding more interactable interfaces, such as audio inputs. We also believe that the technology behind IntelliVerse has far-reaching possibilities in mental health, by helping users introspect on their thoughts or by supporting clinical diagnoses.
|
losing
|
## Inspiration
We were interested in data analytics and helping others through our knowledge of finance.
## What it does
AlgoTrainer is an educational tool for budding algorithmic traders and finance enthusiasts. Through AlgoTrainer, users are able to design and test algorithms to buy and sell stocks in simulations over past historical data (pulled from Nasdaq's API). After running the simulations, AlgoTrainer gives the user the option of viewing their results through different graphical representations. The user is able to view how their algorithm responds to different market conditions, as well as the changes to their portfolio over time.
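To illustrate the simulation loop (the actual app is written in JavaScript; this sketch is in Python purely for brevity), here is a toy backtest with an assumed moving-average rule standing in for a user-designed algorithm.

```python
def backtest(prices, cash=10_000.0, window=5):
    """Run a toy moving-average strategy over price history; return daily value."""
    shares, history = 0.0, []
    for day, price in enumerate(prices):
        if day >= window:
            avg = sum(prices[day - window:day]) / window
            if price < avg and cash > 0:        # dip below average: buy
                shares, cash = cash / price, 0.0
            elif price > avg and shares > 0:    # rise above average: sell
                cash, shares = shares * price, 0.0
        history.append(cash + shares * price)   # portfolio value to graph
    return history

print(backtest([10, 11, 12, 11, 10, 9, 10, 12, 13, 12]))
```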
## How we built it
We built it using HTML, CSS, and JavaScript, with data from the Nasdaq API.
## Challenges we ran into
We ran into the issue of figuring out how to manage all the data and visualize it.
## Accomplishments that we're proud of
Working really well together to complete an amazing product!
## What we learned
We learned a lot about working with JavaScript.
## What's next for AlgoTrainer
Add more data to educate users!
|
## Inspiration
As long-time YuGiOh fans, we wanted to leverage AI technologies to bring any YuGiOh card idea we wanted to life. Thus, the AI YuGiOh Card Generator was born!
## What it does
Our application uses the OpenAI image generation and text generation APIs to create art, effects, and other details for a YuGiOh card based on the card name given by the user, and compiles the results for display on our web interface.
## How we built it
We built our application by using React, Javascript, HTML, and CSS to create a frontend interface to access the OpenAI APIs and compose the resulting YuGiOh cards.
## Challenges we ran into
Our biggest challenges came in the forms of prompt engineering to get the best results we could from the OpenAI API, as well as using CSS to style the generated cards appropriately with all the card details.
## Accomplishments that we're proud of
Our biggest achievement is being able to create YuGiOh cards that have realistic effects, details, and artwork, that could be even used in a real game!
## What we learned
We learned how to implement APIs into a React project as well as do complex image composition using HTML and CSS.
## What's next for AI YuGiOh Card Generator
Next steps for our generator would be to implement generating other types of cards, such as spells, traps, and extra deck monsters!
|
## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses, and market updates is overwhelming. For most people, finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing, we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from a domain of the user's choosing to gather the latest updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies, allowing users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools, determining whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with Beautiful Soup. For our sentiment analysis we used TensorFlow, Pandas, and NumPy.
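A simplified version of the scrape-and-match step might look like this, assuming `requests` plus Beautiful Soup; the URL and the three-company list are placeholders for the real article feed and top-100 list.

```python
import requests
from bs4 import BeautifulSoup

COMPANIES = ["Apple", "Nvidia", "Tesla"]  # stand-in for the top-100 list

def company_mentions(url):
    """Count how often each tracked company appears in an article's paragraphs."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    text = " ".join(p.get_text() for p in soup.find_all("p"))
    return {name: text.count(name) for name in COMPANIES}

print(company_mentions("https://example.com/markets-news"))  # placeholder URL
```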
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scraping and sentiment analysis to provide real-time, actionable insights into stock performance. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources.
|
losing
|
## Inspiration
The inspiration comes from a perennial need for support from your friends and family, who might not always be available when you want to feel their support the most. We are meeting this need by allowing friends and family to send videos to app users (at any point in time), who will receive a lovely video "care package" after journaling a message about their day, how they feel, or anything at all.
## What it does
CarePak is an app that utilizes NLP and speech recognition technologies to match user-inputted text to video messages' transcripts. This matching relies on automatic sentiment detection and analysis to best serve the user's needs.
## How we built it
We built the back-end systems using sophisticated NLP and speech recognition technologies. User-inputted text was analyzed using sentiment analysis and keyword extraction, and was in turn matched with video transcripts (obtained using Google Cloud-based speech-to-text technology) using semantic similarity. User input (text and video streams) is passed through Flask and RESTful API-based server communication between the backend Python script, MySQL database, and frontend interface, so that algorithm results are reflected dynamically.
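As a rough stand-in for the matching step, the sketch below scores journal-to-transcript similarity with a plain cosine over word counts; the production system's sentiment and semantic-similarity models are of course more sophisticated.

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two texts' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def best_video(journal, transcripts):
    """Pick the video whose transcript best matches the journal entry."""
    return max(transcripts, key=lambda v: cosine_similarity(journal, transcripts[v]))

videos = {"mom.mp4": "hope your exams went well we are so proud of you",
          "hike.mp4": "look at this amazing mountain trail"}
print(best_video("my exams were stressful today", videos))  # -> "mom.mp4"
```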
## Challenges we ran into
One of the major issues we ran into on the back-end was perfecting the speech recognition in the presence of noise, as well as implementing a recommendation system that takes into account the scope of ambiguity present in human language. On the front-end, we learned Swift and SQL to implement a lightweight iOS-based app that interactively gauges user input and dynamically adjusts layout based on user need. Perhaps the biggest challenge was to figure out the puzzle of connecting multiple pieces across a variety of platforms to make everything run smoothly.
## Accomplishments that we're proud of
We're really proud of the way our speech recognition and NLP algorithms turned out, and we're super excited that our UI feels sleek and user-friendly. The SQL connection keeps the iOS app running smoothly as the backend and frontend communicate.
## What we learned
As many of our team members are international students, we are used to the pain of not seeing family and friends. We learned an incredible amount about how to turn an awesome, scalable product idea into reality, and we learned to base our decisions on users' true needs.
## What's next for CarePak
|
## Inspiration
As students, we constantly need to attend interviews for jobs or internships, present in class, and speak in front of large groups of people. We understand the struggle of feeling anxious and unprepared, and wished to create an application that would help us know whether we are improving and ready to present.
## What it does
Our application is built around tracking signals such as body language, eye movement, tone of voice, and intent of speech. Using these analyzers, Speakeasy summarizes and compiles results along with feedback on how to improve for your next presentation. It also grants the ability to learn how your presentations come across to others: you can see how clear and concise your speech is, edit your speech's grammar and spelling on the fly, and even generate a whole new speech if your intended audience changes.
## How we built it
We used LLMs from Cohere to help recognize speech patterns. We used prompt engineering to produce summary statements, grammar checks, and changes in tone to deliver different intents. The front end was built using React and Chakra UI, while the back end was created with Python. The UI was designed in Figma and transferred onto Chakra UI.
## Challenges we ran into
There were numerous challenges in the creation of this application, and with them we learned to quickly adapt and adjust our plans. Originally we planned to utilize React Native; however, we were faced with installation issues, OS complications, and connectivity problems between devices. We then pivoted to Flutter, but encountered many bugs within an open-source package. Finally, we moved to React, where we successfully created our application. For the backend, one of the biggest challenges was finding the appropriate prompts to generate the results we needed.
## Accomplishments that we're proud of
We are proud of our ability to adapt and react to a multitude of difficulties. As half of our team are first-time hackers, we are proud to have been able to produce a product that our whole team is satisfied with. This type of idea-making, app development process, and final product creation are experiences we had never encountered beforehand.
## What we learned
We learned how to think creatively in difficult and time-crucial situations. As well, how to design using Figma and Chakra UI. We also learned from each other as teammates on how to collaborate, generate ideas, and develop a product we are proud of.
## What's next for Speakeasy
We hope to both add to and improve our features, as well as make the app accessible for people around the world who speak different languages. We look forward to continuing to develop our tone analyzers and speech intent translators to increase accuracy, along with strengthening our body language trackers and connecting them with databases to better analyze the meaning behind movements and how they come across to audiences.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
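The start/stop detection can be sketched as a simple energy gate over audio chunks, as below; the threshold and silence budget are assumed values, and the app's actual voice-activity detection may work differently.

```python
import numpy as np

THRESHOLD = 0.02            # RMS level treated as speech (assumed)
SILENT_CHUNKS_TO_STOP = 20  # trailing quiet chunks that end an utterance

def detect_utterance(chunks):
    """Yield audio chunks from detected speech start until trailing silence."""
    speaking, silent = False, 0
    for chunk in chunks:    # chunk: 1-D float array in [-1, 1]
        rms = float(np.sqrt(np.mean(chunk ** 2)))
        if rms >= THRESHOLD:
            speaking, silent = True, 0
        elif speaking:
            silent += 1
            if silent > SILENT_CHUNKS_TO_STOP:
                return      # enough silence: hand the audio off to the AI
        if speaking:
            yield chunk
```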
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
losing
|
## Inspiration
We noticed that Google Translate, although it provides a tool to take photos and translate it, is sometimes inconvenient to use. We wanted to fix that and remove more communication barriers.
## What it does
You can take a photo of text and it reads the text and provides the option to translate it as well.
## How I built it
We used an open source mobile vision project we found online and then integrated Bing Translate. This was all done in Android Studio so it is built natively for the Android Platform.
## Challenges I ran into
We originally wanted to use Microsoft's Project Oxford for mobile vision; however, there were problems with the documentation along the way. We spent all but the last 5 hours trying to resolve this issue, talking to Microsoft engineers, and still couldn't figure it out.
## Accomplishments that I'm proud of
We learned how to use APIs. We hadn't done that before. Although we couldn't figure out how to get Project Oxford to work, it was a great feeling knowing we were trying so hard.
## What I learned
We learned how to use APIs, got an idea of how mobile vision works, and got better overall with the Android platform. Specifically, we practiced using different threads for different tasks and working with JSON objects.
## What's next for Mobile Vision Translator
We hope to add voice functionality and the ability to autodetect the language in a photo. That was our original aim, and implementing Project Oxford would have allowed us to do that.
|
## Inspiration
We were inspired by the theme of the hackathon: nostalgia. We thought that, aside from smell, sight would be the most nostalgic sense to target, and what is more nostalgic than a photo album? Because phones are more common than ever, we tried to capture the nostalgia of a photo album in a modern way by turning the camera roll into a personalized photo album.
## What It Does
Select the photos you want to be used for one or multiple albums, then select the categories you want to use as filters for the photo album, such as the time period the photo was taken, the city it was taken in, the perceived emotion of the image, and the ability to differentiate people. The app then generates a photo album from set templates that is custom-tailored to you.
## How We Built It
We used Expo Go for rapid mobile app development along with React Native. We also tried to use Auth0 for login authentication, and we managed to use Kintone for database storage. Then, using image metadata and Python's DeepFace library, the pictures were sorted to match the chosen categories, and a webpage was made to display the final photo album.
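The emotion-tagging step might look something like the sketch below, assuming Python's `deepface` package; depending on the version, `DeepFace.analyze` returns either a dict or a list of dicts, and the file path is a placeholder.

```python
from deepface import DeepFace

def perceived_emotion(photo_path):
    """Return the dominant perceived emotion for one photo."""
    result = DeepFace.analyze(img_path=photo_path, actions=["emotion"])
    if isinstance(result, list):   # newer versions wrap results in a list
        result = result[0]
    return result["dominant_emotion"]

print(perceived_emotion("camera_roll/beach_day.jpg"))  # e.g. "happy"
```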
## Challenges We Ran Into
All of us had Windows computers and iPhones, which turned out to be a really bad combination for mobile app development with Expo Go. Windows can't use TestFlight and iPhones can't download APKs, so it was near impossible to build and test development builds and features during the hackathon without a paid developer subscription, especially being new to app development. We also ran into issues with retrieving iOS photo metadata, which cannot be accessed externally due to iOS security. This proved to be a problem, as we could not use Expo Go to retrieve Apple-specific photo metadata; only Swift was found to work. As a workaround, we found that accessing the photos stored within the iCloud folder allowed us to read the Apple metadata, but as soon as a file was moved outside of the iCloud folder, the Apple metadata would be removed. This resulted in a huge stop in the automation and integration of our backend with the frontend app.
## Accomplishments That We're Proud Of
We are proud of the amount we learned through the immense amount of troubleshooting done this weekend, specifically about full app development and the interactions between the front and back ends, and of the skills and experience gained in project communication and planning before beginning development. We are also proud that both our app and our backend code worked as intended; it is unfortunate that we did not know about the language compatibility issues for iOS mobile development before finishing both and attempting to integrate them.
## What We Learned
Front-end technologies such as JavaScript, CSS, HTML, and React Native, along with Kintone, DeepFace, and Python. We also learned back-end frameworks such as Django and Flask.
However, one of the most important things we learned was the importance of very careful project planning when working on a team project with not a lot of time to work with. Ensuring that all of the relevant overhead research is done on the languages, frameworks and API's before beginning development.
## What's Next For MyPhotoAlbum App
For this app we have a lot in mind for the future. Some of these ideas include:
* Redeveloping the app in Swift when we get access to means to simulate the app.
* Fully integrating the app flow as intended.
* Adding more features such as our developed but un-integrated facial comparison feature which determines and filters by people that are within the selected photos, as well as using a vision model to determine similar and key background objects such as a beach or mountain.
* Adding a timeline filter which takes one person and arranges the photo album in chronological order, to show the development of the selected person.
* Adding Video functionality to the photo album.
|
## Inspiration
In a world in which we all have the ability to put on a VR headset and see places we've never seen, search for questions in the back of our mind on Google and see knowledge we have never seen before, and send and receive photos we've never seen before, we wanted to provide a way for the visually impaired to also see as they have never seen before. We take for granted our ability to move around freely in the world. This inspired us to enable others more freedom to do the same. We called it "GuideCam" because, like a guide dog, our application is meant to be a companion and a guide to the visually impaired.
## What it does
GuideCam provides an easy-to-use interface for the visually impaired to ask questions, either through a braille keyboard on their iPhone or by speaking out loud into a microphone. They can ask questions like "Is there a bottle in front of me?", "How far away is it?", and "Notify me if there is a bottle in front of me", and our application will talk back to them and answer their questions, or notify them when certain objects appear in front of them.
## How we built it
We have Python scripts running that continuously take webcam pictures from a laptop every 2 seconds and put them into a bucket. Upon user input like "Is there a bottle in front of me?", either from braille keyboard input on the iPhone or through speech (which is processed into text using Google's Speech API), we take the last picture uploaded to the bucket and use Google's Vision API to determine if there is a bottle in the picture. Distance calculation is done using the following formula: distance = ((known width of standard object) x (focal length of camera)) / (width in pixels of object in picture).
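Translated directly into code, that formula is a one-liner; the bottle width and focal length below are assumed example calibration values, not the ones we actually measured.

```python
KNOWN_WIDTH_CM = 7.0     # width of a standard bottle (assumed)
FOCAL_LENGTH_PX = 700.0  # camera focal length in pixels (assumed calibration)

def distance_cm(pixel_width):
    """Estimate distance to the object from its apparent width in pixels."""
    return (KNOWN_WIDTH_CM * FOCAL_LENGTH_PX) / pixel_width

print(f"{distance_cm(100):.1f} cm")  # a bottle 100 px wide -> 49.0 cm
```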
## Challenges we ran into
Trying to find a way to get the iPhone and a separate laptop to communicate was difficult, as well as getting all the separate parts of this working together. We also had to change our ideas on what this app should do many times based on constraints.
## Accomplishments that we're proud of
We are proud that we were able to learn to use Google's ML APIs, and that we were able to get both keyboard Braille and voice input from the user working, as well as both providing image detection AND image distance (for our demo object). We are also proud that we were able to come up with an idea that can help people, and that we were able to work on a project that is important to us because we know that it will help people.
## What we learned
We learned to use Google's ML APIs, how to create iPhone applications, how to get an iPhone and laptop to communicate information, and how to collaborate on a big project and split up the work.
## What's next for GuideCam
We intend to improve the braille keyboard to include a backspace, as well as support simultaneous key presses to record a single letter.
|
losing
|
## Inspiration
Currently, the insurance claims process is quite labour-intensive: a person has to investigate the car to approve or deny a claim. We aim to alleviate this cumbersome process and make it smooth and easy for policyholders.
## What it does
Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout.
## How we built it
The frontend is built with just static HTML, CSS, and JavaScript. We used Materialize CSS to achieve some of our UI mocks created in Figma. Conveniently, we also created our own "state machine" to make our web app more responsive.
## Challenges we ran into
> I've never done any machine learning before, let alone trying to create a model for a hackathon project. I definitely took quite a bit of time to understand some of the concepts in this field. *-Jerry*
## Accomplishments that we're proud of
> This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry*
## What we learned
> Attempting to do a challenge with very little description of what the challenge actually is asking for is like a toddler a man stranded on an island. *-Jerry*
## What's next for Quick Quote
Things that are on our roadmap to improve Quick Quote:
* Apply Google Analytics to track user movement and collect feedback to enhance our UI.
* Enhance our neural network model to enrich our knowledge base.
* Train our model with more evaluation data to give it more depth
* Include ads (mostly from auto companies).
|
## Inspiration
We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for implementing a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we call a distress signal, in case anyone feels unsafe in their current area.
## What it does
We defined a signal that, if performed in front of a camera, a machine learning algorithm can detect; the system then notifies authorities that they should check out the location, whether to catch a potentially suspicious individual or simply to be present and keep civilians safe.
## How we built it
First, we collected data from the Innovation Factory API and inspected the code carefully to understand what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses, to later be used in training our machine learning model. Eventually, due to compilation issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Challenges included using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms being an older version that would not compile with our code, and finally the frame rate of the footage playback when running the algorithm over it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What I learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.
## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
|
## 💡 Inspiration
You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind “Will I ever pass this course?”
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, students have faced many struggles. We created AcadeME to help students who struggle to pay attention in class, miss class, have a rough home environment, or just want to get ahead in their studies.
We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture.
Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos!
Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically
* Select/Highlight text from your online lectures
* Organize your notes with intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project: it sped up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation (see the sketch after this list).
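A minimal sketch of that chunk-and-parallelize idea, with a stand-in worker in place of the actual BART call through DCP:

```python
# Split a transcript into chunks and summarize them in parallel.
# summarize_chunk is a naive placeholder for the real BART model call.
from concurrent.futures import ProcessPoolExecutor

def summarize_chunk(chunk: str) -> str:
    # Placeholder "summary": keep only the first sentence of the chunk.
    return chunk.split(".")[0].strip() + "."

def summarize_lecture(transcript: str, chunk_size: int = 2000) -> str:
    chunks = [transcript[i:i + chunk_size]
              for i in range(0, len(transcript), chunk_size)]
    with ProcessPoolExecutor() as pool:  # chunks are processed in parallel
        summaries = pool.map(summarize_chunk, chunks)
    return " ".join(summaries)
```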
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, so we spent a lot of time debugging
* Working with the Google Chrome Extension API, which we had never worked with before.
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome Extensions, which was an entirely new concept for us.
* Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out!
* Working on a project where you can relate helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note Encryption
* Note Sharing Links
* A Distributive Quiz mode, for online users!
|
winning
|
# Py
Py is an interactive tool for new users of Python to learn what a line of code does. The tool allows you to enter a single line of code and get a human-readable debrief. It's so easy it's practically thrown in your face!
## Installing
Clone the repository and have Python v3.5.1 or greater installed.
## Running
Navigate to the base directory of the repository, and to run use the following command:
`python app.py`
This will open the UI for the app.
## UI
[UI screenshot](https://postimg.org/image/5m1tdha9f/)
|
## Inspiration
We were inspired by esoteric languages such as Brainfuck, and by exotic syntax like that of Common Lisp, to invent a language that would cause its user to develop a debilitating headache after 10 minutes of coding.
## What it does
Pa!n is an interpreted programming language with one major caveat: it maps all operators to capital letters of the alphabet, and then rotates the alphabet one position left every time an operator is used. As an example, if one wishes to add numbers multiple times, first they would use 'A' (assuming no operators were used beforehand), then 'B', then 'C', etc. This creates a uniquely challenging programming experience, as the programmer has to keep track of the current offset, and adding or removing preceding lines of code forces one to modify all lines after them. Also, we inverted brackets so that ')' is instead an opening bracket, just to cause even more pa!n.
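To make the rotation concrete, here is a toy Python sketch of the rotating operator table (illustrative only; the real interpreter maps the full operator set across all capital letters):

```python
# Toy model of pa!n's rotating operator mapping.
OPERATIONS = ["add", "sub", "mul", "div"]  # assumed operator set for illustration

class OperatorTable:
    def __init__(self):
        self.offset = 0  # how many times the alphabet has rotated left

    def resolve(self, letter: str) -> str:
        """Map a capital letter to an operation, then rotate the alphabet."""
        index = (ord(letter) - ord("A") - self.offset) % len(OPERATIONS)
        self.offset += 1
        return OPERATIONS[index]

table = OperatorTable()
print(table.resolve("A"))  # "add"
print(table.resolve("B"))  # "add" again: one rotation has already happened
```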
## How we built it
We decided to start development in Python, because prototyping in Python is simple compared to other languages. To create pa!n, we began by concurrently developing an expression tree module, and a tokenizer/parser module. This allowed for faster development as each person focused on one part only. The two sides were then combined, so that the parser read a script and returned a complete expression tree.
## Challenges we ran into
Designing a text parser is incredibly difficult since there are many avenues for bugs and corner cases, and computers don't have the luxury of language processing and natural big picture thinking like humans do. As such, the parser we have been able to create in 1 1/2 days is very rudimentary, with little to no error handling. Also, while Python is very powerful for development, it comes with its own challenges. For instance, assignment to a list index would be most efficiently implemented with a generic reference to a mutable object. However, many objects in Python are immutable, such as integers. Thus, a list would need an explicit reference type that holds the list and the index. This became a limitation, and is why we don't have lists yet.
## Accomplishments that we're proud of
We are proud of the fact that we were able to come together and design a real programming language from scratch in 1 1/2 days. While parsing is flimsy and the language lacks many features, it makes for a foundation that opens a plethora of possibilities, and serves as a learning experience for everybody involved.
## What we learned
A major lesson from this is that error handling at the very beginning may be annoying, but it pays off in the end. This is because we designed the parser with the idea that error handling would be added later. The only problem is, now we're left with a parser that throws exceptions at so much as a missed bracket, and adding error handling in the time we had left was a daunting task, since it would require a whole new framework to catch and display the errors.
## What's next for pa!n
After this event, we will continue to develop the language and redesign the parser. Porting to C/C++ is the next logical step, for faster processing and detachment from Python's already high level libraries, and maybe even evolution into an actual practical language.
Built with love by
birb#2137
ICBk#5208
syrbor#1686
Find us on Discord!
|
## Inspiration
Self-motivation is hard. It’s time for a social media platform that is meaningful and brings a sense of achievement instead of frustration.
While various pro-exercise campaigns and apps have tried to inspire people, it is difficult to stay motivated with so many more comfortable distractions around us. Surge is a social media platform that helps solve this problem by empowering people to exercise. Users compete against themselves or new friends to unlock content that is important to them through physical activity.
True friends are formed through adversity, and we believe that users will form more authentic, lasting relationships as they compete side-by-side in fitness challenges tailored to their ability levels.
## What it does
When you register for Surge, you take an initial survey about your overall fitness, preferred exercises, and the websites you are most addicted to. This survey serves as the starting point from which Surge creates your own personalized challenges: run 1 mile to watch Netflix, for example. Surge links to your phone or IoT wrist device (Fitbit, Apple Watch, etc.) and, using its own Chrome browser extension, 'releases' content that is important to the user when they complete the challenges.
The platform is a 'mixed bag'. Sometimes users will unlock rewards such as vouchers or coupons, and sometimes they will need to complete the challenge to unlock their favorite streaming or gaming platforms.
## How we built it
Back-end:
We used Python Flask to run our web server locally, as we were familiar with it and it made communicating with our Chrome extension's AJAX calls easy. Our Chrome extension checks the URL of whatever webpage you are on against the URLs of locked sites for a given user. If the user has a URL locked, the Chrome extension will display their challenge instead of the original site at that URL. We used an ESP8266 (an Arduino-compatible board) with an accelerometer in lieu of an IoT wrist device, as none of our team members own one. We don't want an expensive wearable to be a barrier to our platform, so we might explore providing a low-cost fitness tracker to our users as well.
We chose Google's Firebase as our database for this project as it supports calls from many different endpoints. We integrated it with our Python and Arduino code and intended to integrate it with our Chrome extension as well; however, we ran into trouble doing that, so we used AJAX to send requests to our Flask server, which then acts as a middleman between the Firebase database and our Chrome extension (a minimal sketch of this lock-check endpoint follows below).
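A rough sketch of that lock-check endpoint, assuming an in-memory store and a route name that are illustrative stand-ins for the real Flask + Firebase setup:

```python
# Minimal lock-check endpoint the Chrome extension could query.
from flask import Flask, jsonify, request

app = Flask(__name__)

# user -> {locked URL: challenge}; in the real app this lives in Firebase.
LOCKS = {"alice": {"netflix.com": "Run 1 mile to unlock Netflix!"}}

@app.route("/check")
def check():
    user = request.args.get("user", "")
    url = request.args.get("url", "")
    challenge = LOCKS.get(user, {}).get(url)
    return jsonify({"locked": challenge is not None, "challenge": challenge})

if __name__ == "__main__":
    app.run()
```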
Front-end:
We used Figma to prototype our layout, and then converted to a mix of HTML/CSS and React.js.
## Challenges we ran into
Connecting all the moving parts: the IoT device to the database, to the Flask server, and to both the Chrome extension and the app front end.
## Accomplishments that we're proud of
Please see above :)
## What we learned
Working with Firebase and Chrome extensions.
## What's next for SURGE
Continue to improve our front end. Incorporate analytics to accurately identify the type of physical activity the user is doing. We would also eventually like to include analytics that gauge how easily a person is completing a task, to ensure the fitness level that they have been assigned is accurate.
|
losing
|
## Inspiration
After playing laser tag as kids and recently rediscovering it as adults but not having any locations near us to play, our team looked for a solution that wasn't tied down to a physical location. We wanted to be able to play laser tag anywhere, anytime!
## What it does
Quick Connect Laser Tag allows up to 20 players to pick up one of our custom-made laser blaster and sensor devices and engage in a fun game of laser tag across 2 different game modes!
Game Mode 1: Last One Standing
In Last One Standing, all players have 10 lives. The last player with remaining lives is the winner.
Game Mode 2: Time Deathmatch
In Time Deathmatch, a game duration is set, and the player with the most eliminations at the end of the game is the winner.
## How we built it
We 3D printed a housing for the laser blaster, which contains an ESP32, a laser transmitter, and a laser sensor. We designed the laser blasters to communicate with each other during gameplay over ESP-NOW peer-to-peer communication to relay point-scoring data. When the laser sensor detects a hit, it sends an acknowledgement to the blaster that made the hit in order to correctly assign points to players. We worked with an OLED display over I2C to show the user their remaining lives or points scored and the game time remaining, depending on the game mode.
## Challenges we ran into
To implement game mode two, we initially tried to use the HDK Android development board as a server that could communicate with the individual laser blasters to tally point totals; however, we were unable to get the board to work with WiFi over the eduroam network or to enable WiFi Direct to the laser blasters.
## Accomplishments that we're proud of
We're proud that when we were unable to work with the HDK Android development board, we pivoted to a different technology that enabled our blasters to communicate. We'd never used the ESP-NOW communication protocol, and we successfully used it to connect a theoretical limit of 20 blasters over a field range of 200 yards!
## What we learned
* ESP-NOW using MAC addresses
+ Asynchronously receiving packets
* 3D Printing
* Working with OLED displays
* Using lasers to trigger sensors over long distances
## What's next for Quick Connect Laser Tag
* Building more blasters to support more players
* Custom PCB
* Improving blaster housing
|
## Inspiration
During the pandemic, we found ourselves sitting down all day long in a chair, staring into our screens and stagnating away. We wanted a way for people to get their blood rushing and have fun with a short but simple game. Since we were interested in getting into Augmented Reality (AR) apps, we thought it would be perfect to have a game where the player has to actively move a part of their body around to dodge something you see on the screen, and thus Splatt was born!
## What it does
All one needs is a browser and a webcam to start playing the game! The goal is to dodge falling barrels and incoming cannonballs with your head, but you can also use your hands to "cut" down the projectiles (you'll still lose partial lives, so don't overuse your hand!).
## How we built it
We built the game using JavaScript, React, Tensorflow, and WebGL2. Horace worked on the 2D physics, getting the projectiles to fall and be thrown around, as well as working on the hand tracking. Thomas worked on the head tracking using Tensorflow and outputting the necessary values we needed to be able to implement collision, as well as the basic game menu. Lawrence worked on connecting the projectile physics and the head/hand tracking together to ensure proper collision could be detected, as well as restructuring the app to be more optimized than before.
## Challenges we ran into
It was difficult getting both the projectiles and the head/hand from the video on the same layer - we had initially used two separate canvases for this, but we quickly realized it would be difficult to communicate from one canvas to another without causing too many rerenders. We ended up using a single canvas, and after adjusting how we retrieved the coordinates of the projectiles and the head/hand, we were able to get collisions to work.
## Accomplishments that we're proud of
We're proud of how we divvied up the work and were able to connect everything together to get a working game. During the process of making the game, we were excited to get collisions working, since that was the biggest part of making our game complete.
## What we learned
We learned more about implementing 2D physics in JavaScript, how we could use Tensorflow to create AR apps, and a little bit of machine learning through that.
## What's next for Splatt
* Improving the UI for the game
* Difficulty progression (1 barrel, then 2 barrels, then 2 barrels and 1 cannonball, and so forth)
|
## Inspiration
Looking around at the younger generations can be saddening. Everyone is so attached to their phones, needing to be connected. Needing to update their status, needing to update their followers, needing to send questionable images on SnapChat. Being **so** connected can remove you from reality and all that it has to offer. If we'd just put down our phones for only a moment, we'd all see how awesome lasers are. They're like seriously cool.
## What it does
So we set out trying to make a file transfer mechanism that sends files via lasers. I know, COOL, RIGHT? Well, that didn't really work so we ended up making an instant messaging platform via lasers. Until that didn't work either so we made a "one-way, only really short sentences" transmitter via lasers.
## How we built it
Lasers.
## Challenges we ran into
Turns out laser-based file transfers are not as practical as you'd think. A+ for style points, but it takes about a minute to send a word. And the receiver has to at least be within line of sight and close enough that light dispersion doesn't affect the goods. There were so many challenges to this one. Syncing the clocks is a nightmare, reading and writing on the same serial port is tough, and Arduinos have less memory than that fish from Finding Nemo that has difficulties remembering things... what's her name again?
## Accomplishments that we're proud of
Lasers.
## What we learned
Lasers. And what they shouldn't be used for.
## What's next for Lazier Laser
Oh, this project is so retired.
|
partial
|
## Inspiration
We wanted to impress everyone with our amazing project and create a revolutionary tool for image identification!
## What it does
It identifies any uploaded pictures and describes them.
## How we built it
We built this project with tons of sweat and tears. We used the Google Vision API, Bootstrap, CSS, JavaScript, and HTML.
## Challenges we ran into
We couldn't find a way to use the API key. We couldn't link our HTML files with the stylesheet and the JavaScript file. We didn't know how to add drag-and-drop functionality. We couldn't figure out how to use the API in our backend. We also had to edit our video with a new video-editing app and watch a lot of tutorials.
## Accomplishments that we're proud of
The whole program works (backend and frontend). We're glad that we'll be able to make a change to the world!
## What we learned
We learned that Bootstrap 5 doesn't use jQuery anymore (the hard way). :'(
## What's next for Scanspect
The drag-and-drop function for uploading images!
|
## Inspiration
We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for implementing a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we call a distress signal, in case anyone feels unsafe in their current area.
## What it does
We defined a signal that, if performed in front of a camera, a machine learning algorithm can detect; the system then notifies authorities that they should check out the location, whether to catch a potentially suspicious individual or simply to be present and keep civilians safe.
## How we built it
First, we collected data from the Innovation Factory API and inspected the code carefully to understand what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses, to later be used in training our machine learning model. Eventually, due to compilation issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Challenges included using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms being an older version that would not compile with our code, and finally the frame rate of the footage playback when running the algorithm over it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What I learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.
## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
|
## Inspiration
## What it does
Generates relevant, realtime hashtags from just an image using Google Cloud's Vision API's image recognition technology for use on any social media platform.
## How we built it
The general process is: 1) send the selected image to multiple AI image-recognition service providers (Google Vision and Imagga); 2) the backend server receives the data from the providers and aggregates the results using our algorithm, which filters the "words" by context; 3) finally, all the selected "words" are transferred to the front-end and displayed accordingly in the app.
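As an illustration of step 2, a simplified aggregation might average per-provider confidence scores and keep only high-scoring words (the provider results here are stubbed, and the real filter also weighs context):

```python
# Merge label->confidence dicts from two providers into one ranked hashtag list.
def aggregate_labels(google_labels: dict, imagga_labels: dict,
                     threshold: float = 0.6) -> list:
    words = set(google_labels) | set(imagga_labels)
    scored = {w: (google_labels.get(w, 0) + imagga_labels.get(w, 0)) / 2
              for w in words}
    kept = sorted((w for w, s in scored.items() if s >= threshold),
                  key=lambda w: -scored[w])
    return ["#" + w for w in kept]

# Words both providers agree on score highest.
print(aggregate_labels({"beach": 0.95, "sky": 0.8}, {"beach": 0.9, "sand": 0.7}))
# ['#beach']
```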
## Challenges we ran into
Angular deployment to server was problematic due to some compression issue. Also, most of the social APIs that we were going to use (i.e. Facebook, Twitter, Instagram) were either limited, or were tied to deprecated services that hindered us from getting richer and more accurate data for the users.
## Accomplishments that we're proud of
We were able to come up with the idea and FOCUS on what mattered (the core function of the app) and what did not (using other APIs). This saved us time and allowed us to streamline our process of creating the app all the way from the front to the back. Also, linking up all the APIs, which were a first for us, was a real accomplishment.
## What we learned
We learned that the Google Vision API is really powerful. Having said that, it wasn't enough on its own to provide richer data with context, hence looking into Imagga as well. We also learned that however many times we plan things, time is always against us, and we need to manage it efficiently to produce a viable working product.
## What's next for InsTags
|
winning
|
## Inspiration
The intention of this project was to make use of The Weather Network API in a creative fashion. This was done using the developer tools provided with Amazon Alexa: we created a custom skill set that can be invoked using "Alexa, ask The Weather Network <command>". The inspiration for this project was the need for weather information in remote areas when on hiking, skiing, and similar trips. By making use of Amazon Alexa and Twilio, you can now request an SMS to your phone with the next week's weather information before leaving on your trip. This allows for a more interactive world while also addressing an issue both team members have experienced in the past.
## What it does
The skill set provides basic weather information for the current location, either now or on a future date. Additionally, the functionality was extended to provide recommendations on the type of transportation to use when travelling to work or school. The system can also send individuals an SMS with relevant weather information via a simple voice command.
### Command Types
* Current Weather Information - Example: What is the weather right now?
* Weather Information by date - Example: What is the weather in Kingston Ontario Canada on Thursday?
* Transportation Method - Example: Should I bike to work today?
* Text Message - Example: Send a text message to Mitchell with weather for Kingston Ontario Canada for tomorrow.
* Champions - Example: What team made the best use of The Weather Network API?
## How we built it
* Amazon Developer Console : <https://developer.amazon.com/>
+ Used Amazon Alexa Skills Kit to add new custom skills to Alexa for interacting with Amazon AWS Lambda instance.
* Amazon AWS Lambda : <https://console.aws.amazon.com/lambda/>
+ Used AWS Lambda function to host python code required for backend processing of the Alexa commands.
+ The python code is also responsible for calling The Weather Network API and interacting with Twilio.
* Twilio : <https://www.twilio.com>
+ Used the Twilio Python module for sending text messages to users with weather information (a minimal sketch follows below).
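A minimal sketch of the send step, with placeholder credentials and phone numbers (the real handler first builds the forecast text from The Weather Network API response):

```python
# Send a weather forecast as an SMS with the Twilio Python module.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def send_weather_sms(to_number: str, forecast: str) -> None:
    client.messages.create(
        to=to_number,          # e.g. "+16135550123"
        from_="+15005550006",  # placeholder Twilio number
        body=f"EchoWeather forecast: {forecast}",
    )
```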
## Challenges we ran into
We originally tried to use StdLib with Node.js as our backend, but ran into issues integrating REST API calls, compounded by the team's inexperience with Node.js. Minor functionality was possible, but a decision was made to move to AWS Lambda using Python for all backend processing. Both team members have previous Python experience, and we found that generally more libraries were available for Python than for Node.js for external integrations.
An additional problem we ran into while trying to incorporate multimedia messages was the use of Python libraries across different Python versions. The code ran successfully on our team's local machines but failed on the AWS Lambda server. Due to time constraints, multimedia messages were removed from the project, as addressing this issue had become very time-consuming.
## Accomplishments that we're proud of
Being able to integrate 3 platforms (Alexa, Weather Network API, and Twilio) to allow a user to send weather information via SMS with a simple voice command before leaving the house for the day.
## What we learned
* Amazon Developer Console
* Amazon AWS Lambda Functions
* REST API calls
* Twilio SMS services
* Python Libraries
## What's next for EchoWeather
* MMS Trend Graphs
+ Allow users to send trend graphs such as wind speed, min and max temperature, snow accumulation etc. for the next 7 days to a user's cell phone. This expands on the current SMS capability to allow MMS messages. Where valuable statistics can be presented in graphical format.
|
# visuaLAG
(*visual Language Agnostic Game*)
## Problem
LeetCode is an almost ubiquitous platform for honing algorithmic skills. However, grinding LeetCode problems is tedious and boring; the only stimulating feedback is the green checkmark that appears when test cases pass.
## Idea
What if we could visualize the execution of a LeetCode problem? This
would provide a more engaging and stimulating experience for the
user. It would also provide a more intuitive way to understand the
inner workings of an algorithm.
## Solution
visuaLAG (visual Language Agnostic Game) is a command-line application that helps visualize and gamify the execution of LeetCode problems.
We structure stages pedagogically, so users can gradually learn
algorithmic techniques while taking on more challenging problems.
Users are provided template files in a language of their choice, where they must read the problem statement and implement the solution. Running the game then executes the user's code, passing it input and collecting its output, then visualizes the execution of their algorithm. If the user has written incorrect code, they can easily see where they went wrong and what mistakes their algorithm has.
## Tech Stack
We used Rust and the Bevy game engine to implement the visualizations
of each stage, taking in the user code's output and displaying it in
an intuitive way. The Rust code is contained in the `game/` directory.
We used Racket to implement the user-facing command-line interface,
which allows users to generate templates for a chosen stage and
language, and run their code that is then visualized. The Racket code
is contained in the `gameio/` directory.
## Usage
### Pre-requisites
* [Rust](https://www.rust-lang.org/tools/install) and
[Cargo](https://doc.rust-lang.org/cargo/getting-started/installation.html)
* [Racket](https://racket-lang.org/download/) and
[raco](https://docs.racket-lang.org/raco/)
### Installation
In the `game/` directory:
```
$ cargo build --release
$ mv target/release/game ../gameio/
```
In the `gameio/` directory:
```
$ raco pkg install --auto
$ raco exe main.rkt
$ mv main ./gameio
```
### Running
To generate a template for a stage:
```
$ ./gameio --language <language> --stage <stage>
```
To run your code against a stage:
```
$ ./gameio --language <language> --stage <stage> --filename <file>
```
|
## Inspiration
With the effects of climate change becoming more and more apparent, we wanted to make a tool that allows users to stay informed on current climate events and stay safe by being warned of nearby climate warnings.
## What it does
Our web app has two functions. One of the functions is to show a map of the entire world that displays markers on locations of current climate events like hurricanes, wildfires, etc. The other function allows users to submit their phone numbers to us, which subscribes the user to regular SMS updates through Twilio if there are any dangerous climate events in their vicinity. This SMS update is sent regardless of whether the user has the app open or not, allowing users to be sure that they will get the latest updates in case of any severe or dangerous weather patterns.
## How we built it
We used Angular to build our frontend. With that, we used the Google Maps API to show the world map along with markers, with information we got from our server. The server gets this climate data from the NASA EONET API. The server also uses Twilio along with Google Firebase to allow users to sign up and receive text message updates about severe climate events in their vicinity (within 50km).
## Challenges we ran into
For the front end, one of the biggest challenges was the markers on the map. Not only did we need to place markers on many different climate event locations, but we wanted the markers to have different icons based on the weather events. We also wanted to be able to filter the marker types for a better user experience. For the back end, we had to figure out Twilio for texting users, Google Firebase for user sign-in, and MongoDB for database operations. Using these tools was a challenge at first because this was our first time using them. We also ran into problems trying to accurately calculate a user's vicinity to current events due to the complex nature of geographical math, but after a lot of number crunching and the use of a helpful library, we were able to accurately determine whether any given event is within 50 km of a user's position based solely on the coordinates.
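The heavy lifting in that vicinity check is the great-circle distance between two coordinate pairs; here is the haversine formula, sketched in Python for illustration with arbitrary example coordinates:

```python
# Haversine great-circle distance between two (lat, lon) points, in km.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def event_nearby(user, event, radius_km=50.0):
    """True if the event's coordinates fall within radius_km of the user."""
    return haversine_km(*user, *event) <= radius_km

print(event_nearby((45.42, -75.70), (45.50, -75.60)))  # ~12 km apart -> True
```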
## Accomplishments that we're proud of
We are really proud to make an app that not only informs users but can also help them in dangerous situations. We are also proud of ourselves for finding solutions to the tough technical challenges we ran into.
## What we learned
We learned how to use all the different tools that we used for the first time while making this project. We also refined our front-end and back-end experience and knowledge.
## What's next for Natural Event Tracker
We want to perhaps make the map run faster and have more features for the user, like more information, etc. We also are interested in finding more ways to help our users stay safer during future climate events that they may experience.
|
losing
|
# Journally - A journal entry a day. All through text.
## Welcome to Journally! Where we restore our memories one journal, one day at a time.
## Inspiration and What it Does
With everyone returning to their busy lives of work, commuting, school, and other commitments, people need an opportunity to restore their peace of mind. Journalling has been shown to improve mental health and can help restore memories, so that you don't get too caught up in the minutiae of life and can instead appreciate the big picture. *Journally* encourages you to quickly and easily record a daily journal entry - it's all done through text!
*Journally* sends you a daily text message reminder and then you simply reply back with whatever you want to record about your day. Your journal entries are available to view through the Journally website later, for whenever you want to take a walk down memory lane.
## Challenges and Major Accomplishments
This was the first full-stack project that either of us had completed, so there was definitely a lot of learning involved. In particular, integrating the many different servers was difficult: Python Flask for sending and receiving text messages via the Twilio messaging API, a MySQL database, and the Node.js web server. With so many complex parts, we were very proud of our ability to get it all running in under 24 hours! Moreover, we realized that this project was quite a bit for two people to complete. We weren't able to get everything to work perfectly, but at least we have a working product!
## What we learned
It was our first time working with API routings in Node.js and interacting with databases, so we learned a lot from that! We also learned how to work with Twilio's API using Flask. We had lots of fun sending ourselves a ton of test SMS messages.
## How we built it
* **Twilio** to send our registered users *daily* messages to Journal!
* Secure `MySQL` database to store user registration info and their Journally entries
* `Flask` to *send* SMS from a user database of phone numbers
* `Flask` to *receive* SMS and store the user's Journallys in the database (sketched below)
* `Node.JS` for server routings, user registration on site, and storing user data into the database
* `Express.js` backend to host Journally
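For illustration, a minimal sketch of the inbound-SMS webhook; the route and reply text are stand-ins, and the database write is elided:

```python
# Receive a journal entry by SMS and reply with a confirmation.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def receive_entry():
    phone = request.form.get("From", "")
    entry = request.form.get("Body", "")
    # ... store (phone, entry, today's date) in the MySQL journal table ...
    reply = MessagingResponse()
    reply.message("Your Journally entry has been saved!")
    return str(reply)
```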
## Next Steps:
* allow simple markups like bolds in texts
* allow user to rate their day on a scale
* sort by scale feature
* Feel free to contribute!! Let's Journally together
# Check out our GitHub repo:
[GitHub](https://github.com/natalievolk/UofTHacks)
|
## Inspiration
Considering our team is so diverse (Pakistan, Sweden, Brazil, and Nigeria), it was natural for us to consider worldwide problems when creating our project. This problem in particular has such a large societal impact that we were very motivated to move towards a solution.
## What it does
Our service takes requests from users by SMS, which we then convert to an executable query. When the query result is received we send it back using SMS. Our application makes the process user-friendly and allows for more features when accessing the internet, such as ordering an Uber or ordering food.
## How we built it
The app converts the user's selection into text messages, sending them to our Twilio number.
We used the Twilio API to automatically manage these texts. Using C# and Python scripts, we convert the text into a Google search and send the result back as a text message.
## Challenges we ran into
The main challenge we faced was making the different protocols interact, it was also challenging to produce and debug everything under the time constraint.
## Accomplishments that we're proud of
We are very proud of our presentation and our creative solution. As well as having such an effective collaboration that enabled us to complete as much as we did.
We are very proud of how we successfully created a novel solution that is simple enough to be applicable on a large scale, having a large impact on the world.
## What we learned
We learned how to automate the management of text messages and how to make the different protocols communicate correctly.
## What's next for Access
What's next for Access is to expand our service, fulfilling the large potential that our solution has. We want to make more parts of the internet accessible through our service, make the process more efficient and most importantly extend our reach to those who need it the most.
|
## Inspiration
We were inspired to make this because we wanted to create an easy system for people who are often in a rush (like us) to read a text message, see what the overall weather is like, and get an idea of what they should wear.
## What it does
Allows a user to sign up on the website with their name, phone number, and the city they are from. Sends a text message with the day's weather conditions and suggestions of what to wear.
## How we built it
We created a Flask web application to collect users' data, which we then used to send out text messages on the weather conditions with the Twilio SMS API; the conditions were parsed from a JSON object using the National Weather Service API. We then used the weather conditions to suggest what users should wear.
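The suggestion step itself can be as simple as a few temperature thresholds; here is a toy version (the thresholds are illustrative, not the exact ones we used):

```python
# Map a forecast temperature (and rain flag) to an outfit suggestion.
def suggest_outfit(temp_f: float, raining: bool = False) -> str:
    if temp_f >= 75:
        outfit = "shorts and a t-shirt"
    elif temp_f >= 60:
        outfit = "a light jacket"
    elif temp_f >= 40:
        outfit = "a warm coat"
    else:
        outfit = "a heavy coat, hat, and gloves"
    return outfit + (", and bring an umbrella!" if raining else "")

print(suggest_outfit(55, raining=True))  # a warm coat, and bring an umbrella!
```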
## Challenges we ran into
This was the first time either of us had created a web application using Flask and GitHub. We learned how to use Git Bash, Flask, and various APIs, all within the past 2 days we were here! (Shoutout to Paul Jewell for all the help.) One of the most challenging parts of this project was facing the intimidating amount of stuff we did not know how to do.
## Accomplishments that we're proud of
**Tyler:** I am most proud of using Pandas to clean up messy API JSON data such that it was more conducive to data manipulation.
**Elaine:** I was excited when I made my first push to Github using the command line for Windows (GitBash)! In the past, I had uploaded my files to Github by dragging them in so I was excited to learn how to properly use Github.
## What we learned
We learned that learning new frameworks and libraries requires a lot more time than we thought- each new thing we used required time to understand the documentation and what we had to do.
## What's next for Thunderwear
If we had more time, we'd want to learn how to implement a database so users could sign up. Our original idea was for a daily morning text with the weather and our suggestions for what to wear. We'd also want to learn how to host the website with a domain. Given more time, we'd also allow users to set their own temperature points for "hot", "warm", "chilly", and "cold", and do more research into possible datasets for temperature points and what people normally wear.
|
partial
|
## Inspiration
The main focus of our project is creating opportunities for people to interact virtually and pursue their interests while remaining active. We hoped to accomplish this through a medium that people are already interested in, by providing them with a tool to take that interest to the next level. From these intentions came our project: TikTok Dance Trainer.
In our previous hackathon, we gained experience with computer vision using OpenCV2 in python, and we wanted to look further in this field. Gaining inspiration from other projects that we saw, we wanted to create a project that could not only recognize hand movements but full body motion as well.
## What it does
TikTok Dance Trainer is a new Web App that enables its users to learn and replicate popular dances from TikTok. While using the app, users will receive a score in real time that gives them feedback on how well they are dancing compared to the original video. This web app is an encouraging way for beginners to hone dance skills and improve their TikTok content as well as a fun way for advanced users to compete against one another in perfecting dances.
## How we built it
To create this project, we split into teams. One team experimented with comparison metrics to compare body poses while the other built up the UI with HTML, CSS and Javascript.
The pose estimation is implemented with an open-source pre-trained neural network in TensorFlow called PoseNet. This model can pinpoint key points on the human body such as the wrists, elbows, hips, knees, and joints on the head. The two dancers each have a set of 17 joints, which are then compared to each other, frame by frame. In order to compare these arrays of coordinates, we researched various distance metrics such as the Euclidean metric, cosine similarity, the weighted Manhattan distance, and Procrustes analysis (affine transformation). Through data collection and trial and error, cosine distance gave the best results in the end. The resulting distances were then fed into a function that maps the values to viable player scores.
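Conceptually, the per-frame comparison flattens each dancer's 17 keypoints into one vector and takes the cosine similarity; a small Python sketch of the idea (our actual implementation runs in JavaScript in the browser):

```python
# Score how closely two posenet skeletons match in a single frame.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def pose_score(joints_a, joints_b):
    """joints_*: 17 (x, y) keypoints for one dancer; returns a 0-100 score."""
    flat_a = [v for joint in joints_a for v in joint]
    flat_b = [v for joint in joints_b for v in joint]
    return round(100 * max(0.0, cosine_similarity(flat_a, flat_b)))
```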
The UI is built in HTML with CSS styling and JavaScript to run its functions. It has a hand-drawn background and an easy-to-use design packed with function. The menu bar has a file selector for choosing and uploading a dance video to compare against. The three main cards of the UI have the reference video and live cam side by side, with pose-estimated skeletons of each in the middle to aid in matching the reference dance. The whole UI is built with ease of use, simplicity, visual appeal, and functionality in mind.
## Challenges we ran into
As a result of splitting into two teams for different parts of the project, one challenge we faced was merging the two parts. It was difficult both to combine the code and to connect its different pieces, returning outputs from one part as acceptable inputs for another. Through perseverance and a lot of communication, we managed to effectively merge the two parts.
## Accomplishments that we're proud of
We managed to create a clean-looking app that performs the algorithm well despite the time pressure and complexity of the project. In addition, we were able to allocate time to making a presentation with a skit to tie everything together.
## What we learned
Coming into this hackathon, only one of our members was experienced in web development, but coming out, all four of us felt that we gained valuable experience and insight into the ins and outs of webpages. We learned how to effectively use Node.js to create a backend and connect it with our frontend. Along with this, we gained experience using npm and many of JavaScript's potpourri of packages, such as Browserify.
## What's next for TikTok Dance Trainer
We also looked into using Dynamic Time Warping to help with the comparison. This would help primarily when the videos were different lengths or if the dancers were slightly mismatched. However, we realized that this would not be needed if the user is dancing against the TikTok video in their own live feed. In the future, we would like to add a functionality that allows two pre-recorded videos to be compared that would then use Dynamic Time Warping.
All open source repositories/packages that were used:
[link](https://github.com/tensorflow/tfjs-models/tree/master/posenet)
[link](https://github.com/compute-io/cosine-similarity)
[link](https://github.com/GordonLesti/dynamic-time-warping)
[link](https://github.com/browserify/browserify)
[link](https://github.com/ml5js/ml5-library)
|
## Inspiration
We wanted to take an ancient video game that jump-started the video game industry (Asteroids) and be able to revive it using Virtual Reality.
## What it does
You are spawned in a world with randomly generated asteroids and must approach one of four green asteroids to beat the stage. Lasers are used to destroy asteroids after a certain number of hits. Forty asteroids are always present during gameplay and are attracted to you via gravity.
## How we built it
We utilized Unity (C#) alongside the HTC Vive libraries. The Asset Store was used to obtain celestial images for our skybox environment.
## Challenges we ran into
Our graphical user interface could not be projected to the HTC Vive. Each asteroid exerts an attractive force on the player; it was difficult to optimize how all of these forces were computed and to prevent them from interfering with each other. Generalizing projectile functionality across the game and menu scenes was also difficult.
## Accomplishments that we're proud of
For most of the group, this was our first time using Unity, and it was the first time using an HTC Vive for all members. We are proud of learning the Unity workflow and development environment, and of solving the problem of randomly generated asteroids interfering with other game objects.
## What we learned
We learned how to create interactive 3D video games in a virtual reality environment, and how to map inputs from the VR controllers to the program.
## What's next for Estar Guars
Adding a scoring system and an upgrades menu would be ideal for the game, along with improving controls and polishing object-collision animations, figuring out a GUI that displays on the HTC Vive, and building an interactive story mode with more dynamic objects and environments.
|
## Inspiration
## What it does
Adding movement and dance to programming to appeal to a wider range of children and students
## How we built it
UX & Data flow
1. Users interact with gestures and voice
2. Uses machine learning to learn new gestures & recognise previously trained gestures
3. Translate to pseudocode
* **Design**: Figma
* **Frontend**: React & Redux, Material UI
* **Backend**: Python Flask REST API
* **Machine Learning**: Tensorflow & Keras
* **Deployment**: Google Cloud
* **APIs**: Codemirror, Google NLP
## Challenges we ran into
Deployment to the cloud, a lot of data processing and cleaning, and building an efficient ML pipeline.
## Accomplishments that we're proud of
## What we learned
## What's next for Dance Dance Convolution
### Features
* peer-programming (web sockets)
* input by voice commands (NLP)
* gaze inference => no more need for mouse!
* Compile to various languages
### Userbase for pilot project
* schools at all levels, from kindergarten to university
* khan academy
* girlswhocode
* Africa Code Week
### Tech assistance & funding
* 1517 / Thiel Fellowship
* ycombinator
* tensorflow partners
* Western University
|
partial
|
## Inspiration
Recently, there’s been rising concern over how much power social media companies have over freedom of speech. The concept of decentralized social media has always existed but usually required users to migrate to a different platform, which many (including us) would prefer not to do. So we decided to build something that would allow for decentralization of speech on social media while still allowing us to use the platforms we know and love.
## What it does
Permanent Record is a chrome extension that can save a social media post and immortalize it in the blockchain where it is tamper-proof.
## How we built it
We developed an EVM-based smart contract that allows for customized minting of posts and deployed it on Caldera's network. We then built a Chrome extension that communicates with the contract to mint an NFT whenever a user wants to archive a post on the blockchain.
## What's next for Permanent Record
We plan to build a website that highlights the entire collection of immortalized posts, as well as a dedicated section displaying deleted posts that have received the most attention.
|
## Inspiration
We were inspired to create this after noticing the lack of resources for people to access information about their medications given just a prescription, or about reports such as blood tests.
## What it does
Takes an image of prescriptions, blood tests, X-rays, or any medical records -> performs OCR or image recognition according to the record provided -> converts the text to fetch more info from the web -> stores the data -> predicts health
## How we built it
Using Python together with Google's and IBM's machine learning APIs.
## Challenges we ran into
Integrating the machine learning into our platform.
## Accomplishments that we're proud of
The web app
## What we learned
How to create a learning web platform
## What's next for MedPred
To make it a platform for long term health predictions
|
## Inspiration
Blockchain has created new opportunities for financial empowerment and decentralized finance (DeFi), but it also introduces several new considerations. Despite its potential for equitability, malicious actors can currently take advantage of it to launder money and fund criminal activities. There has been a recent wave of effort to introduce regulations for crypto, but the ease of money laundering proves to be a serious challenge for regulatory bodies like the Canadian Revenue Agency. Recognizing these dangers, we aimed to tackle this issue through BlockXism!
## What it does
BlockXism is an attempt at placing more transparency in the blockchain ecosystem, through a simple verification system. It consists of (1) a self-authenticating service, (2) a ledger of verified users, and (3) rules for how verified and unverified users interact. Users can "verify" themselves by giving proof of identity to our self-authenticating service, which stores their encrypted identity on-chain. A ledger of verified users keeps track of which addresses have been verified, without giving away personal information. Finally, users will lose verification status if they make transactions with an unverified address, preventing suspicious funds from ever entering the verified economy. Importantly, verified users will remain anonymous as long as they are in good standing. Otherwise, such as if they transact with an unverified user, a regulatory body (like the CRA) will gain permission to view their identity (as determined by a smart contract).
Through this system, we create a verified market where suspicious funds cannot enter the verified economy, while suspicious activity is flagged. With the addition of a legislation piece (e.g. requiring banks and stores to be verified and only transact with verified users), BlockXism creates a safer and more regulated crypto ecosystem, while maintaining benefits like blockchain’s decentralization, absence of a middleman, and anonymity.
## How we built it
BlockXism is built on a smart contract written in Solidity, which manages the ledger. For our self-authenticating service, we incorporated Circle wallets, which we plan to integrate into a self-sovereign identification system. We simulated the chain locally using Ganache and Metamask. On the application side, we used a combination of React, Tailwind, and ethers.js for the frontend and Express and MongoDB for our backend.
## Challenges we ran into
A challenge we faced was overcoming the constraints when connecting the different tools with one another, meaning we often ran into issues with our fetch requests. For instance, we realized you can only call MetaMask from the frontend, so we had to find an alternative for the backend. Additionally, there were multiple issues with versioning in our local test chain, leading to inconsistent behaviour and some very strange bugs.
## Accomplishments that we're proud of
Since most of our team had limited exposure to blockchain prior to this hackathon, we are proud to have quickly learned about the technologies used in a crypto ecosystem. We are also proud to have built a fully working full-stack web3 MVP with many of the features we originally planned to incorporate.
## What we learned
Firstly, from researching cryptocurrency transactions and fraud prevention on the blockchain, we learned about the advantages and challenges at the intersection of blockchain and finance. We also learned how to simulate how users interact with one another on the blockchain, such as through peer-to-peer verification and making secure transactions using Circle wallets. Furthermore, we learned how to write smart contracts and implement them in a web application.
## What's next for BlockXism
We plan to use IPFS instead of using MongoDB to better maintain decentralization. For our self-sovereign identity service, we want to incorporate an API to recognize valid proof of ID, and potentially move the logic into another smart contract. Finally, we plan on having a chain scraper to automatically recognize unverified transactions and edit the ledger accordingly.
|
losing
|
## Inspiration
Although each of us came from different backgrounds, we each shared similar experiences and challenges during our high school years: it was extremely hard to visualize difficult concepts, much less understand the various complex interactions. This was most prominent in chemistry, where 3D molecular models were simply nonexistent, and 2D visualizations only served to increase confusion. Sometimes, teachers would use a combination of Styrofoam balls, toothpicks, and pens to attempt a demonstration, yet despite their efforts, there was very little effect. Thus, we decided to make an application that facilitates student comprehension by allowing them to take a picture of troubling text/images and get an interactive 3D augmented reality model.
## What it does
The app is split between two interfaces: one for text visualization, and another for diagram visualization. The app is currently functional solely with Chemistry, but can easily be expanded to other subjects as well.
If text visualization is chosen, an in-built camera pops up and allows the user to take a picture of the body of text. We used Google's ML Kit to parse the text in the image into a string, and ran an NLP algorithm (Rapid Automatic Keyword Extraction) to generate a comprehensive flashcard list. Users can click on each flashcard to see an interactive 3D model of the element, zooming and rotating it so it can be seen from every angle. If more information is desired, a Wikipedia tab can be pulled up by swiping upwards.
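For a sense of how the keyword extraction works, here is a toy RAKE-style pass (heavily abridged: the real algorithm uses a full stopword list and the complete RAKE word-scoring scheme):

```python
# Split text into candidate phrases at stopwords, then score words by degree.
import re
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "and", "is", "in", "to", "with", "that"}

def extract_keywords(text: str, top_n: int = 5):
    words = re.findall(r"[a-z]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    score = defaultdict(float)
    for phrase in phrases:
        for w in phrase:
            score[w] += len(phrase)  # words in longer phrases score higher
    ranked = sorted(phrases, key=lambda p: -sum(score[w] for w in p))
    return [" ".join(p) for p in ranked[:top_n]]

print(extract_keywords(
    "The covalent bond is a chemical bond that involves the sharing of electrons"))
# ['covalent bond', 'chemical bond', 'involves', 'sharing', 'electrons']
```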
If diagram visualization is chosen, the camera remains perpetually on for the user to focus on a specific diagram. An augmented reality model will float above the corresponding diagrams, which can be clicked on for further enlargement and interaction.
## How we built it
Android Studio, Unity, Blender, Google ML-Kit
## Challenges we ran into
Developing and integrating 3D Models into the corresponding environments.
Merging the Unity and Android Studio mobile applications into a single cohesive interface.
## What's next for Stud\_Vision
The next step of our mobile application is increasing the database of 3D Models to include a wider variety of keywords. We also aim to be able to integrate with other core scholastic subjects, such as History and Math.
|
## Inspiration
For our first ever attempt at mobile application development, we wanted to create something simple yet fun at the same time.
## What it does
It doesn't do anything, really. Just tap on Tappy to accumulate points and get that false sense of achievement.
## How we built it
We used Android Studio and Java.
## Challenges we ran into
None of us were familiar with mobile development so we had to play around a lot with the functionalities.
## Accomplishments that we're proud of
We made our first mobile application.
## What we learned
We learned the basics of mobile application development for Android using Java.
## What's next for Tappy Bird
Nothing; it's completely useless.
|
## Inspiration
As students of chemical engineering, we are deeply interested in educating students about the wonderful field of chemistry. One of the most difficult things in chemistry is visualizing molecules in 3 dimensions. So for this project, we wanted to make visualizing molecules easy and interactive for students in addition to making it easy for teachers to implement in the classroom.
## What it does
Aimed towards education, ChemistryGO brings 3D molecular models to students in a convenient way. For example, teachers can integrate pre-prepared pictures of Lewis structures into their PowerPoint slides, and students can use the ChemistryGO Android application with Google Cardboard (optional) to point their phones at the screen, causing a 3D ball-and-stick model representation to pop up. The molecule can be reoriented with just a swipe of a finger.
## How we built it
The building of the application was split into three parts:
1. Front End Unity Development
We used C# to add functions regarding the orientation of the molecule. We also integrated the Vuforia API to pair 3D models with target images.
2. Database creation
Vuforia helped us build a database of target images that were going to be used to pair with 3D models. Scripting was used with Chemspider to gather the list of target images of molecules.
3. Database extraction
For 3D models, PDB files (a strictly formatted file type for 3D structures, usually used for proteins) of common chemistry molecules were collected, opened with UCSF Chimera, a molecular visualization tool, and converted to .dae files, which were then used in Unity to produce the models.
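For a sense of what the PDB extraction step involves, here is a minimal Python sketch that pulls element symbols and 3D coordinates out of a PDB file's fixed-width `ATOM`/`HETATM` records (our actual pipeline relied on UCSF Chimera for the conversion; this only illustrates the file format):

```python
def parse_pdb_atoms(path):
    """Return (element, x, y, z) tuples from a PDB file."""
    atoms = []
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                # PDB is fixed-width: coordinates sit in columns 31-54,
                # the element symbol in columns 77-78 (1-indexed)
                x = float(line[30:38])
                y = float(line[38:46])
                z = float(line[46:54])
                element = line[76:78].strip()
                atoms.append((element, x, y, z))
    return atoms

# Each atom becomes a sphere and each bond a cylinder in the ball-and-stick model
print(parse_pdb_atoms("water.pdb"))
```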
## Challenges we ran into
* Few simple molecules were already available in PDB format, which made it difficult to build a large database.
* Keeping the 3D molecule in place while the target image stayed on screen.
* Rotating the 3D molecule.
* Finding an automated method to build the database, given the number of data points and software tools needed.
## Accomplishments that we're proud of
* Use of PDB formatted files.
* Implemented Vuforia API for image recognition
* Creating an educational application
## What we learned
* Unity
## What's next for ChemistryGO
* Creating the database by scripting the generation of PDB files
* Automation of database creation
* A user-friendly interface to upload pictures to corresponding 3D models
* A more robust Android application
* Machine learning molecules
|
winning
|
## Inspiration
**Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness.
## Problem
Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data on which past candidates received interviews. However, what if, in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in.
Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting.
## What is fairness?
There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) between the unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicate that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group.
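As a hedged sketch of how this metric can be computed (assuming binary labels and a binary `group` indicator as NumPy arrays; this mirrors the definition above rather than any particular library's API):

```python
import numpy as np

def average_odds_difference(y_true, y_pred, group):
    """group == 1 marks the privileged group, group == 0 the unprivileged one."""
    def rates(mask):
        yt, yp = y_true[mask], y_pred[mask]
        tpr = ((yp == 1) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        return tpr, fpr

    tpr_u, fpr_u = rates(group == 0)
    tpr_p, fpr_p = rates(group == 1)
    # average of the differences in false positive and true positive rates
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))
```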
## What our app does
**jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness.
### Reweighing Algorithm
If the data were unbiased, we would expect being accepted and being a woman to be independent, so their joint probability would be the product of the two marginal probabilities. By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight for the woman-and-accepted category to the ratio of expected to actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1); otherwise, they are given a lower weight. The same formula is applied to the other 3 of the 4 combinations of gender x acceptance. The reweighed sample is then used for training.
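A minimal NumPy sketch of that weighting scheme (following Kamiran and Calders' reweighing; the variable names are ours, and in practice we used AIF360's `Reweighing` preprocessor):

```python
import numpy as np

def reweighing_weights(is_woman, accepted):
    """is_woman, accepted: 0/1 NumPy arrays, one entry per training example."""
    weights = np.ones(len(is_woman), dtype=float)
    for g in (0, 1):
        for a in (0, 1):
            mask = (is_woman == g) & (accepted == a)
            expected = (is_woman == g).mean() * (accepted == a).mean()
            actual = mask.mean()
            if actual > 0:
                # under-represented combinations get weights above 1
                weights[mask] = expected / actual
    return weights

# e.g. pass the result to sklearn: model.fit(X, y, sample_weight=weights)
```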
## How we built it
We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem at prediction time, the candidate is classified, and the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier.
## Challenges we ran into
Training and choosing models with appropriate fairness constraints was challenging. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric.
## Accomplishments that we're proud of
We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her.
## What we learned
Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts.
## What's next for jobFAIR
Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
|
## Inspiration
There has never been a more relevant time in political history for technology to shape our discourse. Clara AI can help you understand what you're reading, giving you political classification and sentiment analysis so you understand the bias in your news.
## What it does
Clara searches for news on an inputted subject and classifies its political leaning and sentiment. She can accept voice commands through our web application, searching for political news on a given topic, and, if further prompted, can give political and sentiment analysis. Clara predicts political leaning with 88% accuracy on our test set; she was trained using a random forest and many hours of manual classification. Clara gives sentiment scores with the help of the IBM Watson and Google Sentiment Analysis APIs.
## How we built it
We built a fundamental technology using a plethora of Google Cloud Services on the backend, trained a classifier to identify political leanings, and then created multiple channels for users to interact with the insight generated by our algorithms.
For our backend, we used Flask + Google Firebase. Within Flask, we used the Google Search Engine API, Google Web Search API, Google Vision API, and Sklearn to conduct analysis on the news source inputted by the user.
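For illustration, here is roughly what a sentiment call looks like with the Google Cloud Natural Language client (a sketch, assuming credentials are configured in the environment; our backend combines scores like these with IBM Watson's):

```python
from google.cloud import language_v1

def article_sentiment(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # score in [-1, 1] (negative to positive); magnitude reflects strength
    return sentiment.score, sentiment.magnitude
```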
For our web app we used React + Google Cloud Speech Recognition API (the app responds to voice commands). We also deployed a Facebook Messenger bot, as many of our users find their news on Facebook.
## Challenges we ran into
Lack of WiFi was the biggest challenge; others included putting together all of our APIs, training our ML algorithm, and deciding on a platform for interaction.
## Accomplishments that we're proud of
We've created something really meaningful that can actually classify news. We're proud of the work we put in and our persistence through many caffeinated hours. We can't wait to show our project to others who are interested in learning more about their news!
## What we learned
How to integrate Google APIs into our Flask backend, and how to work with speech capability.
## What's next for Clara AI
We want to improve upon the application by properly distributing it to the right channels. One of our team members is part of a group of students at UC Berkeley that builds these types of apps for fun, including BotCheck.Me and Newsbot. We plan to continue this work with them.
|
## Inspiration
Lots of qualified candidates miss out on great opportunities not because they lack skills, but because they miss common interview techniques (e.g., facial expression, confidence). We believe AI can help solve this.
## What it does
It consists of two parts: Interview Preparation and Realtime Prompting. Interview Preparation allows step-by-step practice with practical feedback to help candidates learn by repetition. Realtime Prompting provides quick, unobtrusive prompts to help reinforce proper behaviors.
## How we built it
We built the backend using Python, leveraging the Groq API, HUME, and AWS for the AI services. For the LLM we use the latest LLAMA3. The front end employs React.
## Challenges we ran into
We ran into dependency issues with HUME and general challenges using the APIs.
## Accomplishments that we're proud of
We were able to build a working demo that provides real-time speech-to-text and sentiment-analysis feedback.
## What we learned
Teamwork and collaboration are the keys to a successful project, and everyone's different skill sets really helped put everything together.
## What's next for Interview-IQ
Two things: getting adoption and scaling the infrastructure. We want to leverage our college network to gather feedback and improvements while building the resources to scale to production.
|
winning
|
## 💡 Inspiration
It's job search season (again)! One of the most nerve-wracking aspects of applying for a job is the behavioural interview, yet there lacks a method to help interviewees prepare effectively. Most existing services are not able to emulate real-life interview scenarios sufficiently, given that the extent of the practice questions asked are limited without knowledge of the interviewee's experience. In addition, interviewers rarely provide constructive feedback, leaving interviewees feeling confused and without a plan for improvement.
## 💻 What it does
HonkHonkHire is an AI-powered interview practice application that analyzes your unique experiences (via LinkedIn/resume/portfolio) and generates interview questions and feedback tailored for you. By evaluating facial expressions and transcribing your responses, HonkHonkHire provides useful metrics to give actionable feedback and help users gain confidence for their job interviews.
## ⚒️ How we built it
HonkHonkHire was built on a Node.js back-end, with a MongoDB Atlas database to store user data. We also used Firebase to store resume .pdf files. The user interface was designed in Figma and developed using JavaScript, HTML, and CSS. The facial recognition was implemented using Google MediaPipe Studio, and Cohere's API was used to generate the personalized questions and feedback.
## 🪖 Challenges we ran into
This project itself was very ambitious for our team, given that it involved learning and applying a lot of unfamiliar technologies in a short period of time. As we had never worked together as a team before, it took some time to familiarize ourselves with each other's strengths, weaknesses, and communication styles.
## 🌟 Accomplishments that we're proud of
Our team is proud that we were able to deliver the majority of the MVP features, and to have created something we would all love to use for our future job searches. Learning new technologies such as Cohere's API and OCR Space, and being able to employ them in our application, was also very rewarding.
## 🧠 What we learned
Each one of our team members challenged themselves while creating this application. Our designer ventured into the world of programming and learned some HTML/CSS while coding a webpage for the very first time (yay)! Our front-end developer challenged herself by focusing on fundamentals such as Vanilla JS, instead of using more familiar frameworks. Another one of our developers learned about new APIs by reading documentations, and transferring data from front-end to back-end (and vice versa) using Node.JS. Our primary back-end developer challenged himself to explore facial/emotional expression and behavioural motion detection for the first time.
## 👀 What's next for HonkHonkHire
We would love to further enhance user experience by offering more detailed metrics such as talking pace, response length, and tone of voice. Through the use of these metrics, we hope to give users the ability to track their improvements over their professional journey and encourage them to continue to improve upon their behavioural interview skills.
|
## Inspiration
We were inspired by JetBlue's challenge to utilize their data in a new way, and we realized that, while there are plenty of websites and phone applications that allow you to find the best flight deal, none provide a way to easily plan the trip and the items you will need with your friends and family.
## What it does
GrouPlane allows users to create "Rooms" tied to their user account, with each room representing a unique event, such as a flight from Toronto to Boston for a week. Within the room, users can select flight times, see the best flight deal, and plan out what they'll need to bring with them. Users can also share the room's unique ID with their friends, who can then use this ID to join the created room, seeing the flight plan and modifying the needed items.
## How we built it
GrouPlane was built utilizing Android Studio with Firebase, the Google Cloud Platform Authentication API, and JetBlue flight information. Within Android Studio, Java code and XML were utilized.
## Challenges we ran into
The challenges we ran into were learning how to use Android Studio/GCP/Firebase and overcoming the slow Internet speed at the event. In terms of Android Studio/GCP/Firebase, we were all either entirely new or very new to the environment, so we had to learn how to access and utilize all the available features. The slow Internet speed was a challenge not only because it made learning those tools difficult, but also because, due to the online nature of the database, there were long periods of time when we could not test our code, having no way to connect to the database.
## Accomplishments that we're proud of
We are proud of finishing the application despite the challenges. Not only were we able to overcome them, but we built an application that functions to the full extent we intended while having an easy-to-use interface.
## What we learned
We learned a lot about how to program Android applications and how to utilize the Google Cloud Platform, specifically Firebase and Google Authentication.
## What's next for GrouPlane
GrouPlane has many possible avenues for expansion; in particular, we would like to integrate GrouPlane with Airbnb, hotel chains, and Amazon Alexa. For Airbnb and hotel chains, we would use their APIs to pull information about hotel deals at the chosen flight destinations so users can plan out their entire trip within GrouPlane. With this integration, we would also expand GrouPlane to inform everyone within the "event room" about how much the event will cost each person. We would also integrate Amazon Alexa with GrouPlane so users can plan out their vacation entirely through the speech interface provided by Alexa, rather than having to type on their phone.
|
## Inspiration
We noticed that individuals with learning and reading disabilities struggled to access information online due to the way it was presented. To combat this, we developed a Chrome extension that flattens the barriers to accessing information. This information could be anything from news to scientific articles, and the extension can assist in bringing complex pieces of knowledge to younger and older people alike.
## What it does
This extension uses NLP to find out the user's needs and wants in terms of accessibility. It gathers this data and then uses custom CSS injections to change webpages into a more accessible form. It will also break apart harder terms in sentences and define them in the page for the user. We also provide users with the ability to summarize and define terms through a very simple UI.
## How we built it
We built our NLP implementation using Hugging Face. We also used LLM models from OpenAI to determine which words are difficult for our user; this API was also used for summarizing and defining terms. On the frontend we used React, TypeScript, Node.js, HTML, and CSS, with the extension itself written in TypeScript and bundled using webpack.
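To give a flavor of the Hugging Face side, here is a minimal text-classification sketch with the `transformers` pipeline API (the model name is a stand-in; our actual categorization model and labels differ):

```python
from transformers import pipeline

# Any fine-tuned text-classification checkpoint works here;
# this sentiment model is just a placeholder for our categorizer
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("The mitochondrion is the powerhouse of the cell.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```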
## Challenges we ran into
One of the major challenges we ran into was that, because we were developing a Chrome extension, there was a very limited amount of documentation and tutorials on developing and building one compared to an app or website.
## Accomplishments that we're proud of
* We are proud of implementing a NLP model to categorize text
* Taking years off our life with caffeine
## What we learned
* How to develop a Chrome extension
* How to use a TypeScript framework bundled with webpack
## What's next for EmpathiRead
* We will implement an audio-to-text and text-to-speech interface so users can converse with our extension. This will allow the application to be interfaced with seamlessly and effortlessly.
* We will also implement custom CSS injections for popular websites and partner with companies to make their websites more accessible.
|
partial
|
## Inspiration
A couple weeks ago, a friend was hospitalized for taking Advil; she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. When asked why, she responded that that's just what she had always done and how her parents had told her to take Advil. The maximum amount of Advil you are supposed to take is 6 pills per day, before it becomes a hazard to your stomach.
#### PillAR is your personal augmented reality pill/medicine tracker.
It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (e.g. no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take, keeping you safe from over- or under-dosing.
We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30 day period and 11.9% take 5 or more. That is over 75 million U.S. citizens that could use PillAR to keep track of their numerous medicines.
## How we built it
We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iPhone camera and pass it to the Google Vision API. From there we receive the name of the drug, which our app then forwards to a Python web-scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible API or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app.
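A hedged sketch of the scraping step in Python with `requests` and BeautifulSoup (the URL pattern and element id are hypothetical stand-ins; we do not name the actual reference site here):

```python
import requests
from bs4 import BeautifulSoup

def fetch_dosage_info(drug_name):
    # Hypothetical URL pattern for a drug-reference site
    url = f"https://drug-reference.example.com/{drug_name.lower()}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Hypothetical element id for the dosage/administration section
    section = soup.find(id="dosage-and-administration")
    return section.get_text(" ", strip=True) if section else None

print(fetch_dosage_info("Ibuprofen"))
```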
## Accomplishments that we're proud of
This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from and then had to process that information to make it easier to understand.
## What's next for PillAR
In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, by maybe writing our own classifiers or training a data set. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier.
|
## Inspiration
**75% of adults over the age of 50** take prescription medication on a regular basis. Of these people, **over half** do not take their medication as prescribed - either taking them too early (causing toxic effects) or taking them too late (non-therapeutic). This type of medication non-adherence causes adverse drug reactions which is costing the Canadian government over **$8 billion** in hospitalization fees every year. Further, the current process of prescription between physicians and patients is extremely time-consuming and lacks transparency and accountability. There's a huge opportunity for a product to help facilitate the **medication adherence and refill process** between these two parties to not only reduce the effects of non-adherence but also to help save tremendous amounts of tax-paying dollars.
## What it does
**EZPill** is a platform that consists of a **web application** (for physicians) and a **mobile app** (for patients). Doctors first create a prescription in the web app by filling in information including the medication name and indications such as dosage quantity, dosage timing, total quantity, etc. This prescription generates a unique prescription ID and is translated into a QR code that practitioners can print and attach to their physical prescriptions. The patient then has two choices: 1) create an account on **EZPill** and scan the QR code (which automatically loads all prescription data to their account and connects with the web app), or 2) choose not to use EZPill (the prescription will not be tied to the patient). This choice of data-assignment method not only provides a mechanism for easy onboarding to **EZPill**, but also ensures that the privacy of the patients' data is not compromised, by not tying the prescription data to any patient **UNTIL** the patient consents by scanning the QR code and agreeing to the terms and conditions.
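As a minimal sketch of the QR generation step in Python (using the `qrcode` package; the `ezpill://` URI scheme is a hypothetical example, and only the opaque prescription ID is encoded so no patient data lives in the code itself):

```python
import qrcode  # pip install qrcode[pil]

def prescription_qr(prescription_id: str) -> None:
    # Encode only the opaque prescription ID; the record stays
    # unlinked to any patient until they scan and consent
    payload = f"ezpill://prescription/{prescription_id}"  # hypothetical scheme
    img = qrcode.make(payload)
    img.save(f"{prescription_id}.png")  # printed and attached to the paper script

prescription_qr("rx-7f3a9c")
```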
Once the patient has signed up, the mobile app acts as a simple **tracking tool** while the medicines are consumed, but also serves as a quick **communication tool** to quickly reach physicians to either request a refill or to schedule the next check-up once all the medication has been consumed.
## How we built it
We split our team into 4 roles: API, Mobile, Web, and UI/UX Design.
* **API**: A Golang Web Server on an Alpine Linux Docker image. The Docker image is built from a laptop and pushed to DockerHub; our **Azure App Service** deployment can then pull it and update the deployment. This process was automated with use of Makefiles and the **Azure** (az) **CLI** (Command Line Interface). The db implementation is a wrapper around MongoDB (**Azure CosmosDB**).
* **Mobile Client**: A client targeted exclusively at patients, written in swift for iOS.
* **Web Client**: A client targeted exclusively at healthcare providers, written in HTML & JavaScript. The Web Client is also hosted on **Azure**.
* **UI/UX Design**: Userflow was first mapped with the entire team's input. The wireframes were then created using Adobe XD in parallel with development, and the icons were vectorized using Gravit Designer to build a custom assets inventory.
## Challenges we ran into
* Using AJAX to build dynamically rendering websites
## Accomplishments that we're proud of
* Built an efficient privacy-conscious QR sign-up flow
* Wrote a custom MongoDB driver in Go to use Azure's CosmosDB
* Recognized the needs of our two customers and tailored the delivery of the platform to their needs
## What we learned
* We learned the concept of "Collections" and "Documents" in the Mongo(NoSQL)DB
## What's next for EZPill
There are a few startups in Toronto (such as MedMe, Livi, etc.) that are trying to solve this same problem through a pure hardware solution using a physical pill dispenser. We hope to **collaborate** with them by providing the software solution in addition to their hardware solution to create a more **complete product**.
|
## Inspiration
Hi! I am a creator of Snapill. Recently, I had to undergo wisdom tooth surgery. The surgery wasn't that bad at all; however, I was prescribed around 5 different medications, from Tylenol to NSAIDs to steroids. It was hard to keep track of all these medications, and as a result, I ended up going through more pain during my recovery time.
However, for other people, the consequences of medication error are far more dangerous. Around half of drug-related deaths in the US are related to prescription drug use and error (22,000 people, NCBI). Furthermore, it is much harder for older people, or those going through complex diseases/surgeries, to keep track of their medication requirements. My personal experience, along with these pressing issues, motivated me to design **Snapill**, a computer-vision-based medication app, to improve patient safety and awareness of the power a single pill has.
## What it does
You may notice Snapill is very similar to the popular messaging app Snapchat! This was intentional, as our goal is to ... . Aside from the UI, Snapill is very powerful and differs from a common medication app.
Specifically, Snapill is the first app that leverages advanced Computer Vision techniques like homography with your common LLM. The combination allows for robustness and covers several user failure points (mainly: bad lighting). Snapill allows a patient to scan their medication vial, and automatically generates important metadata like prescription name, expiration date, dosage, and possible times to take the pill. Furthermore, we believe awareness is a strong factor for increased patient safety, so we incorporated the Cerebras AI Inference models to allow users to "chat" with their medications (we weren't joking when we alluded to SnapChat). Users are able to request more information about the drug through the chat, along with a personal vanguard that scans for incompatibility issues between medications. Finally, Snapill provides reminders when it is time for users to take their medications (we couldn't forget that part, could we?)
## How we built it
Knowing we needed data, Shishir and I worked with Scriptpro, a pharmaceutical company in Kansas. After pitching our ideas to a senior software engineer (a warm reception), we were able to acquire around 20 vials with different medication names. They were used both to train a Roboflow model to detect labels and to test a custom OpenCV unwarping pipeline.
Snapill is a combination of a React Native app running on the user's phone and a Flask backend hosted on Heroku. This allows us to use homemade OpenCV functions while calling APIs such as Roboflow, Cerebras, and Firebase. User data, auth, and storage are all handled through Firebase, allowing for simple public downloads from other APIs. We tested the app using both an Android simulator and a physical iPhone. See below for more information about our backend process!
## Challenges we ran into
Our first challenge was that UPenn's network did not allow direct connections; specifically, we weren't able to run a local backend on a computer and forward its port so a phone could send requests over the same WiFi. This led us to deploy the backend on Heroku, which unfortunately slowed down build times but taught us a lot about deploying to production!
Another challenge we ran into was integrating our custom OpenCV code with Roboflow's Workflow design. However, it made us glad we chose to use a Python backend to process Computer Vision! We solved this issue by sandwiching our custom unwarp code between two different Roboflow Workflows.
## Accomplishments that we're proud of
The main accomplishment we're proud of is our CV pipeline. Typically, OCR (Optical Character Recognition) is used for parsing data from images; however, we noticed it has a very high error rate when trying to detect text on curved surfaces. To solve this, Shishir and I researched and built a pipeline that uses concepts from linear algebra to essentially map the text on the curved label onto a flat rectangle. This wasn't easy because there were very few resources for using OCR in edge cases like this. But we curved around (pun not intended) and explored the depths of OpenCV to achieve our goal. OCR was much more accurate afterwards and was able to safely parse label data.
Even cooler is that the method could be scaled to pretty much any text on the curved part of a cylinder! All it needs are certain extreme points, and the rest is up to interpolation. After PennApps, I plan to test the pipeline more and see how to automatically detect the points using edge/contour recognition.
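A simplified sketch of the core idea in Python with OpenCV: given four corner points of the label region, a perspective transform flattens it into a rectangle. (Our real pipeline additionally interpolates along the cylinder's curve with a remapping mesh; this planar version only shows the flattening step.)

```python
import cv2
import numpy as np

def unwarp_label(image, corners):
    """corners: 4x2 float array ordered top-left, top-right,
    bottom-right, bottom-left around the label region."""
    tl, tr, br, bl = corners
    width = int(max(np.linalg.norm(tr - tl), np.linalg.norm(br - bl)))
    height = int(max(np.linalg.norm(bl - tl), np.linalg.norm(br - tr)))
    dst = np.array(
        [[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]],
        dtype=np.float32,
    )
    # Homography mapping the skewed label onto a flat rectangle
    M = cv2.getPerspectiveTransform(corners.astype(np.float32), dst)
    return cv2.warpPerspective(image, M, (width, height))  # OCR-ready crop
```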
## What we learned
For one of us, this was his first time both attending a hackathon AND creating a mobile app. While he knew React from previous projects, he still had to learn how to use Expo along with several other libraries, and he pulled it off! We also all learnt how important reliability is when making patient technology solutions: handling important data like dosage should be done carefully.
## What's next for Snapill
The next update for Snapill would be finding a way to make the refilling process easier. Usually, there are phone numbers on vial labels, which would be a good next step to detect.
|
winning
|
## 📚🙋♂️Domain Name📚🙋♂️
**DOMAIN.COM: AUTINJOY-WITH.TECH**
## 💡Inspiration💡
Parents know that travelling can be an eye-opening experience for kids, but they also know that travelling even with one kid can be overwhelming, involving lots of bathroom stops and needing to manage boredom, motion sickness, and more.
When parents have an autistic kid, they have to consider additional factors when travelling, because when places with sensory overload, closed spaces, or constant movement stress out an autistic child, the child can have what's called an "autistic meltdown": a temper tantrum that is very difficult to manage.
So what if they had an app that could help them avoid triggering these meltdowns?
## ❓What it does❓
For parents and the caring adults they trust like a babysitter, the app allows them to:
* journal when autistic meltdowns happen and how they manifested
* visualize this data in a structured way to understand when and how meltdowns happen
* find whether businesses and attractions they plan to visit are unexpectedly crowded or have areas to look out for that could trigger meltdowns
* contact destinations like airports, hotels, and museums in advance in case accommodations can be made
For kids, the app helps them:
* communicate their needs; and
* journal their thoughts and feelings
For businesses, the app allows them to:
* alert users when they unexpectedly have large crowds when users search for them
* proactively inform users of areas with potential triggers like sensory overload from loud noises or bright lights; and
* receive feedback anonymously from users when kids have a meltdown and any relevant environmental factor so that they might decide to put a sign up to caution visitors or indicate this info somewhere
## 🏗️How we built it🏗️
We built our app using HTML, CSS, and JavaScript for the dashboard and Figma for the mobile app.
1. We used HTML/CSS/JS to build the dashboard.
2. We used Auth0 for the login and signup pages.
3. We used Twilio for multi-factor authentication and for connecting via SMS with doctors or social groups (see the sketch after this list).
4. We used Figma for the mobile application.
5. We used Google Maps for localization and for addressing restaurants or stops along the journey.
6. We used Google Cloud for deployment of the dashboard.
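A minimal sketch of a Twilio SMS alert in Python (the credentials, numbers, and message are placeholders; `Client.messages.create` is Twilio's real API call):

```python
from twilio.rest import Client  # pip install twilio

# Placeholder credentials; real values come from the Twilio console
client = Client("ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "your_auth_token")

client.messages.create(
    body="Heads up: the museum you planned to visit reports large crowds right now.",
    from_="+15005550006",   # Twilio-provided number
    to="+15551234567",      # the parent's phone number
)
```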
## 🚧Challenges we ran into🚧
* Managing a team in different time zones
* Limited coding ability on the team
## 🏅🏆ACCOMPLISHMENTS THAT WE ARE PROUD OF🏅🏆
We worked on an idea that has real-world application! We are happy to have devised a solution to a critical problem that affects such a vast number of people with no available methods or resources to assist them. We learned a lot while working on the project not just technically but also in time management. We are proud we could complete the project and deliver a beautiful fully functional Hack this weekend.
## 📚🙋♂️What we learned📚🙋♂️
We learned so much about HTML and CSS design while completing the dashboard, and implemented UI/UX to the fullest of our capabilities using Figma to make it look aesthetic and attractive for users. We managed our time well and implemented the ideas we had brainstormed. We also learnt the critical use of backend services like Twilio and Auth0 for authentication. Google Maps and Google Cloud made the application more user-friendly and helped us learn about deployment. We themed the website to be as visually attractive as possible so that autistic kids would like and enjoy it.
## 💭What's next for Autinjoy💭
1. User testing and feedback by real-time Users
2. Deployment at a large scale for Social Good & Travel ease
3. Auto-generating the travel itinerary using AI & Machine Learning
4. Making a Weekly schedule for Management of Therapy Sessions
|
## Inspiration
Cryptocurrency is the new hype of our age. We wanted to explore the possibilities of managing Cryptocurrency transactions at the tips of our fingers through social media outlets. At the same time, we wanted to tackle the problem of splitting bills when we eat out with friends, through sending Ethereum to settle payments.
## What it does
Our bot has 6 main commands that can be used after setting up with your Facebook account & the public key of your EtherWallet via cryptpay.tech and installing the application to your local computer:
* /send - sends a set amount to a designated user.
* /confirm - accepts payment on the receiver's end.
* /split - splits the bill across the number of people in the chat.
* /dist - distributes an amount per person.
* /receipt - takes a picture of a receipt and splits the bill based on the user's prompts.
* /sell - sells an amount to the market.
We use these commands in the FB chat to facilitate real-time transactions.
## How We built it
With security in mind and developing around the spirit of decentralization, a user's wallet/private key never leaves their computer. The architecture of the entire project, as such, was more complex than your average chatbot's.
There are 3 main components to this project:
##### Local Chatbot/Wallet
If we hosted a central chatbot that managed everyone's funds, that would have defeated the purpose of using cryptocurrency as our medium. As such, we developed a chatbot/wallet hybrid that gives users the full functionality of a server-side bot, right in their hands and under their control. We had the user input their wallet details, and by using offline transaction signing, users are not required to run a full Ethereum node but can still interact with the blockchain network using Messenger.
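A rough sketch of offline transaction signing in Python with `eth_account` (the field values are placeholders; the point is that the private key signs locally and only the raw signed payload ever leaves the machine):

```python
from eth_account import Account

tx = {
    "to": "0xRecipientPublicAddress",  # placeholder address
    "value": 10**16,                   # 0.01 ETH in wei
    "gas": 21000,
    "gasPrice": 20 * 10**9,
    "nonce": 0,                        # fetched from any public node
    "chainId": 1,
}

# The private key never leaves the user's computer
signed = Account.sign_transaction(tx, private_key="0x...")

# Only this opaque signed blob is broadcast through a public node
raw_payload = signed.rawTransaction  # .raw_transaction in newer eth-account releases
```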
##### CryptPay.tech
Let's say `Person A` wants to send a payment of $10 to `Person B` using CryptPay. `Person A` will have to send a transaction to `Person B`'s public key (which can be thought of as their house address). CryptPay.tech allows friends to find each other's public keys without even asking for them, beyond the one-time setup. This means you don't have to ask for their email address or their long hexadecimal public key; we do it for you.
##### Receipt Scanning + Other Features
Any user can use the /receipt command to prompt the receipt bill splitting function. CryptPay will ask the user to take a photo of their recent receipt transaction and analyze the purchases. Using the Google Vision API and Tesseract OCR API, we are able to instantaneously identify the total amount of the purchase. The user can then use /split to equally distribute the bill to each member in the chat.
## Challenges We Ran Into
Originally, we contemplated creating a messenger bot for transactions with real money. However, this raises substantial security issues, since it is not secure for third parties to hold people's private banking information. We spoke with representatives from Scotiabank about our concerns and asked for other possible issues to tackle. After discussion, we decided to use cryptocurrency transactions because they bypass the Interac debit system and everything is fluid.
## Accomplishments that We're Proud of
* Learning how to use Facebook Messenger API
* Creating and packaging a full Node application for end-users
* Learning to architect the project in an unconventional way
* Exploring REST
* Setting up fluid transactions with Ethereum
* Having a fully functional prototype within 24 hours
* Creating something that is easy to use and that everyone can use
## What's next for CryptPay
* Adding more crypto coins
* Adding the ability to cancel a send before it completes
* Have a command for market research
|
View the SlideDeck for this project at: [slides](https://docs.google.com/presentation/d/1G1M9v0Vk2-tAhulnirHIsoivKq3WK7E2tx3RZW12Zas/edit?usp=sharing)
## Inspiration / Why
It is no surprise that mental health has been a prevailing issue in modern society. 16.2 million adults in the US and 300 million people in the world have depression according to the World Health Organization. Nearly 50 percent of all people diagnosed with depression are also diagnosed with anxiety. Furthermore, anxiety and depression rates are a rising issue among the teenage and adolescent population. About 20 percent of all teens experience depression before they reach adulthood, and only 30 percent of depressed teens are being treated for it.
To help battle for mental well-being within this space, we created DearAI. Since many teenagers do not actively seek out support for potential mental health issues (either due to financial or personal reasons), we want to find a way to inform teens about their emotions using machine learning and NLP and recommend to them activities designed to improve their well-being.
## Our Product:
To help us achieve this goal, we wanted to create an app that integrated journaling, a great way for users to input and track their emotions over time. Journaling has been shown to reduce stress, improve immune function, boost mood, and strengthen emotional functions. Journaling apps already exist; however, ours also performs sentiment analysis on user entries to help them be aware of and keep track of their emotions over time.
Furthermore, every time a user inputs an entry, we want to recommend the user something that will lighten up their day if they are having a bad day, or something that will keep their day strong if they are having a good day. As a result, if the natural language processing results return a negative sentiment like fear or sadness, we will recommend a variety of prescriptions from meditation, which has shown to decrease anxiety and depression, to cat videos on Youtube. We currently also recommend dining options and can expand these recommendations to other activities such as outdoors activities (i.e. hiking, climbing) or movies.
**We want to improve the mental well-being and lifestyle of our users through machine learning and journaling. This is why we created DearAI.**
## Implementation / How
Research has found that ML/AI can detect the emotions of a user better than the user themself can. As a result, we leveraged the power of IBM Watson’s NLP algorithms to extract the sentiments within a user’s textual journal entries. With the user’s emotions now quantified, DearAI then makes recommendations to either improve or strengthen the user’s current state of mind. The program makes a series of requests to various API endpoints, and we explored many APIs including Yelp, Spotify, OMDb, and Youtube. Their databases have been integrated and this has allowed us to curate the content of the recommendation based on the user’s specific emotion, because not all forms of entertainment are relevant to all emotions.
For example, the detection of sadness could result in recommendations ranging from guided meditation to comedy. Each journal entry is also saved so that users can monitor the development of their emotions over time.
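To illustrate the emotion-extraction step, here is a hedged sketch using the IBM Watson Natural Language Understanding Python SDK (the version string, URL, and key are placeholders; this is one plausible way to wire the call, not our exact code):

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, EmotionOptions

authenticator = IAMAuthenticator("your-api-key")  # placeholder credentials
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01", authenticator=authenticator)
nlu.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com"
)

entry = "Today was rough. I felt overwhelmed before the exam."
result = nlu.analyze(text=entry, features=Features(emotion=EmotionOptions())).get_result()

# Scores for sadness, joy, fear, disgust, and anger drive the recommendation
print(result["emotion"]["document"]["emotion"])
```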
## Future
There are a considerable amount of features that we did not have the opportunity to implement that we believe would have improved the app experience. In the future, we would like to include video and audio recording so that the user can feel more natural speaking their thoughts and also so that we can use computer vision analysis on the video to help us more accurately determine users’ emotions. Also, we would like to integrate a recommendation system via reinforcement learning by having the user input whether our recommendations improved their mood or not, so that we can more accurately prescribe recommendations as well. Lastly, we can also expand the APIs we use to allow for more recommendations.
|
partial
|
## 🧠 Inspiration
Non-fungible tokens (NFTs) are digital blockchain-linked assets that are completely unique and not interchangeable with any other asset. The market for NFTs tripled in 2020, with the total value of transactions increasing by 299% year on year to more than $250m\*. Because they are unique and impossible to replicate, NFTs can bridge the gap between virtual and physical assets: it is possible to tokenize art and prove ownership with the use of NFTs.
Our team wanted to design a platform to bring the value of social responsibility into this newly blooming industry, and to increase the accessibility of and knowledge about NFTs. Our platform enables artists to securely register and list their art on the Hedera network and sell it, with 10% of every transaction going to a charitable organization specifically tied to a UN Sustainable Development Goal. We also wanted to add a social aspect to gamify the donation process.
We also wanted to use a blockchain technology with lower gas fees and more reliability to increase our user base.
## 🤖 What it does
Every user who joins us has an account on the Hedera network created for them. They can list their art assets on the Hedera network for an amount of their choosing, valued in HBAR. Other users can browse the marketplace and purchase ownership of the art, which can be transferred to another wallet and converted into any form of cryptocurrency. Users can filter art by the UN SDG goal they are most passionate about or by charitable organization. Additionally, we list our top contributors to charities in our leaderboard, which can be shared on social media to promote activity on our platform. Since we are using blockchain, every transaction is recorded and immutable, so users can trust that their donations are going to the right place.
## 🛠 How we built it
We used Sketch to design our application, a JS, HTML, and CSS frontend, and an Express.js backend.
We used the Hedera Hashgraph blockchain tokenization service, file service, account creation service, and transfer of assets functionality.
## ⚙️ Challenges we ran into
Time constraints caused difficulties connecting our Express.js backend to our frontend. We are all new to developing with blockchain, and for some of us it was our first time learning about many of the core concepts related to the technology.
## 🏆 Accomplishments that we're proud of
We are proud of the amount we accomplished within the time that we had. Developing with Hedera for the first time was difficult but we were able to see our transactions live on the test net which was very rewarding and shows the potential for our application when it is complete.
## 💁♀️What we learned
What was mainly new to us were the fundamentals of blockchain and how to develop on the Hedera blockchain. Express.js was also fairly new to us.
## ☀️What's next for Cryptble
Enabling KYC using a KYC provider, connecting our backend, improving our UI/UX, achieving compliance, contacting charities and organizations to join our platform.
## 🧑🤝🧑 Team Members
Pegah#0002
SAk#9408
daimoths#3947
Sharif#9380
* according to a new study released by NonFungible.com
|
## Inspiration
NFTs or Non-Fungible Tokens are a new form of digital assets stored on blockchains. One particularly popular usage of NFTs is to record ownership of digital art. NFTs offer several advantages over traditional forms of art including:
1. The ledger of record which is a globally distributed database, meaning there is persistent, incorruptible verification of who is the actual owner
2. The art can be transferred electronically and stored digitally, saving storage and maintenance costs while simultaneously providing a memetic vehicle that can be seen by billions of people over the internet
3. Royalties can be programmatically paid out to the artist whenever the NFT is transferred between parties leading to more fair compensation and better funding for the creative industry
These advantages resulted in, [the total value of NFTs reaching $41 billion dollars at the end of 2021](https://markets.businessinsider.com/news/currencies/nft-market-41-billion-nearing-fine-art-market-size-2022-1). Clearly, there is a huge market for NFTs.
However, many people do not know the first thing about creating an NFT and the process can be quite technically complex. Artists often hire developers to help turn their art into NFTs and [businesses have been created merely to help create NFTs](https://synapsereality.io/services/synapse-new-nft-services/).
## What it does
SimpleMint is a web app that allows anyone to create an NFT with a few clicks of a button. All it requires is for the user to upload an image and give the NFT a name. Upon clicking ‘mint now’, an NFT is created with the image stored in IPFS and automatically deposited into the creator's blockchain wallet. The underlying blockchain is [Hedera](https://hedera.com/), which is a carbon negative, enterprise grade blockchain trusted by companies like Google and Boeing.
## How we built it
* React app
* IPFS for storage of uploaded images (see the sketch after this list)
* Hedera blockchain to create, mint, and store the NFTs
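We used the IPFS js-sdk, but the storage step looks roughly like this equivalent Python sketch with `ipfshttpclient` (assumes a local IPFS daemon on the default port; the filename is an example):

```python
import ipfshttpclient  # pip install ipfshttpclient

# Connects to a local IPFS daemon on the default API port
client = ipfshttpclient.connect()

res = client.add("artwork.png")   # pins the image to IPFS
cid = res["Hash"]                 # content identifier, stored in the NFT metadata
print(f"ipfs://{cid}")
```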
## Challenges we ran into
* Figuring out how to use the IPFS js-sdk to programmatically store and retrieve image files
* Figuring out wallet authentication after the Chrome Web Store listing for the HashPack app went down, which rendered the one-click wallet-connection process useless; we had to check Hedera's Discord to find an alternative solution
## Accomplishments that we're proud of
* Building a working MVP in a day!
## What we learned
* How IPFS works
* How to build on Hedera with the javascript SDK
## What's next for SimpleMint
We hope that both consumers and creators will be able to conveniently turn their images into NFTs to create art that will last forever and partake in the massive financial upside of this new technology.
|
## Inspiration
As part of our university tour, we went to a materials recovery facility (MRF). There we were able to fully appreciate the process of recycling and the whole supply chain from a household trashbin to a manufacturing mill. We also talked to a couple of people at the MRF, where we learnt that there are certain hurdles and costs that can be avoided if a proper system is implemented.
## What it does
The fact is that there are brokers at every step of the supply chain. The municipality has to **pay** these brokers to take the recyclables, after which the MRFs have to **buy** the recyclables from them. The brokers have no *cost price* and earn money from both parties. What if we could remove these brokers and streamline the whole process?
BEATS aims to embed blockchain into the advanced recycling value chain to provide a fully traceable and accurately labelled record of recycled materials, from waste sourcing to use in new production streams. This will provide all the stakeholders in the recycling industry with visibility into the provenance and quality of the materials entering and exiting their facilities. Municipalities can create auctions, and the MRFs can compete among themselves to get the best price, which will be cheaper than paying a broker. The MRFs then create auctions of their own to sell their bales to the manufacturing mills. Thus we decided to mint an ERC721 non-fungible token for every record of these bales, which is transferred from one party to the other and is fully traceable, so that in the end we can also find out how much of the waste is actually going into landfills.
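As a hedged sketch of the token-transfer step with web3.py v6 (the RPC endpoint, addresses, `erc721_abi`, and `bale_token_id` are placeholders defined elsewhere; `safeTransferFrom` is the standard ERC721 method, not our full contract):

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder endpoint

# bale_nft wraps our deployed ERC721 contract; one token per bale record
bale_nft = w3.eth.contract(address="0xContractAddress", abi=erc721_abi)

tx = bale_nft.functions.safeTransferFrom(
    "0xMunicipality",  # current owner (auction seller)
    "0xWinningMRF",    # auction winner
    bale_token_id,
).build_transaction({
    "from": "0xMunicipality",
    "nonce": w3.eth.get_transaction_count("0xMunicipality"),
})
# tx is then signed and broadcast; the transfer is permanently on-chain
```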
## How we built it
The backbone of the project is Ethereum, and for the auctions we tried to use Axelar so that consumers holding any cryptocurrency can take part in the dealing. For the front end we used React, with Firebase Firestore and Cloud Functions for the backend. Authentication is also a somewhat niche idea: we do not use any email or phone number to sign the user in. Instead, we use only the MetaMask wallet; upon signing a message (nonce), the user can be authenticated.
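A minimal sketch of that nonce-based login check in Python with `eth_account` (the function and variable names are ours for illustration):

```python
from eth_account import Account
from eth_account.messages import encode_defunct

def verify_login(nonce: str, signature: str, claimed_address: str) -> bool:
    """The server issues `nonce`; MetaMask signs it client-side."""
    message = encode_defunct(text=nonce)
    recovered = Account.recover_message(message, signature=signature)
    # If the recovered signer matches the claimed wallet, the user is authentic
    return recovered.lower() == claimed_address.lower()
```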
## Challenges we ran into
Using Axelar was certainly the toughest challenge, since we had little to no experience dealing with cross-chain communications, and we still could not fully implement it due to time constraints and the vision of the project.
## Accomplishments that we're proud of
We were able to complete the auctions along with a good UI.
## What we learned
Basics of cross-chain communication and time management
video : <https://drive.google.com/file/d/1tIB1BSGazD9gEIlMjcOybdIQLBvKrb4L/view>
|
partial
|
## Inspiration
Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair-accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
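The filtering script looked roughly like this sketch (the filenames and exact regex are illustrative; the idea is pulling decimal lat/lng pairs out of the scraped JavaScript):

```python
import re

js_source = open("accessible_routes.js").read()

# Match "lat, lng" decimal pairs such as 42.3601, -71.0942
pairs = re.findall(r"(-?\d{1,3}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)", js_source)

with open("latitudes.txt", "w") as lat_f, open("longitudes.txt", "w") as lng_f:
    for lat, lng in pairs:
        lat_f.write(lat + "\n")
        lng_f.write(lng + "\n")
```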
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world.
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we achieved successful teamwork and collaboration. Although we formed our team later than other groups, we communicated effectively and worked together to bring our project to fruition. We are proud of the end result: a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
## 💡Inspiration💡
Statistics show that hate crimes and street violence have increased sharply, and the harm does not end with the immediate victims: people who share a victim's identity feel it too, and many oppressed groups face the same physical and emotional hostility. Across genders, people report feeling more anxious about exploring the outside environment due to higher crime rates. After witnessing this upsurge in urban violence and fear of the outside world, we developed Walk2gether, an app that addresses the issue of feeling unsafe when venturing out alone and fundamentally alters the way we travel.
## 🏗What it does🏗
Walk2gether offers a remedy to the stress that comes with walking outside, especially alone. The app lets users travel with friends, which we found lessens anxiety, and surfaces information about local criminal activity to help people make informed travel decisions. Users can adjust settings to be warned of specific situations, and heat-map technology displays red-alert zones in real time, allowing users to chart their route comfortably. Its campaign for social change is closely tied to our desire to see more people, particularly women, outside without being burdened by fears or constantly having to monitor their surroundings.
## 🔥How we built it🔥
How can we make women feel more secure while roaming about their city? How can we bring together student travellers for a safer journey? These questions helped us outline the issues we wanted to address as we moved into the design stage. We created the website using HTML/CSS/JS and used Figma to prepare the prototype. We used Auth0 for multi-factor authentication, and CircleCI so that we can deploy the website through a smooth, easy-to-verify pipeline. AssemblyAI handles speech transcription and works with Twilio messaging to connect friends for the journey to a destination; Twilio SMS is also used for alerts and notification ratings. We also used Coil for memberships via web-based monetization, and for donations to provide better safety-route facilities.
## 🛑 Challenges we ran into🛑
One problem we encountered was market viability: there are many safety and crime-reporting apps on the app store. Many of them, however, are either paid, have poor user interfaces, or do not plan routes based on reported incidents. Another challenge was scoping the solution; there were many additional features we could have included, but we had to pick the handful most critical to getting started with the product.
Our team also began working on the hack only a day before the deadline, and we ran into some difficulties while tackling numerous problems. Learning to work with the various technologies came with a learning curve. We have ideas for other features that we'd like to include in the future, but we wanted to make sure that what we had was production-ready and had a pleasant user experience first.
## 🏆Accomplishments that we're proud of: 🏆
We arrived at a solution to this problem and created an app that is viable and could be widely used by women, college students, and other frequent walkers!
We also completed the front end and back end within the tight deadlines we were given, and we are quite pleased with the final outcome. We are also proud that we learned so many technologies and completed the whole project with just two members on the team.
## What we learned
We discovered critical safety trends and pain points that our product may address. Over the last few years, urban centres have seen a significant increase in hate crimes and street violence, and the internet has made individuals feel even more isolated.
## 💭What's next for Walk2gether💭
In the coming days we plan to add detailed crime mapping and offer additional context to help users learn about the crimes happening around them.
|
winning
|
## Inspiration
According to the United Nations, in 2013 global forced displacement topped fifty million for the first time since the Second World War. The largest groups are from Afghanistan, Syria, and Somalia. A large percentage of the people leaving these war-torn countries apply to be refugees, but sadly only a small percentage are accepted, because of limits on how many people can be taken in and the extensive time the process requires. The processing time for refugees to come to Canada can take up to fifty-two months, with multiple trips to a visa office that can be stationed countries away, interviews, and background checks.
As hackers, it is our moral obligation to extend our hand and provide solutions for those in need. Therefore, investing our time and resources into making RefyouFree a reality would substantially help the lives of individuals seeking refuge or asylum. The many individuals experiencing the hardship of leaving everything behind in hopes of a better future need all the assistance they can get. Children brought up in war-torn countries know only war and believe their reason for being alive is to either run away from it or join it. There is so much more to life than war and hardship, and RefYouFree will support refugees while they find a better reason for being alive. With this mobile application, individuals have a way to message their families, call them, find refuge, and get real-time updates of what is happening around them.
## What it does
The purpose of RefyouFree is to provide a faster and more convenient way for those in need to apply to countries as refugees and start a better life. It would ultimately be a web application plus a mobile application for iOS.
## How we built it
The iOS app was built in Xcode using Objective-C and Swift, with the Sinch API. For the web app, AWS servers run Ruby on Rails, with a front end written in HTML and CSS.
## Challenges we ran into
One challenge the team and I ran into was working out how many refugees have access to a smartphone, a computer, or the internet. After a bit of research we learned that in Syria there are 87 phones per 100 people. We hope that people without access to these resources can go to immigration offices, airports, or seaports, where they could apply and use the app.
## Accomplishments that I'm proud of
Getting the code to run without an error is always an accomplishment.
## What I learned
Teamwork is key. Without such an amazing and dedicated team, I do not believe we could have gotten so far. We come from different places and did not know each other until the hackathon, but we were able to put our heads together and get it to work! The iOS developers learned a ton about API integration as well as Swift, and the web developers learned a lot about the server side, backend, frontend, and Ruby on Rails.
## What's next for RefyouFree
Work on this project continues right after we leave the hackathon. We have messaged various United Nations members, along with members of the Canadian immigration office, to see whether we would be allowed to pursue this idea. Although the team only met this weekend, it has high compatibility and a great work ethic to get things done.
|
## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up Firebase to connect to the front-end React app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential and, despite all the recent efforts, still unmet. We also learned by using Figma and Firebase and by trying out many open-source platforms for building apps.
## What's next for HearMeOut
We hope to expand the chatbot's support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach, with specific guidelines and rules, in a variety of languages.
|
## Inspiration
We all know that everyone in this country probably has a mobile phone, even the less-fortunate. We wanted to help them by developing something useful.
## What it does
Our team decided to make a web application to help the less fortunate connect with valuable resources: find shelters and addiction therapy centres, connect with appropriate help in a moment of crisis, and learn how to become financially stable, all within a few clicks.
## How we built it
We built this using HTML, CSS, and Bootstrap for the front end, and JavaScript, Firebase, and the Google Maps API for the back end. We were able to finish this project sooner than expected.
## Challenges we ran into
We struggled when developing the custom maps for the user. And since it was our first time working with Firebase and the DOM (Document Object Model), we experienced difficulty making the front and back end communicate with each other.
## Accomplishments that we're proud of
We were able to make a working model of our initial idea, and we also had time to develop a Flutter app which has similar functionalities as the web app.
## What we learned
We gained invaluable experience working with APIs and Firebase. We are now confident with using them in the future.
## What's next for UpHelping
|
winning
|
## Inspiration
Companies lack insight into their users, audiences, and marketing funnel.
This is an issue I've run into on many separate occasions. Specifically,
* while doing cold outbound marketing, I need better insight into the key variables of successful outreach
* while writing a blog, I have no idea who reads it
* while triaging inbound, I don't know which users to prioritize
Given a list of user emails, Cognito scrapes the internet finding public information about users and the companies they work at. With this corpus of unstructured data, Cognito allows you to extract any relevant piece of information across users. An unordered collection of text and images becomes structured data relevant to you.
## A Few Example Use Cases
* Startups going to market need to identify where their power users are and their defining attributes. We allow them to ask questions about their users, helping them define their niche and better focus outbound marketing.
* SaaS platforms such as Modal have trouble with abuse. They want to ensure people joining are not going to abuse it. We provide more data points to make better judgments such as taking into account how senior of a developer a user is and the types of companies they used to work at.
* VCs such as YC have emails from a bunch of prospective founders and highly talented individuals. Cognito would allow them to ask key questions such as what companies are people flocking to work at and who are the highest potential people in my network.
* Content creators such as authors on Substack looking to monetize their work have a much more compelling case when coming to advertisers with a good grasp on who their audience is.
## What it does
Given a list of user emails, we crawl the web, gather a corpus of relevant text data, and allow companies/creators/influencers/marketers to ask any question about their users/audience.
We store these data points and allow for advanced querying in natural language.
[video demo](https://www.loom.com/share/1c13be37e0f8419c81aa731c7b3085f0)
## How we built it
We orchestrated 3 ML models across 7 different tasks in 30 hours:
* search results person info extraction
* custom field generation from scraped data
* company website details extraction
* facial recognition for age and gender
* NoSQL query generation from natural language
* crunchbase company summary extraction
* email extraction
This culminated in a full-stack web app with batch processing via async pubsub messaging. Deployed on GCP using Cloud Run, Cloud Functions, Cloud Storage, PubSub, Programmable Search, and Cloud Build.
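As a rough sketch of that batch step, the fan-out could look like the following Python (the project ID, topic name, batch size, and worker contract are placeholders, not our exact configuration):

```python
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# "my-project" and "user-enrichment" are placeholder names
topic_path = publisher.topic_path("my-project", "user-enrichment")

def enqueue_batches(emails: list[str], batch_size: int = 25) -> None:
    """Fan user emails out as batched enrichment jobs; each message is
    consumed by a worker that runs the scraping + extraction pipeline."""
    for i in range(0, len(emails), batch_size):
        payload = json.dumps({"emails": emails[i:i + batch_size]}).encode("utf-8")
        publisher.publish(topic_path, data=payload)
```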
## What we learned
* how to be really creative about scraping
* batch processing paradigms
* prompt engineering techniques
## What's next for Cognito
1. predictive modeling and classification using scraped data points
2. scrape more data
3. more advanced queries
4. proactive alerts
|
## Inspiration
With the ubiquitous and readily available ML/AI turnkey solutions, the major bottlenecks of data analytics lay in the consistency and validity of datasets.
**This project aims to enable a labeller to be consistent with both their fellow labellers and their past self while seeing the live class distribution of the dataset.**
## What it does
The UI allows a user to annotate datapoints from a predefined list of labels while seeing the distribution of labels this particular datapoint has been previously assigned by another annotator. The project also leverages AWS' BlazingText service to suggest labels of incoming datapoints from models that are being retrained and redeployed as it collects more labelled information. Furthermore, the user will also see the top N similar data-points (using Overlap Coefficient Similarity) and their corresponding labels.
In theory, this added information will motivate the annotator to remain consistent when labelling data points and also to be aware of the labels that other annotators have assigned to a datapoint.
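For reference, the top-N lookup mentioned above can be sketched as follows; the whitespace tokenization here is a simplification of the actual preprocessing:

```python
def overlap_coefficient(a: str, b: str) -> float:
    """Szymkiewicz-Simpson overlap: |A & B| / min(|A|, |B|) over token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / min(len(ta), len(tb))

def top_n_similar(query: str, labelled_docs: list[str], n: int = 5) -> list[str]:
    """Return the n previously labelled datapoints most similar to the query."""
    return sorted(labelled_docs,
                  key=lambda doc: overlap_coefficient(query, doc),
                  reverse=True)[:n]
```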
## How we built it
The project utilises Google's Firestore realtime database with AWS Sagemaker to streamline the creation and deployment of text classification models.
For the front end we used Express.js, Node.js, and CanvasJS to create the dynamic graphs. For the backend we used Python, AWS SageMaker, Google's Firestore, and several NLP libraries such as spaCy and Gensim. We leveraged the realtime functionality of Firestore to trigger functions (via listeners) in both the front end and back end. After K detected changes in the database, a new BlazingText model is trained, deployed, and used for inference on the current unlabeled datapoints, with the pertinent changes shown on the dashboard.
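A minimal sketch of that retraining trigger, assuming a `labels` collection and a hypothetical `retrain_and_deploy()` helper that kicks off the BlazingText job (the value of K is illustrative):

```python
from google.cloud import firestore

K = 50  # retrain after this many new labels; the exact value is illustrative
db = firestore.Client()
pending = 0

def on_labels_changed(col_snapshot, changes, read_time):
    """Listener callback: count newly added labels until the threshold K."""
    global pending
    pending += sum(1 for change in changes if change.type.name == "ADDED")
    if pending >= K:
        pending = 0
        retrain_and_deploy()  # hypothetical helper wrapping the SageMaker job

watch = db.collection("labels").on_snapshot(on_labels_changed)
```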
## Challenges we ran into
The initial setup of SageMaker was a major time sink; the constant permission errors when trying to create instances and assign roles were very frustrating. Additionally, our limited knowledge of front-end tools made the process of creating dynamic content challenging and time-consuming.
## Accomplishments that we're proud of
We actually got the ML models to be deployed and predict our unlabelled data in a pretty timely fashion using a fixed number of triggers from Firebase.
## What we learned
Clear and effective communication is super important when designing the architecture of technical projects. There were numerous times when two team members were vouching for the same structure but the lack of clarity led to an apparent disparity.
We also realized Firebase is pretty cool.
## What's next for LabelLearn
Creating a more interactive UI, optimizing performance, and adding more sophisticated text-similarity measures.
|
## Inspiration
Survival from out-of-hospital cardiac arrest remains unacceptably low worldwide, and it is the leading cause of death in developed countries. Sudden cardiac arrest takes more lives than HIV and lung and breast cancer combined in the U.S., where survival from cardiac arrest averages about 6% overall, taking the lives of nearly 350,000 annually. To put it in perspective, that is equivalent to three jumbo jet crashes every single day of the year.
For every minute that passes between collapse and defibrillation, survival rates decrease 7-10%. 95% of cardiac arrest victims die before getting to the hospital, and brain death starts 4 to 6 minutes after the arrest.
Yet survival rates can exceed 50% when immediate and effective cardiopulmonary resuscitation (CPR) is combined with prompt use of a defibrillator. The earlier defibrillation is delivered, the greater the chance of survival, and starting CPR immediately doubles it. The difference between current survival rates and what is possible has given rise to the need for this app: IMpulse.
Cardiac arrest can occur anytime and anywhere, so we need a way to monitor heart rate in realtime without imposing undue burden on the average person. Thus, by integrating with Apple Watch, IMpulse makes heart monitoring instantly available to anyone, without requiring a separate device or purchase.
## What it does
IMpulse is an app that runs continuously on your Apple Watch. It monitors your heart rate, detecting for warning signs of cardiac distress, such as extremely low or extremely high heart rate. If your pulse crosses a certain threshold, IMpulse captures your current geographical location and makes a call to an emergency number (such as 911) to alert them of the situation and share your location so that you can receive rapid medical attention. It also sends SMS alerts to emergency contacts which users can customize through the app.
## How we built it
With newly available access to HealthKit data, we queried heart sensor data from the Apple Watch in real time. When these data points are above or below certain thresholds, we capture the user's latitude and longitude and make an HTTP request to a Node.js server endpoint (currently deployed to Heroku at <http://cardiacsensor.herokuapp.com>) with this information. The server uses the Google Maps API to convert the latitude and longitude into a precise street address, then calls the Nexmo SMS and Call APIs, which dispatch the information to emergency services such as 911 and other ICE contacts.
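Our actual endpoint is the Node.js server above, but the server-side flow is roughly as follows, shown here as a Python sketch (the thresholds, key handling, and `dispatch_alerts` helper are illustrative, not our production logic):

```python
import requests

GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
LOW_BPM, HIGH_BPM = 40, 180  # illustrative thresholds, not clinical guidance

def handle_reading(bpm: float, lat: float, lng: float, api_key: str) -> None:
    """Reverse-geocode the watch's location and alert when the pulse is abnormal."""
    if LOW_BPM <= bpm <= HIGH_BPM:
        return  # pulse is in the normal band; nothing to do
    resp = requests.get(GEOCODE_URL,
                        params={"latlng": f"{lat},{lng}", "key": api_key})
    results = resp.json().get("results", [])
    address = results[0]["formatted_address"] if results else f"{lat},{lng}"
    dispatch_alerts(bpm, address)  # hypothetical wrapper around the Nexmo SMS/Call APIs
```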
## Challenges we ran into
1. There were many challenges testing the app through the Xcode iOS simulators. We couldn't find a way to simulate heart sensor data from our laptops, and it was also challenging to generate location data through the simulator.
2. No one on the team had developed in iOS before, so learning Swift was a fun challenge.
3. It was challenging to simulate the circumstances of a cardiac arrest in order to test the app.
4. Producing accurate and precise geolocation data was a challenge and we experimented with several APIs before using the Google Maps API to turn latitude and longitude into a user-friendly, easy-to-understand street address.
## Accomplishments that we're proud of
This was our first PennApps (and for some of us, our first hackathon). We are proud that we finished our project in a ready-to-use, demo-able form. We are also proud that we were able to learn and work with Swift for the first time. We are proud that we produced a hack that has the potential to save lives and improve overall survival rates for cardiac arrest that incorporates so many different components (hardware, data queries, Node.js, Call/SMS APIs).
## What's next for IMpulse
Beyond just calling 911, IMpulse hopes to build out an educational component of the app that can instruct bystanders to deliver CPR. Additionally, with the Healthkit data from Apple Watch, IMpulse could expand to interact with a user's pacemaker or implantable cardioverter defibrillator as soon as it detects cardiac distress. Finally, IMpulse could communicate directly with a patient's doctor to deliver realtime heart monitor data.
|
winning
|
## Inspiration
At LinguLink, we understand the struggles of language learning first-hand. Our team, which includes three English-second-language learners, recognizes the difficulties that may arise in language learning and the lack of resources available to individuals in underfunded institutions. We are dedicated to creating a meaningful and impactful language learning experience that gives users the opportunity to make a difference in the world. By pairing users with ESL students, LinguLink provides the unique opportunity to apply language skills practically and purposefully, while making a tangible impact on the lives of others through 1-1 mentorship. By promoting a collaborative and interactive learning experience that can enhance retention and understanding of language concepts, we hope to bridge the gap between language learners of all levels and empower students to reach their full potential.
## What it does
Research has shown that people who teach others are more successful at actually learning and put more effort into their learning process. This app capitalizes on this concept by providing users with the opportunity to teach a child while learning a new language, creating a meaningful and impactful experience that promotes language learning and social impact. By focusing on this unique and research-backed approach, our language-learning app aims to disrupt traditional language-learning methods, which often rely on memorization and rote learning. While there are many language-learning apps and services available, few, if any, offer the same level of real-world, practical application that our app provides.
LinguLink is designed to revolutionize how people learn languages by offering a meaningful and impactful experience. By pairing users with underrepresented children who speak the language they are learning, the app creates a unique opportunity for users to apply their language skills practically and purposefully, while also making a tangible impact on the lives of others.
The app uses state-of-the-art language teaching techniques to provide a comprehensive learning experience that covers grammar, syntax, basic words, and other relevant language skills. Using machine learning and artificial intelligence, the app adapts to each user's learning style and progress, providing a tailored learning experience that is both engaging and effective. By incorporating gamification elements, such as challenges and rewards, and customizable learning paths, the app makes language learning more interactive and enjoyable.
However, what truly sets this app apart is the opportunity for users to apply what they have learned through 1-1 mentorship sessions with underrepresented children. Through these sessions, users can help bridge the gap between language learners of all levels and empower students to reach their full potential. Say you want to learn Spanish. After taking an assessment test, you are directed to a personalized lesson catered to the areas you need to work on. Upon completing all the modules, you have the option to take a Spanish proficiency test, and once you pass, you will be paired with an ESL student from a lower-income community who is learning English. You can then use your newfound knowledge to mentor them and help them reach their learning goals. Not only does this help the student, it also builds your own confidence and makes language learning more meaningful.
## How we built it
For our backend, we utilized Flask for our server and Firebase's database to store the questions given to the user. We chose Firebase because it made it extremely easy to add, remove, or change questions quickly. For the front end, we used HTML/CSS and JavaScript to build the design of the website.
## Challenges we ran into
One challenge we ran into was determining the appropriate technology stack for the website and integrating the different technologies we were using, such as front-end frameworks, back-end languages, and hosting services. This required us to learn new technologies and troubleshoot issues that arose during the development process.
Another challenge we faced was determining the architecture and design of the website, including the layout, user interface, and user experience. Creating a functional and visually appealing website that is easy to navigate was difficult, since we had very limited experience in UX design and prototyping tools such as Figma. Likewise, for our MVP (minimum viable product), we had to make some tough decisions on which features to prioritize and which to leave out due to time constraints. This required careful planning and communication within the team to ensure we were on the same page and making the most efficient use of our time and resources.
## Accomplishments that we're proud of
We are proud that we were able to get a working demo complete after all the errors we had been facing. From prototyping in Figma to developing a working product, we have come a long way and learned a lot throughout the process. We are proud that we came together as a team and worked to create a product we are all proud of.
We were also proud of how well our team worked together at communicating and dividing up the shared work between members.
## What we learned
During this hackathon, we discovered the importance of questioning our idea and being open to feedback and suggestions from others. While our idea of a language-learning app that pairs users with underrepresented children was promising, we recognized the need to refine and improve it as we continued development. Through discussions with interviewers at YCombinator and brainstorming sessions, we identified potential areas of weakness and ways to address them, resulting in a stronger and more well-rounded concept. Furthermore, we learned the value of seeking help and guidance from others, especially in areas where we lacked experience, such as UX design. As we encountered challenges in determining the appropriate technology stack for our website or designing a user-friendly interface, we sought out resources and mentors to assist us, and this willingness to learn and ask for help ultimately led to a more successful and polished final product.
Overall, we learned about the importance of remaining open-minded and adaptable, as well as the benefits of seeking feedback and assistance from others. By embracing a growth mindset and a willingness to learn and improve through taking criticism, we were able to overcome challenges and develop a strong and impactful product.
## What's next for LinguLink
**Virtual language immersion:** By using augmented or virtual reality, LinguLink could create virtual environments that mimic real-world situations where learners can practice their language skills, such as ordering food at a restaurant, navigating a city, or engaging in conversations with native speakers. This could make the language learning experience more immersive and engaging. Likewise, AR/VR can immerse learners in the culture of the language they are studying, allowing them to explore famous landmarks, museums, and cultural events. This could help learners gain a deeper appreciation of the culture behind the language.
**Introducing live, in-app language classes:** While LinguLink already offers real-world language practice through its language partner feature, adding live, in-app language classes could provide a more structured and comprehensive learning experience, which could appeal to users who prefer a more traditional learning approach.
**Offering specialized language courses:** To cater to learners with specific language learning needs, LinguLink could consider offering specialized courses, such as business language, medical language, or legal language.
## Ethics
When contemplating this issue, it is worth noting that our team comprises three second-language learners of English (ESL). Only one of us was fortunate enough to have undergone a formal ESL (English as a Second Language) program, while the remaining members experienced significant difficulties in their language-learning endeavors. When sharing our experiences as ESL learners, one commonality among the three of us was a failure of resources: the lack of mentorship in the underfunded institutions where we resided. The member who had undergone the program is now a researcher in AI ethics working on autonomy, the principle that upholds an individual's right to make their own decisions about their health and well-being.
With LinguLink, we recognize the importance of upholding ethical standards and prioritizing the safety and well-being of all participants, especially when pairing minors with adult language learners. As such, we prioritize confidentiality and informed consent, which are crucial in respecting users' autonomy and ensuring they control their personal information and decision-making processes. By providing transparency about the program and its requirements, we aim to empower users to make informed decisions about their participation in LinguLink. Our ultimate goal is to create a safe and supportive environment where users can engage in language learning with confidence and trust.
However, predictive systems are being used in ways that raise issues of fairness and self-freedom because they make predictions based on information provided by individuals. They can perpetuate biases and might result in unexpected outcomes that the users cannot control. Contemporary research on the ethics of involving minors in online platforms has revealed the imperative for strict safeguards to preserve the rights and welfare of children, such as informed consent and data privacy (Díaz-Pérez et al., 2020). The use of algorithms and machine learning models in decision-making processes that impact children, including educational assessments or child welfare interventions, can raise ethical concerns regarding transparency, accountability, and fairness in the development and deployment of these systems to avert potential harm or discrimination.
In the context of LinguLink, we acknowledge that our approach, which utilizes gamification and machine learning algorithms to individualize and stimulate language acquisition, has limitations, the most prominent of which is algorithmic bias. It has been demonstrated that AI-based platforms can display gender and race biases even when these are incorporated into the algorithm without explicit intention (Akgun, Greenhow 2021). For instance, many natural language models and translation applications such as Google Translate continue to reinforce societal stereotypes (e.g., gender-based labels in different languages) or misuse language (e.g., translating "she" to "he") (Prates et al., 2019).
Additionally, a study by the American Psychological Association shows that one retains information better when expected to teach it to others, a phenomenon called The Protégé Effect (Muis et al., 2016). Students who lack the skills to conceptualize and solve complex problems score lower than those who possess them. With that said, by providing a platform where users can engage in language learning with a partner, LinguLink aims to promote a collaborative and interactive learning experience that can enhance retention and understanding of language concepts.
Furthermore, LinguLink acknowledges that some language learners may need more skills to conceptualize and solve complex problems, resulting in lower scores than those with these skills. By providing a supportive and inclusive learning environment, LinguLink aims to address this disparity and provide opportunities for mentorship and guidance to help students develop these skills.
**Addressing these ethical implications.** We understand the ethical implications of emphasizing autonomy and providing users with the right to decide about their personal information and decision-making processes. To address the ethical implications of the system, we are committed to the following:
1. Providing transparency in the program and its requirements empowers users to make informed decisions about their participation. Inspired by Intel's Corporate Responsibility Report, we aim to be transparent about our ethical principles and how they are integrated into our platform's design and operation.
2. Prioritizing confidentiality and informed consent to ensure users are in control of their personal information. We will fully adhere to the General Data Protection Regulation law and only maintain and store necessary personal data.
3. Ensuring fairness and non-discrimination in developing and deploying our algorithms and machine learning models. To mitigate this risk, LinguLink will take a multi-faceted approach. First, we will regularly review and monitor the performance of our algorithms and machine learning models to detect any biases or discriminatory outcomes. This will involve testing the accuracy of the models across different demographic groups and investigating any disparities in performance. Second, we will utilize diverse datasets to train our algorithms and machine learning models. We can reduce the risk of reinforcing or amplifying societal biases by incorporating various data. We will also strive to ensure that our training data is inclusive and representative of the global population to minimize the risk of inadvertently creating a biased model. Third, we will incorporate interpretability and explainability into our models, allowing us to understand how they arrive at their predictions and identify potential sources of bias (Lipton, 2018).
**Works Cited**
Akgun, and Greenhow. "Artificial Intelligence in Education: Addressing Ethical Challenges in K-12 Settings." AI and Ethics, vol. 2, no. 3, Sept. 2021, pp. 431–40, doi:10.1007/s43681-021-00096-7.
APA PsycNet. <https://psycnet.apa.org/fulltext/2015-38251-001.html>. Accessed 18 Feb. 2023.
Díaz-Pérez, et al. "Moral Structuring of Children during the Process of Obtaining Informed Consent in Clinical and Research Settings." BMC Medical Ethics, vol. 21, no. 1, Nov. 2020, pp. 1–10, doi:10.1186/s12910-020-00540-z.
Milanesi, Carolina. "Intel Underlines Transparency And Accountability By Sharing Diversity And Inclusion Raw Data." Forbes, 16 May 2022, <https://www.forbes.com/sites/carolinamilanesi/2022/05/16/intel-underlines-transparency-and-accountability-by-sharing-diversity-and-inclusion-raw-data/?sh=8d1a78440eb2>.
Prates et al. "Assessing Gender Bias in Machine Translation: A Case Study with Google Translate." Neural Computing and Applications, vol. 32, no. 10, Mar. 2019, pp. 6363–81, doi:10.1007/s00521-019-04144-6.
The Protégé Effect: How You Can Learn by Teaching Others – Effectiviology. <https://effectiviology.com/protege-effect-learn-by-teaching/>. Accessed 18 Feb. 2023.
ACM Digital Library, <https://dl.acm.org/doi/pdf/10.1145/3236386.3241340>. Accessed 19 Feb. 2023.
|
## Inspiration
### Background
Growing up, multiple group members struggled with communication as second-generation immigrants. Torn between learning English and maintaining their native tongues, they experienced a constant theme of linguistic barriers and miscommunication.
### Mission
We are looking to vastly improve the language learning process. We aim to eliminate the tedious dialect learning process by:
* Reducing user input
* Improving user experience
* Integrating language learning into everyday life
### Technology
We began with a machine learning model using multiple Python libraries, including:
* **TensorFlow**
* **OpenCV**
* **Mediapipe**
* **NumPy**
Our main feature is live-video glasses, which allow users to point at any object and receive translations in a language of their choice. The translations can be outputted via text-to-speech or through our front-end mobile app.
|
## Inspiration
We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear the voice in your native tongue. LingoVision was born with these inspirations and these inspirations were born from our experiences.
## What it does
LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (double blink). Those sentences are played back in an audio translation (either using an earpiece, or out loud with a speaker) in your preferred language of choice. Additionally, LingoVision stores all of the old photos and translations for future review and study.
## How we built it
We used the AdHawk MindLink eye-tracking glasses to map the user's point of view and detect where exactly in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer what text the user is most likely looking at, based on the vector projected from the glasses and the available bounding boxes from the CV analysis.
After that, we pipe the text output through the DeepL translator API into a language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user.
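A simplified stand-in for that inference step: pick the OCR box containing the gaze point, or the nearest one by centre distance (our real heuristic weighs the projected vector more carefully):

```python
def pick_focused_text(gaze_xy, ocr_boxes):
    """ocr_boxes: list of (text, (x0, y0, x1, y1)) from the Cloud Vision pass.
    Prefer a box containing the gaze point; otherwise take the nearest centre."""
    gx, gy = gaze_xy
    for text, (x0, y0, x1, y1) in ocr_boxes:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return text

    def centre_dist(item):
        _, (x0, y0, x1, y1) = item
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        return (gx - cx) ** 2 + (gy - cy) ** 2

    return min(ocr_boxes, key=centre_dist)[0]
```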
We use Firebase Cloud Firestore to keep track of global settings, such as output language, and also a log of translation events for future reference.
## Challenges we ran into
* Getting the eye tracker properly calibrated (it was always a bit off from our actual view)
* Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!)
## Accomplishments that we're proud of
* Hearing the first audio playback of a translation was exciting
* Seeing the system work completely hands free while walking around the event venue was super cool!
## What we learned
* We learned how to work within the limitations of the eye tracker
## What's next for LingoVision
One of the next steps in our plan for LingoVision is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a dictionary.
Another goal is to eliminate the need to be tethered to a computer. Computers are currently used due to ease of development and software constraints; if a user could simply pair the eye-tracking glasses with their cell phone, usability would improve significantly.
|
losing
|
## What it does
Danstrument lets you video call your friends and create music together using only your actions. You can start a call which generates a code that your friend can use to join.
## How we built it
We used Node.js to create our web app, which employs WebRTC to allow video calling between devices. Movements are tracked with pose estimation from TensorFlow, and vector calculations on the keypoints are then used to trigger audio files.
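The vector calculation is essentially a frame-to-frame speed check on a keypoint. A Python sketch of the idea (our implementation is in JavaScript, and the joint name and threshold here are illustrative):

```python
import numpy as np

TRIGGER_SPEED = 0.15  # normalized units per frame; tuned by ear, illustrative

def keypoint_speed(prev_pose: dict, curr_pose: dict, joint: str = "rightWrist") -> float:
    """Poses map joint name -> (x, y) from the pose-estimation model."""
    return float(np.linalg.norm(np.subtract(curr_pose[joint], prev_pose[joint])))

def maybe_trigger(prev_pose: dict, curr_pose: dict, play_clip) -> None:
    """Fire the audio callback when the tracked joint moves fast enough."""
    if keypoint_speed(prev_pose, curr_pose) > TRIGGER_SPEED:
        play_clip()  # plays one of the discrete sound bites
```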
## Challenges we ran into
Connecting different devices with WebRTC over an unsecured site proved to be very difficult. We also wanted to have continuous sound but found that libraries that could accomplish this caused too many problems so we chose to work with discrete sound bites instead.
## What's next for Danstrument
Annoying everyone around us.
|
## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend, with OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API performing all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
## Challenges we ran into
We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.

Tracks movement of hands and face to provide real-time analysis on expressions and body-language.

|
## Inspiration
What inspired the idea was terrible gym music and the thought of automatic music selection based on the tastes of people in the vicinity. Our end goal is to sell a hosting service that plays music people in a local area would want to listen to.
## What it does
The app has two parts. The client side connects to Spotify and allows our app to collect users' tokens, user IDs, emails, and top played songs. These values are stored in a MongoDB database via Mongoose; the user ID and top songs are the main values needed. The host side can control the location and the radius they want to cover. This allows the server to be populated with nearby users, whose top songs are added to the host account's playlist. The songs most commonly added to the playlist have a higher chance of being played.
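That weighting scheme reduces to a frequency-weighted random pick. A sketch in Python (our backend is NodeJS, so this is illustrative only):

```python
import random
from collections import Counter

def pick_next_song(submitted_songs: list[str]) -> str:
    """Each nearby user's top songs are appended to submitted_songs; a track
    submitted by several users is proportionally more likely to be played."""
    counts = Counter(submitted_songs)
    tracks, weights = zip(*counts.items())
    return random.choices(tracks, weights=weights, k=1)[0]
```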
This app could be used at parties to avoid issues discussing songs, retail stores to play songs that cater to specific groups, weddings or all kinds of social events. Inherently, creating an automatic DJ to cater to the tastes of people around an area.
## How we built it
We began by planning and fleshing out the app idea, then split the tasks into four sections: location, front end, Spotify, and the database. At this point we decided to use React Native for the mobile app and NodeJS for the backend. After getting started, the help of the mentors and sponsors was crucial; they showed us the many different JS libraries and APIs available to make life easier. Programming a full-stack MERN app was a first for everyone on this team. We all hoped to learn something new and create something cool.
## Challenges we ran into
We ran into plenty of problems. We hit many syntax errors and plenty of bugs, compatibility between the different APIs and libraries had to be maintained, and there was the general stress of completing on time. In the end, we are happy with the product that we made.
## Accomplishments that we are proud of
Learning something we were not familiar with and making it this far into our project is a feat we are proud of.
## What we learned
Learning the minutiae of JavaScript development was fun. It was because of the mentors' assistance that we were able to resolve problems and develop efficiently so we could finish. The versatility of JavaScript was surprising: the range of things it can interact with and the immense catalog of open-source projects were staggering. We definitely learned plenty... now we just need a good sleep.
## What's next for SurroundSound
We hope to add more features and see this application to its full potential. We would make it as autonomous as possible with seamless location based switching and database logging. Being able to collect proper user information would be a benefit for businesses. There were features that did not make it into the final product, such as voting for the next song on the client side and the ability for both client and host to see the playlist. The host would have more granular control such as allowing explicit songs, specifying genres and anything that is accessible by the Spotify API. While the client side can be gamified to keep the GPS scanning enabled on their devices, such as collecting points for visiting more areas.
|
winning
|
## Inspiration
Kevin, one of our team members, is an enthusiastic basketball player who frequently went to physiotherapy for a knee injury. He realized that a large part of physiotherapy actually happens away from the doctor's office: he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance. Through his story, we realized that many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they need to do at-home exercises individually, without supervision, and for these patients any repeated error can actually cause a deterioration in health. We therefore decided to leverage computer vision technology to provide real-time feedback that helps patients improve their rehab exercise form. At the same time, reports are generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.
## What it does
Through a mobile app, patients can film and upload a video of themselves completing a certain rehab exercise. The video is then analyzed using a machine-vision neural network, so that the movements of each body segment are measured. This raw data is further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, where the doctor receives a more thorough medical analysis of how the patient's body is working together, plus a timeline of progress. The algorithm also provides suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise.
## How we built it
At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster will ingest raw video posted to blobstore, and performs the machine vision analysis to yield the timescale body data.
We used Google App Engine and Firebase to create the rest of the web application and APIs for the two types of clients we support: an iOS app and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync layer.
Finally, In order to generate reports for the doctors on the platform, we used stdlib's tasks and scale-able one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.
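The aggregation itself can be as simple as folding per-session scores into a progress summary. A sketch, assuming each synced record carries a `date` and a 0-100 `score` (field names are illustrative):

```python
from statistics import mean

def build_report(sessions: list[dict]) -> dict:
    """Fold per-session analysis results into a summary chunk for the doctor."""
    timeline = sorted(sessions, key=lambda s: s["date"])
    scores = [s["score"] for s in timeline]
    return {
        "sessions": len(timeline),
        "latest_score": scores[-1],
        "average_score": round(mean(scores), 1),
        "trend": scores[-1] - scores[0],  # positive means the patient is improving
    }
```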
## Challenges we ran into
One of the major challenges we ran into was interfacing each technology with each other. Overall, the data pipeline involves many steps that, while each in itself is critical, also involve too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io
<https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
|
## Inspiration
The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real time. Oftentimes when lifting, people employ poor form, leading to gym injuries that could have been avoided by being proactive.
## What it does and how we built it
Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of muscles used, distance specific body parts travel and information about the athlete’s posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete’s movements if our algorithm deems the form to be poor.
## How we built it
We trained an SVM on deliberately performed proper and improper forms for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated them with the good/poor-form labels. Then, we dynamically read signals from the band during workouts and chart points in the plane where we classify their form. If the form is bad, the band provides haptic feedback to the user, indicating that they might injure themselves.
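In outline, the classifier looks like the following sketch; the per-channel RMS feature and scikit-learn stand in for our actual feature extraction and SVM setup:

```python
import numpy as np
from sklearn.svm import SVC

def train_form_classifier(emg_windows: np.ndarray, labels: np.ndarray) -> SVC:
    """emg_windows: (reps, channels, samples) of EMG; labels: 1 = good form, 0 = poor.
    RMS per channel is an illustrative feature choice."""
    features = np.sqrt((emg_windows ** 2).mean(axis=-1))
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

def check_rep(clf: SVC, live_window: np.ndarray, vibrate) -> None:
    """Classify a live EMG window and buzz the band on predicted poor form."""
    rms = np.sqrt((live_window ** 2).mean(axis=-1)).reshape(1, -1)
    if clf.predict(rms)[0] == 0:
        vibrate()  # haptic feedback through the Myo armband
```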
## Challenges we ran into
Interfacing with the Myo band's API was not the easiest task for us, since we ran into numerous technical difficulties. However, after we spent copious amounts of time debugging, we finally managed to get a clear stream of EMG data.
## Accomplishments that we're proud of
We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications.
## What we learned
It was our first time making a hardware hack so it was a really great experience playing around with the Myo and learning about how to interface with the hardware. We also learned a lot about signal processing.
## What's next for SpotMe
In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weight lifting).
The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can come up with “profiles” that the user can learn to play like. We can quantitatively and precisely assess how close the user is playing their chosen professional athlete.
For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry or throw a spiral like Brady.
|
## Inspiration
One of our team members, Aditya, has been in physical therapy (PT) for the last year after a wrist injury on the tennis court. He describes his experience with PT as expensive and inconvenient. Every session meant a long drive across town, followed by an hour of therapy and then the journey back home. On days he was sick or traveling, he would have to miss his PT sessions.
Another team member, Adarsh, saw his mom rushed to the hospital after suffering from a third degree heart block. In the aftermath of her surgery, in which she was fitted with a pacemaker, he noticed how her vital signs monitors, which were supposed to aid in her recovery, inhibited her movement and impacted her mental health.
These insights together provided us with the inspiration to create TherapEase.ai. TherapEase.ai uses AI-enabled telehealth to bring **affordable and effective PT** and **contactless vital signs monitoring services** to consumers, especially among the **elderly and disabled communities**. With virtual sessions, individuals can receive effective medical care from home with the power of pose correction technology and built-in heart rate, respiratory, and Sp02 monitoring. This evolution of telehealth flips the traditional narrative of physical development—the trainee can be in more control of their body positioning, granting them greater levels of autonomy.
## What it does
The application consists of the following features:
* Pose Detection and Similarity Tracking
* Contactless Vital Signs Monitoring
* Live Video Feed with Trainer
* Live Assistant Trainer Chatbot
Once a PT Trainer or Medical Assistant creates a specific training room, the user is free to join said room. Immediately, the user’s body positioning will be highlighted and compared to that of the trainer. This way the user can directly mimic the actions of the trainer and use visual stimuli to better correct their position. Once the trainer and the trainee are aligned, the body position highlights will turn blue, indicating the correct orientation has been achieved.
The application also includes a live assistant trainer chatbot to provide useful tips for the user, especially when the user would like to exercise without the presence of the trainer.
Finally, on the side of the video call, the user can monitor their major vital signs: heart rate, respiratory rate, and blood oxygen levels without the need for any physical sensors or wearable devices. All three are estimated using remote Photoplethysmography: a technique in which fluctuations in camera color levels are used to predict physiological markers.
## How we built it
We began with heart rate detection. At a high level, the remote photoplethysmography (rPPG) technique works by analyzing the amount of green light absorbed by the trainee's face. This serves as a useful proxy: when the heart is expanded there is less blood in the face, which means less green light absorption, and the opposite is true when the heart is contracted. By magnifying these fluctuations using Eulerian Video Magnification, we can isolate the heart rate by applying a Fast Fourier Transform to the green signal.
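Stripped of the magnification step, the frequency analysis reduces to a sketch like this (the 45-180 bpm band and the face-ROI averaging are the usual assumptions, not our exact parameters):

```python
import numpy as np

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """green_means: mean green-channel intensity of the face ROI, one value per frame.
    Detrend, take the FFT, and keep the strongest peak in the plausible heart band."""
    signal = green_means - green_means.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.75) & (freqs <= 3.0)  # 45-180 bpm
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0
```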
Once the heart rate detection software was developed, we integrated PoseNet's pose-estimation algorithm, which draws 17 key points on the trainee in the video feed. This led to the development of two-way video communication using WebRTC, which enables the interaction between the trainer and the trainee. With both the trainer's and the trainee's poses being estimated, we built the weighted distance similarity comparison function of our application, which shows clearly when the user matches the position of the trainer.
At this stage, we then incorporated the final details of the application: the LLM assistant trainer and the additional vital signs detection algorithms. We integrated **Intel's Prediction Guard** into our chatbot to increase the speed and robustness of the LLM. For respiratory rate and blood oxygen levels, we integrated algorithms that build on rPPG technology to determine these two metrics.
## Challenges we ran into (and solved!)
We are particularly proud of implementing the two-way video communication that underlies the interaction between a patient and specialist on TherapEase.ai. There were many challenges associated with establishing this communication. We spent many hours building an understanding of webRTC, web sockets, and the HTTP protocol. Our biggest ally in this process was Chrome's developer tools, which we used to analyze network traffic and ensure the right information was being sent.
We are also proud of the cosine similarity algorithm we use to compare the body pose of a specialist/trainer with that of a patient. A big challenge was finding a way to prioritize certain points (from PoseNet) over others (e.g. an elbow joint should be given more importance than an eye point in determining how far apart two poses are). After hours of mathematical and programming iteration, we devised an algorithm that weights certain joints more than others, leading to much more accurate results when comparing poses on the two-way video stream. Another challenge was finding a way to efficiently compute and compare two pose vectors in real time (since we are dealing with a live video stream). Rather than having a data store, for this hackathon we compute our cosine similarity in the browser.
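A minimal sketch of that weighted comparison, assuming each pose is a (17, 2) array of PoseNet keypoints already normalized to a common origin and scale, with illustrative (not our tuned) weight values:

```python
import numpy as np

def weighted_pose_similarity(pose_a, pose_b, weights):
    # Repeat each joint weight for its x and y coordinates, then flatten.
    w = np.repeat(weights, 2)
    a = pose_a.flatten() * w
    b = pose_b.flatten() * w
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Elbows and wrists (indices 7-10 in PoseNet's keypoint order) weighted
# above facial points.
weights = np.ones(17)
weights[[7, 8, 9, 10]] = 3.0
```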
## What's next for TherapEase.ai
We all are very excited about the development of this application. In terms of future technical developments, we believe that the following next steps would take our application to the next level.
* Peak Enhancement for Respiratory Rate and SpO2
* Contactless Blood Pressure Detection
* Multi-channel Video Calling
* Increasing Security
|
winning
|
## Discord Team Number: 18
## Discord Usernames: mle#4507, Cat#6312, Joeldcross#9466
## Inspiration
Ensuring donated clothing gets to those who need it :)
## What it does
We partner with donation centres in your community to report their most needed clothing items. We provide all the necessary information to connect you to them.
## How I built it
* front-end: Bootstrap, HTML, CSS, PHP
* back-end: SQL, JavaScript, PHP, Google Maps API, Google Places API
* collaborated with team over GitHub
## Challenges I ran into
* not knowing how to use an SQL database
* lack of information found on potential partner websites
* hardships of following Google API documentation
## Accomplishments that I'm proud of
* created a working website in 36 hours!
* learned and implemented a lot of new skills
## What I learned
* SQL, queries, Bootstrap, PHP, etc.
## What's next for weshare
* Gain partnerships in our city and beyond
* Build an interface so our partners can easily update their own information
* Expand our platform in the form of a mobile application
|
## Inspiration
With the spread of the COVID-19 pandemic, many high risk individuals are unable to go out and purchase essential goods such as groceries or healthcare products. Our app aims to be a simple, easy-to-use solution to facilitate group buys of such goods amongst small communities and neighbourhoods. Group purchasing of essential goods not only limits the potential spread of COVID-19, but also saves waste and limits GHG emissions.
## What it does
At its core, the app is meant to encourage small communities to participate in and host group purchases of essential goods. Users input wanted items into a list that everyone in the community can see, and then a designated purchaser uses this list to purchase items for the neighbourhood.
## How we built it
On the front-end, we used Android Studio alongside Java to create a simple UI for users to input their desired purchases into lists that everyone in the group/neighbourhood can add to. With respect to the back-end, we used Mongoose for schema validation, MongoDB Atlas to host our database, Express for routing, and a custom-made authentication module for protecting our endpoints. Finally, Postman was used to test and debug endpoints, and AWS to host the server.
## Challenges we ran into
Our team was completely new to nearly every aspect of this project. We had little experience in database management and user authentication, and next to none in mobile development. It took us about 5 hours just to get our environments setup and get up to speed on the technologies we chose to use.
When it came to authenticating users, we had lots of trouble getting Google Authentication to work. We sunk a lot of time into this issue and finally decided to develop a novel authentication methodology of our own.
## Accomplishments that we're proud of
We're incredibly humbled to have learned so much in so little time. We chose this project because we felt we would be challenged as software developers by our choice of technologies and implementation. Nearly every single technology used in this project was completely new to each of us, and we feel like we learned a lot of new things, such as how to use Android Studio, how to develop a custom API overnight, and the principles of user authentication.
We're also proud of having come into the hackathon with an initial idea and being able to pivot quickly in another direction, scrapping our original idea in favor of grocerWE.
## What we learned
We learned that mobile development can be rewarding and principles of software construction learned in junior courses were invaluable to the creation of this project. Additionally, we also learned how important user authentication is and how it's prevalent in nearly all of our apps that we use today. Creating this app also helped us realize the impact of technology on society today, and how a simple idea can help unite people together in a global pandemic.
## What's next for grocerWE
Given the time constraints of the hackathon, and how inexperienced our group was to these new technologies, there are many things that we wanted for grocerWE that we weren't quite able to implement.
We'd like to be able to add Google Maps integration, where users are able to add their address to their profile in order to make delivery of groceries easier on the designated purchaser. Additionally, user roles such as purchaser or orderer were not really implemented.
For the above reasons, we considered these issues out of scope and focused our time on other fundamental aspects of grocerWE.
|
## Inspiration
One day, one of our teammates was throwing out garbage in his apartment complex and the building manager made him aware that certain plastics he was recycling were soft plastics that can't be recycled.
According to a survey commissioned by Covanta, “2,000 Americans revealed that 62 percent of respondents worry that a lack of knowledge is causing them to recycle incorrectly (Waste360, 2019).” We also found that the long-term nature of the problem makes it hard to internalize: “Because the reward [and] the repercussions for recycling... aren’t necessarily immediate, it can be hard for people to make the association between their daily habits and those habits’ consequences (HuffingtonPost, 2016)”.
From this research, we found that a lack of knowledge or awareness can be detrimental not only to personal life, but also to meeting governmental, societal, environmental, and sustainability goals.
## What it does
When an individual is unsure of how to dispose of an item, "Bin it" allows them to quickly scan the item and find out not only how to sort it (recycling, compost, etc.) but additional information regarding potential re-use and long-term impact.
## How I built it
After brainstorming before the event, we built it by splitting roles into backend, frontend, and UX design/research. We concepted and prioritized features as we went based on secondary research, experimenting with code, and interviewing a few hackers at the event about recycling habits.
We used Google Vision API for the object recognition / scanning process. We then used Vue and Flask for our development framework.
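For reference, a minimal sketch of that scanning step with the Vision client library (shown in Python; this assumes application credentials are already configured, and the exact client API may differ by version):

```python
from google.cloud import vision

def scan_item(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    # Label detection returns descriptions such as ["plastic bottle", "cap"],
    # which can then be matched against disposal/sorting rules.
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]
```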
## Challenges I ran into
We ran into challenges with deploying the application. Getting set up was a challenge that was slowly overcome by our backend developers getting the team set up and troubleshooting.
## Accomplishments that I'm proud of
We were able to work as a team towards a goal, learn, and have fun! We were also able to work with multiple Google APIs, and we completed the core feature of our project.
## What I learned
Learning to work with people in different roles was interesting. We also learned a lot from a technical standpoint, such as designing for a mobile web UI, deploying an app with Flask, and working with Google APIs.
## What's next for Bin it
We hope to review feedback, keep this as a great hackathon project to potentially build on, and apply our learnings to future projects.
|
losing
|
Duet's music generation revolutionizes how we approach music therapy. We capture real-time brainwave data using Emotiv EEG technology, translating it into dynamic, personalized soundscapes live. Our platform, backed by machine learning, classifies emotional states and generates adaptive music that evolves with your mind. We are all intrinsically creative, but some—whether due to language or developmental barriers—struggle to convey it. We’re not just creating music; we’re using the intersection of art, neuroscience, and technology to let your inner mind shine.
## About the project
**Inspiration**
Duet revolutionizes the way children with developmental disabilities—approximately 1 in 10 in the United States—express their creativity through music by harnessing EEG technology to translate brainwaves into personalized musical experiences.
Daniel and Justin have extensive experience teaching music to children, but working with those who have developmental disabilities presents unique challenges:
1. Identifying and adapting resources for non-verbal and special needs students.
2. Integrating music therapy principles into lessons to foster creativity.
3. Encouraging improvisation to facilitate emotional expression.
4. Navigating the complexities of individual accessibility needs.
Unfortunately, many children are left without the tools they need to communicate and express themselves creatively. That's where Duet comes in. By utilizing EEG technology, we aim to transform the way these children interact with music, giving them a voice and a means to share their feelings.
At Duet, we are committed to making music an inclusive experience for all, ensuring that every child—and anyone who struggles to express themselves—has the opportunity to convey their true creative self!
**What it does:**
1. Wear an EEG
2. Experience your brain waves as music! Focus and relaxation levels will change how fast/exciting vs. slow/relaxing the music is.
**How we built it:**
We started off by experimenting with Emotiv’s EEGs — devices that feed a stream of brain wave activity in real time! After trying it out on ourselves, the CalHacks stuffed bear, and the Ariana Grande cutout in the movie theater, we dove into coding. We built the backend in Python, leveraging the Cortex library that allowed us to communicate with the EEGs. For our database, we decided on SingleStore for its low latency, real-time applications, since our goal was to ultimately be able to process and display the brain wave information live on our frontend.
Traditional live music is done procedurally, with rules manually fixed by the developer to decide what to generate. On the other hand, existing AI music generators often generate sounds through diffusion-like models and pre-set prompts. However, we wanted to take a completely new approach — what if we could have an AI be a live “composer”, where it decided based on the previous few seconds of live emotional data, a list of available instruments it can select to “play”, and what it previously generated to compose the next few seconds of music? This way, we could have live AI music generation (which, to our knowledge, does not exist yet). Powered by Google’s Gemini LLM, we crafted a prompt that would do just that — and it turned out to be not too shabby!
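As a rough sketch of that loop (assuming the google-generativeai Python SDK; the instrument list, data structures, and prompt here are simplified placeholders for our real ones):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-pro")

INSTRUMENTS = ["piano", "synth_pad", "bass", "drums"]  # illustrative list

def compose_next_segment(recent_emotions, previous_segment):
    prompt = (
        "You are a live composer. Given the listener's focus/relaxation "
        f"readings over the last few seconds: {recent_emotions}, the available "
        f"instruments: {INSTRUMENTS}, and the previous segment you wrote: "
        f"{previous_segment}, compose the next few seconds of music as a "
        "Sonic Pi score."
    )
    return model.generate_content(prompt).text
```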
To play our AI-generated scores live, we used Sonic Pi, a Ruby-based library that specializes in live music generation (think DJing in code). We fed this and our brain wave data to a frontend built in Next.js to display the brain waves from the EEG and sound spectrum from our audio that highlight the correlation between them.
**Challenges:**
Our biggest challenge was coming up with a way to generate live music with AI. We originally thought it was impossible and that the tech wasn’t “there” yet — we couldn’t find anything online about it, and even spent hours thinking about how to pivot to another idea that we could use our EEGs with.
However, we eventually pushed through and came up with a completely new method of doing live AI music generation that, to our knowledge, doesn’t exist anywhere else! It was the first time most of us had worked with this type of hardware, and we ran into many issues getting it to connect properly to our computers — but in the end, we got everything to run smoothly, so it was a huge feat for us to make it all work!
**What’s next for Duet?**
Music therapy is on the rise – and Duet aims to harness this momentum by integrating EEG technology to facilitate emotional expression through music. With a projected growth rate of 15.59% in the music therapy sector, our mission is to empower kids and individuals through personalized musical experiences. We plan to implement our programs in schools across the states, providing students with a unique platform to express their emotions creatively. By partnering with EEG companies, we’ll ensure access to the latest technology, enhancing the therapeutic impact of our programs. Duet gives everyone a voice to express emotions and ideas that transcend words, and we are committed to making this future a reality!
**Built with:**
* Emotiv EEG headset
* SingleStore real-time database
* Python
* Google Gemini
* Sonic Pi (Ruby library)
* Next.js
|
## Inspiration
Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retaining plasticity in a child's limbs. We implemented interactive games in VR using the Oculus Rift and Leap Motion controllers.
## What it does
We designed games that targeted specific hand/elbow/shoulder gestures and used a leap motion controller to track the gestures. Our system improves motor skill, cognitive abilities, emotional growth and social skills of children affected by cerebral palsy.
## How we built it
Our games make use of Leap Motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and could be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus.
## Challenges we ran into
The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of Windows. We then spent the first 6 hours re-installing Windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games because it was our first time creating a VR game using Unity, Leap Motion controls, and the Oculus Rift.
## Accomplishments that we're proud of
We were proud of our end result because it was our first time creating a VR game with an Oculus Rift and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that it would be able to help others!
## What we learned
This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity.
## What's next for Alternative Physical Treatment
We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get testing in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis.
## Business Model Canvas
<https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO>
|
## Inspiration
Like most of the hackathons, we were initially thinking of making another Software Hack using some cool APIs. But our team came to a mutual conclusion that we need to step outside our comfort zone and make a Hardware Hack this time. While browsing through the hardware made available to us, courtesy of MLH, we came across the Myo Gesture Control Armband and decided to use it for our hack.
While playing around with it and observing the motion control it gives us, we thought nothing would be cooler than recreating a classic endless runner video game, but with motion control! And here we have the final result: Neon.
## What it does
It's a classic endless runner video game based on an 80s retro theme with some twists. The player puts on the Myo Armband and controls a bike with their arm gestures. The controls are as follows:
1) Double Tap Gesture - Unlock the armband
2) Spread Fingers - Start the game
3) Hover Arm Right - Move the biker towards the right
4) Hover Arm Left - Move the biker towards the left
5) Rock your Arm Up - Shoot bullets
## How we built it
We used an abstraction of WebGL called three.js to code our game. For integrating Myo Armband gestures, we used a nice JavaScript binding made available on GitHub by *thalmiclabs* called myo.js. We also used NodeJS, ExpressJS, and a Python HTTPServer to serve our static files.
## Challenges we ran into
Integrating the game logic with Myo gestures was one of the hardest challenges. Putting together two completely different APIs and making them work is always challenging but fun. Mapping even the slightest gestures to add precise control took us hours but it was necessary for good user experience.
## Accomplishments that we're proud of
We have a working in-browser video game!
## What we learned
None of us had ever done a hardware hack before, so we are proud that we have a working hardware hack this time. We also learned WebGL for the first time, so that was definitely challenging and fun at the same time.
## What's next for Neon
We would like to make it a multiplayer game that users can play with friends. Eventually, we would add more arenas, difficulty levels and an option to choose your avatar.
# TO-DO
* Offline caching for highscores
* Online multiplayer over WebRTC
* Keyboard + mouse support
* Language support
* Audio/sound effects
|
winning
|
## Inspiration
Ever wish you didn’t need to purchase a stylus to handwrite your digital notes? Everyone has, at some point, lacked a free hand to use their keyboard. Whether you are a student learning to type or a parent juggling many tasks, sometimes a keyboard and stylus are not accessible. We believe that in the future, technology won’t even need to be touched in order to take notes. HoverTouch utilizes touchless drawings and converts your (finger)written notes to typed text! We also have a text-to-speech function comparable to Google's.
## What it does
Using your index finger as a touchless stylus, you can write new words and undo previous strokes, similar to features on popular note-taking apps like Goodnotes and OneNote. As a result, users can eat a slice of pizza or hold another device in hand while achieving their goal. HoverTouch tackles efficiency, convenience, and retention all in one.
## How we built it
Our pre-trained model from MediaPipe works in tandem with an Arduino Nano, flex sensors, and resistors to track your index finger’s drawings. Once complete, you can tap your pinky to your thumb and HoverTouch captures a screenshot of your notes as a JPG. Afterward, the JPG undergoes a masking process where it is converted to a black and white picture: the blue ink (from the user’s pen strokes) becomes black, and all other components of the screenshot, such as the background, become white. With our game-changing Google Cloud Vision API, custom ML model, and Vertex AI Vision, the image is read and converted to text displayed on our web browser application.
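A minimal sketch of that masking step with OpenCV (the HSV thresholds here are illustrative, not our tuned values):

```python
import cv2
import numpy as np

def mask_blue_ink(jpg_path):
    img = cv2.imread(jpg_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Approximate HSV range for blue pen strokes.
    lower = np.array([100, 80, 50], dtype=np.uint8)
    upper = np.array([130, 255, 255], dtype=np.uint8)
    blue = cv2.inRange(hsv, lower, upper)
    # Blue pixels become black text; everything else becomes white background.
    out = np.full(img.shape[:2], 255, dtype=np.uint8)
    out[blue > 0] = 0
    return out
```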
## Challenges we ran into
Given that this was our first hackathon, we had to make many decisions regarding feasibility of our ideas and researching ways to implement them. In addition, this entire event has been an ongoing learning process where we have felt so many emotions — confusion, frustration, and excitement. This truly tested our grit but we persevered by uplifting one another’s spirits, recognizing our strengths, and helping each other out wherever we could.
One challenge we faced was importing the Google Cloud Vision API. For example, we learned that we were misusing the terminal and our disorganized downloads made it difficult to integrate the software with our backend components. Secondly, while developing the hand tracking system, we struggled with producing functional Python lists. We wanted to make line strokes when the index finger traced thin air, but we eventually transitioned to using dots instead to achieve the same outcome.
## Accomplishments that we're proud of
Ultimately, we are proud to have a working prototype that combines high-level knowledge with a solution of real-world significance. Imagine how many students, parents, and friends, in settings like the home, classroom, and workplace, could benefit from HoverTouch's hands-free writing technology.
This was the first hackathon for ¾ of our team, so we are thrilled to have undergone a time-bounded competition and all the stages of software development (ideation, designing, prototyping, etc.) toward a final product. We worked with many cutting-edge software and hardware tools despite having zero experience with them before the hackathon.
In terms of technicals, we were able to develop varying thickness of the pen strokes based on the pressure of the index finger. This means you could write in a calligraphy style and it would be translated from image to text in the same manner.
## What we learned
This past weekend we learned that our **collaborative** efforts led to the best outcomes as our teamwork motivated us to preserve even in the face of adversity. Our continued **curiosity** led to novel ideas and encouraged new ways of thinking given our vastly different skill sets.
## What's next for HoverTouch
In the short term, we would like to develop shape recognition, similar to the Goodnotes feature where a hand-drawn square or circle is automatically corrected to a perfect shape.
In the long term, we want to integrate our software into web-conferencing applications like Zoom. We initially tried to do this using WebRTC, something we were unfamiliar with, but the Zoom SDK had many complexities that were beyond our scope of knowledge and exceeded the amount of time we could spend on this stage.
### [HoverTouch Website](hoverpoggers.tech)
|
## Inspiration
The whiteboard or chalkboard is an essential tool in instructional settings - to learn better, students need a way to directly transport code from a non-text medium to a more workable environment.
## What it does
Enables someone to take a picture of handwritten or printed text and converts it directly to code or text in your favorite text editor on your computer.
## How we built it
On the front end, we built an app using Ionic/Cordova so the user could take a picture of their code. Behind the scenes, using JavaScript, our software harnesses the power of the Google Cloud Vision API to perform intelligent character recognition (ICR) of handwritten words. Following that, we applied our own formatting algorithms to prettify the code. Finally, our server sends the formatted code to the desired computer, which opens it with the appropriate file extension in your favorite IDE. In addition, the client handles all scripting for minimization and file OS operations.
## Challenges we ran into
The vision API is trained on text with correct grammar and punctuation. This makes recognition of code quite difficult, especially indentation and camel case. We were able to overcome this issue with some clever algorithms. Also, despite a general lack of JavaScript knowledge, we were able to make good use of documentation to solve our issues.
## Accomplishments that we're proud of
* A beautiful spacing algorithm that recursively categorizes lines into indentation levels (sketched after this list).
* Getting the app to talk to the main server, which talks to the target computer.
* Scripting the client to display the final result in a matter of seconds.
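As an illustration of the indentation idea (our implementation is in JavaScript and recursive; this is a simplified, iterative Python sketch with a hypothetical pixel tolerance):

```python
def indent_levels(lines, tolerance=10):
    # `lines` are (left_x_pixels, text) pairs taken from OCR bounding boxes.
    offsets = sorted({x for x, _ in lines})
    levels = []
    for x in offsets:
        if not levels or x - levels[-1] > tolerance:
            levels.append(x)  # a new, deeper indentation column
    def level_of(x):
        return min(range(len(levels)), key=lambda i: abs(levels[i] - x))
    return [(level_of(x), text) for x, text in lines]
```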
## What we learned
* How to integrate and use the Google Cloud Vision API.
* How to build and communicate across servers in JavaScript.
* How to interact with native functions of a phone.
## What's next for Codify
It would be feasible to increase accuracy by using the Levenshtein distance between words. In addition, we can improve our algorithms to work better with code. Finally, we can add image preprocessing (heightening image contrast, rotating accordingly) to make input more readable to the Vision API.
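For reference, the Levenshtein distance mentioned above could be used to snap an OCR'd token to the closest known keyword (e.g. "pirnt" to "print"); a compact sketch:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# e.g. min(KEYWORDS, key=lambda k: levenshtein(token, k)) picks the best match.
```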
|
## Inspiration
As students we know the challenge of sitting overwhelmed in a lecture as more information than we have time to process and write down is being presented. We wanted to build a tool that allows students to focus on the lecture and not worry about taking notes.
## What it does
eyenote tracks eye movements and captures the field of view to auto-generate notes in a Google Docs file based on the lesson you’re watching. It also tracks pupil dilation to determine interest/difficulty/focus on specific sentences and topics, and adds summaries and definitions to those topics. Finally, it provides recommendations about your learning in a pop-up based on the analyzed eye tracking.
## How we built it
We use custom LLM-driven OCR tools combined with AdHawk's MindLink glasses to pinpoint exactly where a user is looking on a screen, and employ direct pupil measurements to estimate drive and positive emotion. We then use the Google Docs API to create documents with text from the screen and additional text based on estimated drive. Ultimately, we leverage dynamic prompt templates to harness both user vision and text information with LangChain as part of a Python-Flask backend, providing specific and neurally-tailored recommendations to the end user on a ReactJS frontend - catalyzing performance.
## Challenges we ran into
Our idea relies on eye tracking along with imaging of the field of view so that the information being tracked can be used. However, partway into creating the project we discovered that the AdHawk glasses do not come with a camera. Instead, we had to create a baseline starting point for the user, track their relative up, left, right, and down positions, and then project those positions onto the device being watched to determine which quadrant the user is looking at.
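A minimal sketch of that workaround (the function name and quadrant split are illustrative, not our exact calibration):

```python
def gaze_quadrant(gaze_x, gaze_y, baseline_x, baseline_y):
    # Offsets relative to the calibrated baseline decide the screen quadrant.
    dx = gaze_x - baseline_x
    dy = gaze_y - baseline_y
    horizontal = "left" if dx < 0 else "right"
    vertical = "top" if dy > 0 else "bottom"
    return f"{vertical}-{horizontal}"  # e.g. "top-left"
```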
## Accomplishments that we're proud of
Firstly, we are proud of our teamwork and organization. Despite the limited time of a hackathon we made sure to set deadlines, discuss approaches with each other and provide valuable feedback.
We are also proud that we did not get discouraged and found a workaround when halfway through the hackathon we discovered that our plan to build our project was not feasible since the AdHawk glasses did not work as anticipated.
## What we learned
We learned that it's very important to narrow down on a specific topic / issue that we want to build or fix, rather than picking a domain and trying to narrow it down from there. We found that when picking a domain, the options are endless and it makes it very hard to pick one and narrow down enough to pick a feasible project.
## What's next for eyenote
With more research and time, eyenote will further understand the connection between the eye and the mind state of a person; this information will be used to further customize the augmented notes created. For example, there will be more distinction for when a summary/ definition should be added versus related novel information. As well, the ML model can provide more personalized recommendations with further training. On an even grander scale, this project was the refinement of our collective interest in the applications of behavioural science and psychology with AI and machine learning.
|
winning
|
Handling personal finances can be a challenging task, and there doesn't exist a natural user experience for engaging with your money. Online banking portals and mobile apps are one-off interactions that don't help people manage their money over the long term. We solved this problem with Alex.
We built Alex with the goal of making it easier to stay on top of your finances through a conversational user interface. We believe that the chatbot as a layer of abstraction over financial information will make managing budgets and exploring transactions easier tasks for people.
Use Alex to look at your bank balances and account summary. See how much you spent on Amazon over the last two months, or take a look at all of your restaurant transactions since you opened your account. You can even send money to your friends.
There were a few technically challenging problems we had to solve while building Alex. We had to handle OAuth2 and other identification tokens through Facebook, as well as bank account information, to ensure security. Allowing the user to make queries in natural language required machine learning and training a model to identify different intents and parameters within a sentence. We even attempted to build a custom solution to maintain long-term memory for our bot—a still unsolved problem in natural language processing.
Alex is first and foremost a consumer product, but we believe that it provides value beyond the individual. With some additions, banks could use Alex to handle their customer support, saving countless hours of phone calls and wasted time on both ends. In a business setting, banks could learn much more about their customers' behavior through interactions with Alex.
|
## Inspiration
Every time we go out with friends, it's always a pain to figure out payments for each person. Charging people through Venmo is often tedious and requires lots of time. What we wanted to do was make the whole process easier by letting you simply scan a receipt and then immediately charge your friends.
## What it does
Our app takes a picture of a receipt and sends it to a Python server (that we made) which filters and manipulates the image before performing OCR. Afterwards, the OCR output is parsed and the items and associated prices are sent to the main app, where the user can then easily charge their friends.
## How we built it
We built the front-end of the app using Meteor to allow easy reactivity and fast browsing times. Meanwhile, we optimized the graphics so that the website works great on mobile screens. Afterwards, we send the photo data to a Flask server where we run a combination of Python, C, and Bash code to pre-process and then analyze the sent images. Specifically, the following operations are performed for image processing (a condensed sketch follows the list):
1. RGB to Binary Thresholding
2. Canny Edge Detection
3. Probabilistic Hough Lines on Canny Image
4. Calculation of rotation disparity to warp image
5. Erosion to act as a flood-fill on letters
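A condensed OpenCV sketch of steps 1-4 above (parameters are illustrative, not our tuned values):

```python
import cv2
import numpy as np

def preprocess_receipt(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # 1. Otsu thresholding to a binary image.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2. Canny edge detection.
    edges = cv2.Canny(binary, 50, 150)
    # 3. Probabilistic Hough lines on the edge image.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=100, maxLineGap=10)
    # 4. Median line angle estimates the rotation disparity; warp to deskew.
    angles = ([np.degrees(np.arctan2(y2 - y1, x2 - x1))
               for x1, y1, x2, y2 in lines[:, 0]] if lines is not None else [0])
    h, w = binary.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), float(np.median(angles)), 1.0)
    return cv2.warpAffine(binary, rot, (w, h))
```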
## Challenges we ran into
We ran into a lot of challenges reliably extracting OCR text from the receipts. Established libraries, such as Microsoft's, showed poor performance. As a result, we ended up testing and creating our own methods for preprocessing and then analyzing the images of receipts we received. We tried many different methods for different steps:
* Different thresholding methods (some of which are documented below)
* Different deskewing algorithms, including hough lines and bounding boxes to calculate skew angle
* Different morphological operators to increase clarity/recognition of texts.
Another difficulty we ran into was implementing UI such that it would run smoothly on mobile devices.
## Accomplishments that we're proud of
We're very proud of the robust parsing algorithm that we ended up creating to classify text from receipts.
## What we learned
Through the building of SplitPay, we learned many different techniques in machine vision. We also learned about implementing communication between two web frameworks and about the reactivity used to build Meteor.
## What's next for SplitPay
In the future, we hope to continue the development of SplitPay and to make it easier to use, with easier browsing of friends and more integration with other external APIs, such as ones from Facebook, Microsoft, Uber, etc.
|
## Inspiration
Our inspiration for this project came from recent research demonstrating the ability of models to perform the work of data engineers and provide accurate tools for analysis. We realized that such work is impactful in various sectors, including finance, climate change, and medical devices. We decided to test our solution on various datasets to see its potential impact.
## What it does
A chatbot that lets users query a SQL database in natural language and visualizes the results; the details are below.
## How we built it
For our project, we developed a sophisticated query pipeline that integrates a chatbot interface with a SQL database. This setup enables users to make database queries effortlessly through natural language inputs. We utilized SQLAlchemy to handle the database connection and ORM functionalities, ensuring smooth interaction with the SQL database.

To bridge the gap between user queries and database commands, we employed LangChain, which translates the natural language inputs from the chatbot into SQL queries. To further enhance the query pipeline, we integrated Llama Index, which facilitates sequential reasoning, allowing the chatbot to handle more complex queries that require step-by-step logic.

Additionally, we added a dynamic dashboard feature using Plotly. This dashboard allows users to visualize query results in an interactive and visually appealing manner, providing insightful data representations. This seamless integration of chatbot querying, sequential reasoning, and data visualization makes our system robust, user-friendly, and highly efficient for data access and analysis.
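A minimal sketch of the natural-language-to-SQL bridge (assuming a recent LangChain release; the connection string and the OpenAI model here are placeholders for our actual setup):

```python
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain
from langchain_openai import ChatOpenAI  # stand-in LLM for illustration

db = SQLDatabase.from_uri("sqlite:///example.db")  # SQLAlchemy under the hood
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "What were total sales last quarter?"})
rows = db.run(sql)  # execute the generated SQL against the database
```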
## Challenges we ran into
Participating in the hackathon was a highly rewarding yet challenging experience. One primary obstacle was integrating a large language model (LLM) and chatbot functionality into our project. We faced compatibility issues with our back-end server and third-party APIs, and encountered unexpected bugs when training the AI model with specific datasets. Quick troubleshooting was necessary under tight deadlines.
Another challenge was maintaining effective communication within our remote team. Coordinating efforts and ensuring everyone was aligned led to occasional misunderstandings and delays. Despite these hurdles, the hackathon taught us invaluable lessons in problem-solving, collaboration, and time management, preparing us better for future AI-driven projects.
## Accomplishments that we're proud of
We successfully employed sequential reasoning within the LLM, enabling it to not only infer the next steps but also to accurately follow the appropriate chain of actions that a data analyst would take. This advanced capability ensures that complex queries are handled with precision, mirroring the logical progression a professional analyst would utilize. Additionally, our integration of SQLAlchemy streamlined the connection and ORM functionalities with our SQL database, while LangChain effectively translated natural language inputs from the chatbot into accurate SQL queries. We further enhanced the user experience by implementing a dynamic dashboard with Plotly, allowing for interactive and visually appealing data visualizations. These accomplishments culminated in a robust, user-friendly system that excels in both data access and analysis.
## What we learned
Through implementing our agent pipeline, we learned how to integrate various APIs, along with the sequential process a data engineer and analyst actually follows.
## What's next for Stratify
For our next steps, we plan to add full UI integration to enhance the user experience, making our system even more intuitive and accessible. We aim to expand our data capabilities by incorporating datasets from various other industries, broadening the scope and applicability of our project. Additionally, we will focus on further testing to ensure the robustness and reliability of our system. This will involve rigorous validation and optimization to fine-tune the performance and accuracy of our query pipeline, chatbot interface, and visualization dashboard. By pursuing these enhancements, we strive to make our platform a comprehensive, versatile, and highly reliable tool for data analysis and visualization across different domains.
|
partial
|
## Inspiration
Predicting the stock market is probably one of the hardest things to do in the financial world. There are so many factors that come into play when considering the rise and fall of positions, and many differing opinions on which of these factors matters the most, from technical indicators to balance sheets. It is hard to dispute, however, that one factor has gained heavy influence in the past few decades: Media. From television to the internet, the modern day media clearly has had a large impact on financial instruments and the economy in general. Taking this as inspiration, we decided to create an app that predicts future stock prices using sentiment analysis of relevant news articles.
## What it does
Mula is an app that provides an intuitive interface for users to manage their stock portfolio and predict changes that traditional methods may not catch in time. It provides a list of stocks that the user currently owns and a platform for trading them, as well as banking to store cash. Upon clicking on a specific stock, Mula provides a more detailed view of the stock’s historical prices, along with predictions of stock price changes from our algorithm coinciding with emotionally charged news. It also serves to provide a comparison between our predictions and actual price changes. Users can also put stocks on a watchlist, where they can get immediate notifications when a financial news article with significant sentiment regarding specific companies is published, as well as a recommended course of action.
## How we built it
The stack consists of a Flutter front end, which allows the app to be multi-platform, and a Python Flask backend connected to a variety of APIs. The Capital One Hackathon API, aka Nessie, is used to manage a bank account in which users of the app store their currently non-invested money. The IEX Cloud API is used to retrieve historical and current pricing data on all individual publicly traded stocks, as well as news articles written about those stocks. The Amazon Comprehend API uses machine learning to analyze those news articles and determine the emotional sentiment behind them, i.e. positive or negative. We then used a weighted mathematical model to combine that sentiment data with Growth, Integrated, and Financial Returns scores from the Goldman Sachs Marquee API to create a prediction score for how the price of an individual stock will move in the coming days.
To ensure the prediction is valid, we used the Marquee API and IEX Cloud API’s historical data and news to backtest this algorithm. We searched for news stories in the past with a significant measured sentiment, and computed a prediction of future price movement from that and the historical Growth, Integrated, and Financial Return scores. We can then look a few days ahead and see how effective this prediction was, and use that to adjust the weights of the prediction model.
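An illustrative sketch of the weighted model (the weights are placeholders of the kind the backtesting above would tune; inputs are assumed normalized to [-1, 1]):

```python
WEIGHTS = {"sentiment": 0.4, "growth": 0.25, "integrated": 0.2, "financial": 0.15}

def prediction_score(sentiment, growth, integrated, financial_returns):
    # Positive output suggests upward price movement over the coming days.
    features = {"sentiment": sentiment, "growth": growth,
                "integrated": integrated, "financial": financial_returns}
    return sum(WEIGHTS[k] * v for k, v in features.items())
```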
## Challenges we ran into
We initially wanted to build our app using React Native; however, we ran into problems trying to set up the design language we wanted to work with. We decided to switch to Flutter, which none of us had worked with before. It took some time before we got the hang of it, but ultimately we saw it as the better choice, as it gave us fewer problems. We were also working with multiple APIs, and making them compatible with each other in the backend proved to be quite a laborious process.
## Accomplishments that we're proud of
We are very proud that we were able to successfully find a correlation between news sentiments and stock movements. This is a major finding and could set the foundation for a rise in novel trading algorithms. Additionally, we are happy that we were able to pick up Flutter relatively quickly. Now that we know how to set up apps with it, we may use it again in the future to do the same.
## What we learned
This was our first time doing a financial project for a hackathon, so we learned a lot about bank account management, risk management, technical analysis, and fundamental analysis. We also learned how to combine multiple APIs in a cohesive, valuable way. Additionally, none of us had done much front-end work before, so we were glad to learn how to use Flutter.
## What's next for Mula.
We hope to further improve on our prediction modeling algorithms and sentiment analysis with more complex neural networks/deep learning. We would also like to further iterate on the UI/UX of the app to further streamline the stock analysis and prediction process.
|
## Inspiration
Our inspiration for the website was Discord.
Seeing how that software could bring gamers together, we decided we wanted to do the same thing but with coworkers and friends, giving people a place to relax and have a laugh during their free time from work, especially with the pandemic affecting mental health. To bring as many people together to de-stress was our goal.
## What it does
RELIEF is a place for you to take a step back, relax, and collect yourself before getting back into your daily routine. RELIEF has many different ways to help you de-stress; whether your stress is caused by individual, organizational, or environmental factors, we definitely have a way to help you! RELIEF has multiple stress relief options including meditation, gameplay, and an interactive chatroom.
## How we built it
We built it using React with Firebase.
## Challenges we ran into
We are all fairly new to web development, so we were learning everything on the fly. Getting things into the desired position on the website was a challenge, as was making the website look as intended on different screen sizes.
## Accomplishments that we're proud of
We're proud of the final submitted product. The simple chic look gives our website the intended relaxing, stress free environment. We are proud of how much we accomplished with how little knowledge we had to begin with.
## What we learned
We learned that there are plenty of resources outside of school that teach you how to code (YouTube).
We learned that our productivity level skyrocketed while participating in the hackathon.
We learned that even in isolation there are still people willing to share their wisdom despite not knowing who we are; it gives us the motivation to eventually do what the mentors, organizers, and sponsors are doing.
## What's next for RELIEF
We are focusing on developing a mobile app that mirrors the current website.
We also plan on adding multiple single-player and multiplayer games for users to enjoy, adding more functionality to the chat, and adding more music to the meditation room with a more interactive interface.
|
## Inspiration
We were interested in machine learning and data analytics and decided to pursue a real-world application that could prove to have practical use for society. Many themes of this project were inspired by hip-hop artist Cardi B.
## What it does
Money Moves analyzes data about financial advisors and their attributes, and uses unsupervised deep learning algorithms to predict whether certain financial advisors will most likely be beneficial or detrimental to an investor's financial standing.
## How we built it
We partially created a custom deep-learning library in which we built a Self Organizing Map. The Self Organizing Map is a neural network that takes data and creates a layer of abstraction, essentially reducing the dimensionality of the data. To make this happen, we had to parse several datasets. We used the Beautiful Soup library, pandas, and NumPy to parse the data needed. Once it was parsed, we were able to pre-process the data to feed it to our neural network (Self Organizing Map). After we successfully analyzed the data with the deep learning algorithm, we uploaded the neural network and dataset to our Google server, where we are hosting a Django website. The website shows investors the best possible advisor within their region.
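A minimal NumPy sketch of a Self Organizing Map training loop (a simplified stand-in for our custom library; grid size, learning rate, and neighborhood width are illustrative):

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr=0.5, sigma=2.0):
    n_features = data.shape[1]
    weights = np.random.rand(grid[0], grid[1], n_features)
    coords = np.indices(grid).transpose(1, 2, 0)  # (row, col) of each node
    for t in range(epochs):
        decay = np.exp(-t / epochs)  # shrink learning rate and neighborhood
        for x in data:
            # Best matching unit: node whose weights are closest to the sample.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), grid)
            # Gaussian neighborhood pulls nearby nodes toward the sample.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            h = np.exp(-(grid_dist ** 2) / (2 * (sigma * decay) ** 2))
            weights += lr * decay * h[..., None] * (x - weights)
    return weights
```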
## Challenges we ran into
Due to the nature of this project, we struggled with moving large amounts of data through the internet, cloud computing, and designing a website to display analyzed data, because of the difficulty with WiFi connectivity that many hackers faced at this competition. We mostly overcame this through working late nights and lots of frustration.
We also struggled to find an optimal data structure for storing both raw and output data. We ended up using .csv files organized in a logical manner so that data is easier accessible through a simple parser.
## Accomplishments that we're proud of
Successfully parsing the datasets needed for preprocessing and analysis with deep learning.
Being able to analyze our data with the Self Organizing Map neural network.
Side Note: Our team member Mikhail Sorokin placed 3rd in the Yhack Rap Battle
## What we learned
We learnt how to implement a Self Organizing Map and how to build a good file system and code base with Django. This led us to learn about Google's cloud service, where we host our Django-based website. To be able to analyze the data, we had to parse several files and format the data that we sent through the network.
## What's next for Money Moves
We are looking to expand our Self Organizing Map to accept data from other financial datasets beyond stock advisors; this way we are able to have different models that work together. One idea is to have unsupervised and supervised deep-learning systems, where the unsupervised system finds patterns that would be challenging to find, and the supervised algorithm directs the system toward a goal that could help investors make the best financial decisions possible.
|
partial
|
## Inspiration
To develop a tool to help students view courses and see how other students felt about them.
## What it does
Post questions/answers to forums.
## How we built it
Frontend built using React.js and Tailwind CSS.
Backend using better-sqlite3
## Challenges we ran into
We can't fetch the data from the backend.
## Accomplishments that we're proud of
Learning how to use better-sqlite3.
## What we learned
Learning how to use better-sqlite3 and more react components.
## What's next for Study Hall
Actually getting data from the database.
|
## Inspiration
For students in college — be it online semester or in-person — remembering the various concepts and topics that we need to study is tremendously important. Having access to a list of study tasks, when we need to revise them, and notifications to remind us, can help lower the friction to academic revision.
Based on our team’s findings, there are no other applications on the App Store like this, and although many flash-card apps have spaced repetition built in, not many calendar or study apps do. Hence, we decided to make one ourselves.
## What it does
Users are guided to a main page that displays all their study tasks in a list. They can create new tasks, and set a date by which they want to master their subject. For instance, if a user has a test coming up in a couple months, they can make a study task that has notes for their test, and then the app would remind them to study in specific time intervals so that they continue to consolidate their conceptual understanding.
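A small sketch of how such interval scheduling might work (the expanding intervals and scaling rule here are illustrative assumptions, not our final design):

```python
from datetime import date, timedelta

def review_dates(start: date, mastery: date, gaps=(1, 3, 7, 14, 30)):
    # Scale the expanding gaps so the whole schedule fits before mastery day.
    total = (mastery - start).days
    scale = total / sum(gaps)
    day, dates = 0, []
    for gap in gaps:
        day += max(1, round(gap * scale))
        if day < total:
            dates.append(start + timedelta(days=day))
    return dates + [mastery]  # final review on the mastery date itself
```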
## How we built it
We started by discussing what features we wanted in our app (frontend) and decided on the backend framework. We then divided the front end and backend to members of our team who were familiar with either aspect.
Frontend: design the basic component structure with React Native, implement main view UI, implement add task view UI, implement logic to add new task to task list, and implement delete task feature
Backend: design database schema (depending on our backend), connect backend API to React Native app, and manage records in the database
## Challenges we ran into
One of the main issues we faced was properly defining what features we wanted in our Minimum Viable Product. We thought of designing the UI/UX and went to Figma, only to realize that we could have better spent the time building out an accessible front end instead. We also thought of creating a date picker that reacted to touchscreen gestures (an improvement over our ‘touch and select’ option), but decided that we would implement it only after other key features have been put in place.
## Accomplishments that we're proud of
Working together to link up the login & registration screen with the backend for the app!
## What we learned
With the hackathon taking place online, we learned the importance of clear communication as we worked together virtually and across different timezones. While we weren't able to learn as much from each other as we would have in person, we were able to set clear expectations, communicate our responsibilities well, and set timely goals for our workloads.
## What's next for Space Hackers
Moving forward, we'll probably try to implement Machine Learning into our app by using some off-the-shelf models to parse out valuable pieces of information.
|
## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
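The gating logic can be sketched as follows (shown in Python for brevity, though our backend is Java/Spring; the tone labels and threshold are illustrative stand-ins for the IBM Watson Tone Analyzer output):

```python
BLOCKED_TONES = {"anger", "disgust"}   # mean-spirited content is blocked
VENTING_TONES = {"sadness", "fear"}    # venting posts are let through

def can_publish(tones):
    # `tones` maps a tone label to the analyzer's confidence score in [0, 1].
    for tone, score in tones.items():
        if tone in BLOCKED_TONES and score > 0.75:
            return False
    return True
```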
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten even better at speedrunning the UX process. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kinds of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
|
losing
|
## Inspiration
After being overwhelmed by the volume of financial educational tools available, we discovered that the majority of products are focused on institutions or are expensive. We decided there needed to be an easy approach to learning about stocks in a more casual environment. Interested in the simplicity of Tinder's yes-or-no swiping mechanics, we decided to combine the two ideas to create Tickr!
## What it does
Tickr is a stock screening tool designed to help beginner retail investors discover their next trade! Using an intuitive yes-or-no discovery system built on swiping mechanics, Tickr is the next Tinder for stocks. For a more in-depth video demo, see our [original screen recorded demo video!](https://youtu.be/dU6rF8vymKE)
## How we built it
Our team created this web app using a Node and Express back end paired with a React front end. The back end of our project used 3 linked Supabase tables to host authenticated user information and static information about stocks from the New York Stock Exchange and NASDAQ. We also used the [Finnhub API](https://finnhub.io/) to get real-time metrics about the stocks we were showing our users.
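For illustration, a minimal sketch of the real-time metrics fetch, shown with Finnhub's Python client for brevity (our actual backend is Node/Express; the API key is a placeholder):

```python
import finnhub

client = finnhub.Client(api_key="YOUR_FINNHUB_KEY")  # placeholder key

def card_metrics(symbol):
    q = client.quote(symbol)  # 'c' = current price, 'pc' = previous close
    change_pct = (q["c"] - q["pc"]) / q["pc"] * 100
    return {"symbol": symbol, "price": q["c"], "change_pct": round(change_pct, 2)}
```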
## Challenges we ran into
Our biggest challenge was scoping the project to something our team could complete in a weekend. We hadn't used Node and Express in a long time, so getting comfortable with our stack again took more time than we expected.
We were also completely new to Supabase and decided to try it out because it sounded really interesting. While Supabase turned out to be incredibly useful and user-friendly, its learning curve also took a bit more time than we expected.
## Accomplishments that we're proud of
The two accomplishments we are most proud of are our finished UI and successful integration of the Finnhub API. Drawing inspiration from Tinder, we were able to recreate a similar UI/UX design with minimal help from pre-existing libraries. Further, we were able to design our backend to make seamless API calls to fetch relevant data for our application.
## What we learned
During this project we learned a lot about the power of friendship and anime. Some of us learned what a market cap was and how to write a viable business proposal while others learned more about full stack development and how to host a database on Supabase.
Overall it was a very fun project and we're really glad we were able to get our MVP done 😁✌️
## What's next for Tickr
Our next goal for Tickr is to finish the aggregate news feed feature. This would entail a news feed covering all stocks a user has swiped on, along with notifications. It would help improve our north star metrics of time spent on platform and daily active users!
|
## Inspiration
Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product with several socially impactful use cases. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies, including group insurance, financial literacy, and personal investment.
## What it does
Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with decreased, shared risk.
## How we built it
* Smart Contract for the transfer of funds on the blockchain made using Solidity
* A robust backend and authentication system made using node.js, express.js, and MongoDB.
* Elegant front end made with react-native and Celo's SDK.
## Challenges we ran into
We were unfamiliar with the tech stack used to create this project and with blockchain technology.
## What we learned
We learned many new languages and frameworks. This includes building cross-platform mobile apps in React Native, as well as the underlying principles of blockchain technology such as smart contracts and decentralized apps.
## What's next for *PoolNVest*
Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds.
Refining and improving the proof of concept into a marketable MVP, and tailoring the UI toward the specific use cases mentioned above.
|
## Inspiration
We were inspired by the BlackRock challenge to make an app that addresses the financial wellbeing of a particular demographic. We focused on non-investors ages 16 to 25 and tried to resolve the following hesitations around investing:
* Lack of knowledge
* Risk aversion
* No personal interest
In short, a lot of young people don't see themselves as the "investor" type, either because the barrier to entry seems too daunting or because the perceived investor persona does not feel like themselves. Our app aims to fix this, as investing is a habit best learned early!
## What it does
Our web app consists of a fun, simple survey that asks the user a few questions about their habits and personality, alongside their interests and investment goals. These questions are based on demonstrated research linking certain areas of behavior, such as driving and socialization, to risk appetite. Each user is assigned a number corresponding to their level of risk appetite, which is subsequently used along with their interests to find three companies from the S&P 500 that the user may be interested in buying for a first stock pick. These companies are selected on the basis of quantitative factors affecting risk and price expectations (moving averages, volatility, market cap) as well as qualitative factors around what each company does and how it's related to the user's interests.
For example: the user says that they would like to invest $1,000. A subsequent question asks the user if they would bet 10% of that amount on a fair coin flip. Most people are risk averse and would walk away from such an offer, but if the user says that they would take the bet, then their answer will be weighted toward an aggressive risk appetite profile.
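To make the weighting idea concrete, here is a toy Python sketch; the questions, weights, and thresholds are invented for illustration and are not our production scoring model:

```python
# Toy sketch of the risk-appetite weighting described above.
# The questions, weights, and thresholds are invented for illustration;
# the real survey was grounded in published behavioral research.

def risk_appetite_score(answers: dict) -> str:
    """Map survey answers to a coarse risk profile."""
    score = 0
    # Most people are risk averse and decline a fair coin flip for 10% of
    # their budget; accepting it weights the profile toward "aggressive".
    if answers.get("takes_coin_flip_bet"):
        score += 2
    if answers.get("drives_fast"):
        score += 1
    if answers.get("socially_outgoing"):
        score += 1
    if score >= 3:
        return "aggressive"
    if score >= 1:
        return "moderate"
    return "conservative"

print(risk_appetite_score({"takes_coin_flip_bet": True, "drives_fast": True}))
# -> "aggressive"
```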
More at <https://www.newtradr.com/#/about>
## How we built it
We built the app from scratch in React.js and the financial database in JSON using the Google Finance API. The website is hosted by Netlify and the domain is owned by our team.
## Challenges we ran into
We had to scale back the scope of the project during the hack. We had wanted to incorporate spending data as well via a Plaid integration and show the user more charts and information at the end of the survey, but we ran into time constraints.
## Accomplishments that we're proud of
We're proud of building such a user-friendly application from start-to-finish and making it publicly available on the web. This tool really works and can get young people into investing!
## What we learned
We learned a lot about Google Finance API, as well as quantitative risk signals such as stochastic oscillators and volatility indexes. In researching the target user demographic, we also learned a lot about how risk is correlated with different behavioral traits.
## What's next for New Tradr
We hope to keep building on this idea to arrive at a more sophisticated tool that can take in even more information about a user to give even more personalized investment recommendations. We believe that such a tool could serve as a complement to existing trading platforms like Robinhood or Groww, and hope to get in touch with individuals from the FinTech industry about how to scale New Tradr!
|
winning
|
## Inspiration
News frequently reports on vehicle crashes and accidents, and one statistic highlights the prevalence of heavy truck accidents caused by driver fatigue. Truck drivers endure long hours on the road delivering shipments nationwide, contributing to the tiredness that can lead to accidents. According to the National Transportation Safety Board, nearly 40% of heavy truck accidents originate from fatigue. In response, we set out to develop a system capable of monitoring both facial expressions and heartbeats to detect early signs of fatigue among drivers.
## What it does
Our web app boasts two features aimed at improving driver safety: one harnesses computer vision technology to track the driver's face, effectively detecting signs of drowsiness, while the other streams the driver's heartbeat in real-time, providing an additional layer of drowsiness detection. Accessible through our web app is a dedicated page for viewing the webcam feed, which ideally can be monitored via personal devices like smartphones. Should the webcam detect the driver falling asleep, it triggers an alert with flashing lights and a sound to awaken the driver. Additionally, our dashboard feature enables managers to monitor their drivers and their respective drowsiness levels. We've incorporated a graphing feature within the dashboard that dynamically turns red when a selected driver's drowsiness level drops below the acceptable threshold, providing a clear visual indication of potential fatigue.
## How we built it
By combining Reflex and the Terra API, along with a companion mobile app in Swift, we were able to create a solution all within our ecosystem. The Terra API provided the crucial heart rate data in real time, which we livestreamed through a webhook that our Reflex website could read. The Reflex website also contains a manager-style dashboard for viewing several truckers and collecting their data all at the same time. As a demo for future mobile usage, we also included a facial recognition and landmarking model to detect drowsiness and alert the user if they are falling asleep. The Swift app also provided additional information such as the heart rate in real time, and established the connection from the wearable device to the webhook.
## Challenges we ran into
In order to construct the complex data flow of our project, we had to learn several new technologies along the way. It started with developing for a new wearable device with limited documentation, supported only through a Swift iOS app, which none of us had experience with. With Reflex, we also encountered some bugs (all of which had workarounds) and the usual difficulties that come with developing any website.
## Accomplishments that we're proud of
We're proud of being able to integrate such complex technologies and orchestrate them in a seamless way. At times, we were afraid that our product wouldn't come together since all the components depended on each other and we needed to complete all of them. However, our team made everything work in the end.
## What we learned
Many of the technologies we worked with during TreeHacks were new and had a large learning curve in order to build our end goal. Along this journey, our team picked up valuable skills in Swift, Python, computer vision, web development, and how to work on 2 hours of sleep.
## What's next for TruckrZzz
We hope to broaden our target audience and not only apply these technologies for truck drivers, but also every day drivers that might need some extra assistance staying awake on the road.
|
## Inspiration
Fortnite. While talking with the Radish team, they mentioned wanting to appeal to more of a Gen Z audience, so we began a process of intensive market research whereby we realized that gamification with a familiar, Fortnite-inspired twist is a great strategy for driving youth engagement.
## What it does
Buy on Radish and complete weekly Radish Restaurant Challenges to earn XP and LEVEL UP with dozens of UNIQUE REWARDS!
## How we built it
NestJS ( :( ) and React. We used TypeORM and Postgres with Docker for the database portion of the backend.
## Challenges we ran into
Using NestJS.
## Accomplishments that we're proud of
We braved the storm of NestJS. Please don't make the same mistake we made.
## What we learned
Not to use NestJS in the future.
## What's next for Radish Battle Pass
Radish will be hiring us shortly to implement this feature. We're sure of it!
|
The inspiration for this project was to help decrease the amount of accidents that occur due to people driving large distances while tired. This largely applies to truck drivers who depend on driving long distances to make a living and can easily become bored or require driving late at night, both of which are known to cause tiredness.
The main functionalities of this web-based application include detecting when the driver has shown signs of fatigue; when this happens, the driver is awakened by an alarm that plays a sound for a short period of time. Other functionalities include displaying a map the driver can use to find directions from one point to another, along with markers for nearby electric vehicle charging stations.
This application was built as a team. We came up with ideas together about how the application should function and what the UI should look like, and we all collaborated to solve any errors that members of the team faced.
While working on this project our team encountered many challenges and had to use our problem-solving skills to overcome them. One challenge was displaying a camera feed generated by a Python file on a React website written in JavaScript. This was an issue we hadn't initially considered; to solve it, we had to create a connection between the Python file and the web application, and after some deliberation our team decided to use WebSockets. Another challenge, earlier in the project, was getting Mapbox to work the way we needed it to; after some time spent researching, the issues were fixed and we were able to display the functionality we wanted.
Some of the accomplishments we are proud of include using a face detection algorithm to find the location of a face and then the person's eyes. Using the location of the eyes, we were able to detect signs of drowsiness, most notably when the distance between the upper and lower eyelid stayed very small for a significant number of consecutive frames. We also used a RapidAPI service to find nearby electric vehicle charging locations and placed markers on the Mapbox map at the correct longitude and latitude coordinates. Another accomplishment we were proud of, mentioned above, was creating the connection between the Python drowsiness-detection file and the React web application. Finally, our team did a very good job of working on the project together at the same time while staying conscious of the conflicts that can arise when multiple people work on the same file simultaneously.
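To make the eyelid-distance idea concrete, here is a simplified Python sketch; it assumes six (x, y) eye landmarks per frame from a face-landmark library (e.g. dlib or MediaPipe), and the threshold and frame count are illustrative rather than our tuned values:

```python
# Simplified sketch of the drowsiness check described above.
# Assumes six (x, y) eye landmarks per frame, as produced by a face-landmark
# library such as dlib or MediaPipe; threshold and frame count are illustrative.
import math

def eye_openness(eye: list[tuple[float, float]]) -> float:
    """Ratio of vertical eyelid gap to horizontal eye width (eye aspect ratio)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / 2
    horizontal = dist(eye[0], eye[3])
    return vertical / horizontal

CLOSED_THRESHOLD = 0.2   # below this ratio the eye is considered closed
DROWSY_FRAMES = 48       # consecutive closed frames before sounding the alarm

closed_streak = 0

def update(eye_landmarks) -> bool:
    """Call once per frame; returns True when the alarm should fire."""
    global closed_streak
    if eye_openness(eye_landmarks) < CLOSED_THRESHOLD:
        closed_streak += 1
    else:
        closed_streak = 0
    return closed_streak >= DROWSY_FRAMES
```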
Future functionality for WakeyDrivey could include expanding the application to help more people with their driving needs. One group that could benefit greatly is deaf drivers. To serve this demographic, we would need to add different options for how the driver is woken up: the idea is for the user to buy a wearable wrist device that receives a signal from the web application when the driver shows signs of drowsiness and responds with a vibration or small shock to wake them. Other useful features could include on-screen graphics that alert the driver to things they are unable to hear, such as someone honking at them, a siren behind them, or other loud noises; the screen could change colour, or the wrist wearable could vibrate again.
|
partial
|
## Inspiration
Fitness bands track your heart rate, but they do not take any action if anything is wrong. This makes them useless for people like heart disease patients who need them the most.
## What it does
Dr Heart connects data to the appropriate people. Using Smooch and Slack, it notifies doctors, families, and emergency crews when the patient's heart rate falls outside pre-determined upper and lower bounds, and enables simple text communication.
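At its core, the alerting logic is a bounds check plus a notification. Here is a hedged Python sketch of that idea (the real app is an Android build using Smooch and Slack; the Slack webhook URL and bounds below are placeholders):

```python
# Sketch of the core alerting logic: notify caregivers when the heart rate
# leaves its pre-determined bounds. The real app is an Android build using
# Smooch and Slack; the webhook URL and bounds here are placeholders.
import requests

LOWER_BPM, UPPER_BPM = 50, 120  # patient-specific bounds
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def check_heart_rate(bpm: int) -> None:
    if LOWER_BPM <= bpm <= UPPER_BPM:
        return
    message = f"ALERT: patient heart rate {bpm} bpm is outside [{LOWER_BPM}, {UPPER_BPM}]"
    requests.post(SLACK_WEBHOOK, json={"text": message}, timeout=10)

check_heart_rate(135)  # would notify doctor, family, and emergency crew
```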
## How I built it
Using Microsoft Band's Android API, with Smooch chatting client integrated with Slack.
## Challenges I ran into
Initializing Smooch API properly, connecting to the band, Android build versions, SharedPreferences in Android.
## Accomplishments that I'm proud of
Simple solution for a potentially big problem.
## What I learned
Building Android Application.
## What's next for Dr Heart
* Remote control for doctor
* Accelerometer, barometer, step count integration to eliminate false detection
* Online record database
* Automatic emergency calls
|
## Inspiration
Generative AI has the ability to overtake the creative domain of artists, and NFTs, which are minted on the blockchain to help artists maintain control of their work, have been exploited by fraudsters. Additionally, the unauthorized minting of NFTs can lead to copyright issues with an artist's work. Therefore, we created VibeWire to be a secure platform where the usage, management, and licensing of NFTs is handled through secure contracts and minting, backed by our vector-based histogram model with high image-comparison accuracy.
## What it does
The front end of our website enables you to enter your API key and begin minting. Using Pinecone's vector database, we built a model that constructs a comparison histogram from the RGB pixels of images (to address unwanted replication, corruption, etc.). This model achieved over 90% accuracy when we ran it. Our omnichain token minting saves the hash of the image so you and the artist/creator can verify an image exists and is associated with an NFT.
## How we built it
VerbWire: VerbWire was an offered track, and deployment of smart contracts was easy. Omnichain was a bit difficult, as some of our team members got processing errors. We also sometimes got processing errors when trying to deploy without adding an optional wallet ID. To avoid these issues, we created our own test wallets and deployed ERC-721 (standard NFT contract), as advised by VerbWire co-founder Justin when we met him during mentor office hours.
Three.js: Our teammate used Three.js in building the front-end for our website.
Pinecone, Numpy, and Pillow: Our teammate used Pinecone and these Python packages to help him build histograms that compared RGB values of images in our marketplace being traded, which helped in fractionalizing ownership of these image assets.
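To illustrate the histogram-comparison idea, here is a minimal Python sketch using Pillow and NumPy; the similarity metric and 0.9 threshold are illustrative, and the real pipeline stored these vectors in Pinecone:

```python
# Minimal sketch of the RGB-histogram comparison idea, using Pillow and NumPy.
# The intersection metric and 0.9 threshold are illustrative; our actual model
# stored and queried these vectors through Pinecone.
import numpy as np
from PIL import Image

def rgb_histogram(path: str, bins: int = 64) -> np.ndarray:
    """Flattened, normalized per-channel RGB histogram of an image."""
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    channels = [
        np.histogram(pixels[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]
    hist = np.concatenate(channels).astype(np.float64)
    return hist / hist.sum()

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

if similarity(rgb_histogram("original.png"), rgb_histogram("candidate.png")) > 0.9:
    print("Likely a copy or near-duplicate of the minted artwork")
```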
## Challenges we ran into
One of the challenges we ran into was endless minting after an API key was entered on our front-end website portal. Another challenge was finding a functional GitHub repository we could add onto, as the localhost command was not working with some repos we came across despite our attempts to clear ports, remove firewalls, etc. (but we eventually found one!).
## Accomplishments that we're proud of
We were able to develop a model that could do an image comparison histogram (the details of which are described above). This involved work with vector-based databases like Pinecone, and it had a high accuracy (90%+).
## What we learned
We learned the power of VerbWire, deploying smart contracts, and NFTs. We learned how to work with image hashes as well as tying the front-end of our minting website to the back-end of our vector-based histogram model. Most of us entered as hackers with no prior use of VerbWire/the blockchain, and we all learned something new!
## What's next for VibeWire
Let's continue to protect more assets of artists: images aren't the only form of art, and we want to help safeguard many more types of artists!
|
**Previously named NeatBeat**
## Inspiration
We wanted to make it easier to understand your pressure, so you **really** are not pressured about what your heart deserves.
## What it does
**Track and chart blood pressure readings** 📈❤️
* Input your systolic and diastolic to be stored on the cloud
* See and compare your current and past blood pressure readings
**Doctor and Patient system** 🩺
* Create doctor and patient accounts
* Doctors will be able to see all their patients and their associated blood pressure readings
* Doctors are able to make suggestions based on their patient readings
* Patients can see suggestions that their doctor makes
## How we built it
Using the Django web framework: a backend in Python, and frontend development with HTML, Bootstrap, and Chart.js for charting.
## Challenges we ran into
* Too many front end issues to count 😞
## Accomplishments that we're proud of
* Being able to come together to collaborate and complete a project solely over the internet.
* Successfully splitting the projects into parts and integrating them together for a functioning product
## What we learned
* Hacking under a monumental school workload and a global pandemic
## What's next for NeatBeat
* Mobile-friendly interface
|
partial
|
## Inspiration
The fashion industry is often overlooked when we think about the main suspects in pollution. The industry has been burdened by hidden supply chains, unethical labor practices, and significant environmental damage across various sectors. The UN Environment Programme (UNEP) reports that the fashion industry is the second-largest consumer of water and accounts for approximately 10% of global carbon emissions, exceeding the combined emissions of all international flights and maritime shipping. As consumers, we often only see the final product, overlooking or not even realizing the harmful impacts caused throughout the fashion production process. In our quest for transparency, we investigated the potential of blockchain and identified safety contracts as a crucial solution for ensuring accountability at every stage of the manufacturing process, so users are encouraged to become more selective with the products they purchase.
## What it does
There are two primary participants in the use of safety contracts. First, the admins (such as distributors, manufacturers, or suppliers) sign off on successfully transferring the physical product and its digital twin to the next part of the distribution chain. Second, the end users scan the final QR code, which contains a unique hash tied to the garment and the blockchain, allowing them to access and collect a digitized version of the item. This ensures both transparency in the supply chain and a digital representation of the product for users to track.
This decentralized auditing system adds another layer of accountability, as multiple parties independently validate the successful transfer of both the physical product and its digital twin. The distributed nature of this system reduces the risk of corruption or errors that may occur in a centralized system, ensuring that every step in the supply chain is transparent, verified, and traceable.
Greenwashing, the practice of falsely portraying products or companies as environmentally friendly, is so widespread in the fashion industry that full transparency is critical to combat it. Ultimately, this level of transparency allows consumers to trust the product and make informed decisions, knowing that the garment's sustainability credentials genuinely align with the practices behind it.
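To make the QR-plus-hash idea concrete, here is a hedged Python sketch (the production backend is Motoko on the Internet Computer; the `qrcode` library, the URL scheme, and the metadata format below are illustrative assumptions):

```python
# Sketch of tying a garment's QR code to a unique hash. The production
# backend is written in Motoko on the Internet Computer; the URL scheme,
# metadata format, and use of the `qrcode` library here are illustrative.
import hashlib
import qrcode  # pip install qrcode[pil]

def garment_hash(serial: str, product_metadata: str) -> str:
    """Deterministic hash identifying one physical garment."""
    return hashlib.sha256(f"{serial}:{product_metadata}".encode()).hexdigest()

h = garment_hash("GARMENT-0001", "organic-cotton-tee|factory=X|batch=42")

# Embed a lookup URL in the QR code so scanning resolves the digital twin.
img = qrcode.make(f"https://verithread.example/track/{h}")
img.save("garment_qr.png")
```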
## How we built it
Frontend - TypeScript and Figma
Backend - Motoko
## Challenges we ran into
The first big hurdle we ran into was setting up the frontend of our website. We also ran into issues figuring out how ICP tokens work and how to deploy our project on the blockchain using the main net.
## Accomplishments that we're proud of
We're extremely proud of being able to implement a difficult concept when none of us had any blockchain experience. We entered the hackathon with knowledge of front-end and back-end development. We finished with an end-to-end application addressing an important real-world problem: increasing sustainability by integrating blockchains.
## What we learned
For this hackathon, we decided to push ourselves and develop a project utilizing blockchain. As this was a new technical area for all of us, the past 36 hours created an environment for non-stop learning. For one, we learned what ICP (Internet Computer Protocol) is and the benefits of using its backend software. Smart contracts (or canisters) are computational units that developers deploy to the Internet Computer, and they interact with one another automatically. We also learned about Layer 2 scaling solutions, the concept of building blockchains on top of each other. After researching all the ways we could incorporate blockchain into an app, we arrived at the concept of tracking supply chains: all the participants in the supply chain have access to a shared, decentralized ledger, and each transaction or change in product status is recorded as a block in this ledger. Once we understood the backend logic, we evaluated the frontend implementation and came to the idea of QR scanning. This was another learning curve for us, because none of us had worked extensively with embedding information in a QR code, especially retrieving the serial code and sending a request to the blockchain. Developing this process took a lot of research as we learned about dfx libraries, the use of Node.js, and development in Motoko. Another lesson was the practice and importance of user interface design and translating Figma to TypeScript. Finally, we implemented API calls to integrate the backend with the frontend. Overall, the team learned a lot about blockchain, front-end design and development, and API integration through the Hack the Valley 2024 hackathon.
## What's next for VeriThread
Aside from continuing to develop a dedicated mobile app, we would like to implement an incentive where people receive partial ICP funds, encouraging them to continue shopping sustainably while also adding a gamified element to the experience. We would also like to expand into new markets and integrate VeriThread's blockchain technology with more retail partners, making it easier for people to shop ethically.
|
Hospital visits can be an uneasy and stressful time for parents and children. Since parents aren't always able to be present during children's long hospital stays, MediFeel tries to facilitate the communication process between the child, the parents, and doctors and nurses.
The child is able to send a status update about their feelings and, if they need a doctor's assistance, can call for help. The parents are notified of all of this, and the doctor can easily communicate what happened. If a child is feeling upset, the parents would know to reach out to their child earlier than they would have otherwise.
For future implementation, the UI of the website would look something closer to this:
[link](https://invis.io/J4FBSWXEKZF#/273379990_Desktop_HD)
|
**What inspired us**
Despite the increasing prevalence of LLMs, their power still hasn't been leveraged to improve the experience of students during class. In particular, LLMs are often discouraged by professors because they can give inaccurate information or give away too much. To remedy this issue, we created an LLM assistant that has access to all the information for a course, including the course information, lecture notes, and problem sets. Furthermore, for this to be useful for actual courses, we made sure the LLM would not answer specific questions about the problem set. Instead, it guides the student and provides relevant information for the student to complete the coursework without providing the direct answer. It essentially serves as a TA, helping students navigate their problem sets.
**What we learned**
Through this project, we delved into the complexities of integrating AI with software solutions, uncovering the essential role of user interface design and the nuanced craft of prompt engineering. We learned that crafting effective prompts is crucial, requiring a deep understanding of the AI’s capabilities and the project's specific needs. This process taught us the importance of precision and creativity in prompt engineering, where success depends on translating educational objectives into prompts that generate meaningful AI responses.
Our exploration also introduced us to the concept of retrieval-augmented generation (RAG), which combines the power of information retrieval with generative models to enhance the AI's ability to produce relevant and contextually accurate outputs. While we explored the potentials of using the OpenAI and Together APIs to enrich our project, we ultimately did not incorporate them into our final implementation. This exploration, however, broadened our understanding of the diverse AI tools available and their potential applications. It underscored the importance of selecting the right tools for specific project needs, balancing between the cutting-edge capabilities of such APIs and the project's goals. This experience highlighted the dynamic nature of AI project development, where learning about and testing various tools forms a foundational part of the journey, even if some are not used in the end.
**How we built our project**
Building our project required a strategic approach to assembling a comprehensive dataset from the Stanford CS 106B course, which included the syllabus, problem sets, and lectures. This effort ensured our AI chatbot was equipped with a detailed understanding of the course's structure and content, setting the stage for it to function as an advanced educational assistant. Beyond the compilation of course materials, a significant portion of our work focused on refining an existing chatbot user interface (UI) to better serve the specific needs of students engaging with the course. This task was far from straightforward; it demanded not only a deep dive into the chatbot's underlying logic but also innovative thinking to reimagine how it interacts with users. The modifications we made to the chatbot were extensive and targeted at enhancing the user experience by adjusting the output behavior of the large language model (LLM).
A pivotal change involved programming the chatbot to moderate the explicitness of its hints in response to queries about problem sets. This adjustment required intricate tuning of the LLM's output to strike a balance between guiding students and stimulating independent problem-solving skills. Furthermore, integrating direct course content into the chatbot’s responses necessitated a thorough understanding of the LLM's mechanisms to ensure that the chatbot could accurately reference and utilize the course materials in its interactions. This aspect of the project was particularly challenging, as it involved manipulating the chatbot to filter and prioritize information from the course data effectively. Overall, the effort to modify the chatbot's output capabilities underscored the complexity of working with advanced AI tools, highlighting the technical skill and creativity required to adapt these systems to meet specific educational objectives.
**Challenges we faced**
Some challenges we faced included scoping our project to ensure it was feasible given the constraints of this hackathon, including time. We learned React.js and PL/pgSQL for our project, since we had only used JavaScript previously. Other challenges included installing Docker and the Supabase CLI and ensuring all dependencies were properly managed. We also had to configure Supabase and create the database schema. There were also deployment configuration issues, as we had to integrate our front-end application with our back-end to ensure they were communicating properly.
|
partial
|
## Inspiration
Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product with several socially impactful use cases. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies, including group insurance, financial literacy, and personal investment.
## What it does
Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with decreased, shared risk.
## How we built it
* Smart Contract for the transfer of funds on the blockchain made using Solidity
* A robust backend and authentication system made using node.js, express.js, and MongoDB.
* Elegant front end made with react-native and Celo's SDK.
## Challenges we ran into
We were unfamiliar with the tech stack used to create this project and with blockchain technology.
## What we learned
We learned many new languages and frameworks. This includes building cross-platform mobile apps in React Native, as well as the underlying principles of blockchain technology such as smart contracts and decentralized apps.
## What's next for *PoolNVest*
Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds.
Refining and improving the proof of concept into a marketable MVP, and tailoring the UI toward the specific use cases mentioned above.
|
## Inspiration
In a world where finance is extremely important, everyone needs access to **banking services**. Citizens within **third world countries** are no exception, but they lack the banking technology infrastructure that many of us in first world countries take for granted. Mobile Applications and Web Portals don't work 100% for these people, so we decided to make software that requires nothing more than a **cellular connection to send SMS messages** in order to operate. This resulted in our hack, **UBank**.
## What it does
**UBank** allows users to operate their bank accounts entirely through **text messaging**. Users can deposit money, transfer funds between accounts, transfer accounts to other users, and even purchase shares of stock via SMS. In addition to this text messaging capability, UBank also provides a web portal so that when our users gain access to a steady internet connection or PC, they can view their financial information on a more comprehensive level.
## How I built it
We set up a backend HTTP server in **Node.js** to receive and fulfill requests. **Twilio** with ngrok was used to send and receive text messages through a webhook on the backend Node.js server, and applicant data was stored in Firebase. The frontend was primarily built with **HTML, CSS, and JavaScript**, and HTTP requests were sent to the Node.js backend to receive applicant information and display it in the browser. We utilized Mozilla's speech-to-text library to incorporate speech commands and Chart.js to display client data with intuitive graphs.
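For illustration, here is a hedged sketch of the SMS-command webhook written in Python/Flask (our actual server is Node.js; the command grammar and replies are simplified placeholders). Twilio POSTs the message `Body` and `From` fields to this route and expects TwiML back:

```python
# Hedged sketch of the SMS-command webhook, shown in Python/Flask for
# illustration -- the actual UBank backend is Node.js. Twilio POSTs the
# message body and sender to this route; we reply with TwiML.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def sms_webhook():
    body = request.form["Body"].strip().upper()   # e.g. "DEPOSIT 100"
    sender = request.form["From"]                 # e.g. "+15551234567"
    reply = MessagingResponse()

    parts = body.split()
    if parts and parts[0] == "DEPOSIT" and len(parts) == 2:
        # In the real app this would update the account balance in Firebase.
        reply.message(f"Deposited ${parts[1]} to the account for {sender}.")
    elif parts and parts[0] == "BALANCE":
        reply.message("Your balance is $0.00.")  # placeholder lookup
    else:
        reply.message("Commands: DEPOSIT <amount>, BALANCE")
    return str(reply)
```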
## Challenges I ran into
* Some team members were new to Node.js, and therefore working with some of the server coding was a little complicated. However, we were able to leverage the experience of other group members which allowed all of us to learn and figure out everything in the end.
* Using Twilio was a challenge because no team members had previous experience with the technology. We had difficulties making it communicate with our backend Node.js server, but after a few hours of hard work we eventually figured it out.
## Accomplishments that I'm proud of
We are proud of making a **functioning**, **dynamic**, finished product. It feels great to design software that is adaptable and begging for the next steps of development. We're also super proud that we made an attempt at tackling a problem that is having severe negative effects on people all around the world, and we hope that someday our product can make it to those people.
## What I learned
This was our first time using **Twilio**, so we learned a lot about utilizing that software. Front-end team members also got to learn and practice their **HTML/CSS/JS** skills, which was a great experience.
## What's next for UBank
* The next step for UBank probably would be implementing an authentication/anti-fraud system. Being a banking service, it's imperative that our customers' transactions are secure at all times, and we would be unable to launch without such a feature.
* We hope to continue the development of UBank and gain some beta users so that we can test our product and incorporate customer feedback in order to improve our software before making an attempt at launching the service.
|
## Inspiration
I've always been interested in learning about the various methods of investing and how to generate multiple passive income streams. When I found out that 43% of millennials don't know where to get started in the stock market, I wanted to create an app that could educate individuals on why certain stocks are beating the market and how they can get started with their investment budget.
## What it does
The homepage focuses on the top gainers of the week and explains why they are doing so well. I also used an external library (react-native charts) to display stock charts for the current week. The second screen is a newsroom where users can read about various companies and build their knowledge. The last screen is a calculator where users can input their investment budget and it will render out information on which areas of the market they should invest in.
## How we built it
This app is built with React Native, but uses data from a stock API called 'Alpha Vantage'. I also used react-native charts to display the stock charts of the current week.
## Challenges we ran into
I had trouble conditionally rendering the different information topics based on what the user inputted. I also had to spend a lot of time researching the different topics, so finishing on time was definitely a big challenge.
## Accomplishments that we're proud of
I am really proud of the design of the overall app. I feel like it could have been a bit better, especially the newsroom, but overall I am happy with the design of the app.
## What we learned
As I was researching topics for this app, I also learned many different investing strategies myself that I am so excited to try out!
## What's next for StockUp
I hope to link this app to a news API so that it keeps updating automatically every day. I would also like to add user authentication so that users can have their own personal accounts where they can add stocks to their watchlist.
|
winning
|
## Inspiration
We felt that there was a lack of accessible academic material and analytics for text data.
## What it does
It takes in a text or a YouTube comment section that the user inputs, spits out a quick TL;DR summary of what the text is saying, and gives a quick statistical breakdown.
## How we built it
We used Python, the co:here API, the YouTube Data API v3, and Flask to develop our project. Python was our language of choice as we implemented the two APIs, and Flask was used for the web development.
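A minimal sketch of the summarization call, using the co:here Python SDK with prompt-style TL;DR generation; the model settings and API key are placeholders, and the exact endpoint we used may differ:

```python
# Minimal sketch of the TL;DR call using the co:here Python SDK. The exact
# settings and endpoint we used may differ; the API key is a placeholder.
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder

def tldr(text: str) -> str:
    response = co.generate(
        prompt=f"{text}\n\nTL;DR:",  # classic prompt-style summarization
        max_tokens=80,
        temperature=0.3,
    )
    return response.generations[0].text.strip()

print(tldr("Long YouTube comment section or article text goes here..."))
```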
## Challenges we ran into
INTERNET.
## Accomplishments that we're proud of
Learning to use APIs.
## What we learned
There are a lot of cool APIs out there that can do many cool things.
## What's next for TL:DR
We would like to further develop our project website and work out all the little bugs. One of our ideas is to combine co:here and data science to provide in-depth analytics about a piece of text, such as a distribution of sentence sentiment.
|
## Inspiration
We began thinking about this idea the night before the competition, while we were all frantically trying to finish homework, since we would be at Calhacks over the entire weekend. Additionally, a few members of our team have ADHD, and focusing for extended periods of time can be difficult. We looked at some other websites that claimed to have similar functionality, but they often failed to effectively summarize anything beyond a basic news article.
## What it does
Our software takes in an image and, after parsing the layout of the document, creates a summary of the document using custom models that we made with Cohere's API, so that a more accurate summary can be created depending on the subject and type of content.
## How we built it
We used a library called LayoutParser to identify relevant text in the submitted image, and we took advantage of some publicly available datasets and Cohere's Finetune feature to create our own models for different types of documents. The website itself was programmed using HTML, CSS, and vanilla JS integrated with Flask.
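As a sketch of the layout-detection step, the snippet below follows LayoutParser's published PubLayNet example (it requires the Detectron2 backend; the image path and score threshold are placeholders):

```python
# Sketch of the layout-detection step with LayoutParser, using a pretrained
# PubLayNet model as in the library's published examples (requires the
# Detectron2 backend). The image path and score threshold are placeholders.
import layoutparser as lp
import cv2

image = cv2.imread("document.jpg")
image = image[..., ::-1]  # OpenCV loads BGR; convert to RGB

model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)
layout = model.detect(image)

# Keep only text blocks; each block's coordinates can then be cropped,
# run through OCR, and passed to the fine-tuned summarization model.
text_blocks = [b for b in layout if b.type == "Text"]
print(f"Found {len(text_blocks)} text regions")
```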
## Challenges we ran into
This was our first time creating anything with this sort of functionality, so integrating the front end and back end was new to us. Understanding and reformatting the datasets was also confusing at first, since we don't have much experience with data processing, but we were eventually able to get existing datasets into the format we wanted.
## Accomplishments that we're proud of
We're really happy about everything, since it's our first hackathon and first project of this kind in general. There's a lot we could do better, but we're proud that we were able to actually have something to submit.
## What we learned
We learned a lot about frameworks like Flask to integrate the back end with the front end of the website, and we learned a lot about data processing. Cohere's API also introduced us to the field of NLP and some of the advanced functionality that powerful models can offer.
## What's next
We would like to create more models for other subjects and document types that we and our friends regularly deal with, like documents in Old English or Wikipedia/Wikimedia pages.
|
## Inspiration
Has your browser ever looked like this?

... or this?

Ours have, *all* the time.
Regardless of who you are, you'll often find yourself working in a browser on not just one task but a variety of tasks. Whether it's classes, projects, financials, research, or personal hobbies, there are many different, yet predictable, ways in which we open an endless number of tabs for fear of forgetting a chunk of information that may someday be relevant.
Origin aims to revolutionize your personal browsing experience -- one workspace at a time.
## What it does
In a nutshell, Origin uses state-of-the-art **natural language processing** to identify personalized, smart **workspaces**. Each workspace is centered around a topic comprised of related tabs from your browsing history. For each workspace, Origin provides your most recently visited tabs pertaining to it (and related future ones), a generated **textual summary** of those websites drawn from all their text, and a **fine-tuned ChatBot** trained on data about that topic, ready to answer specific user questions with citations while maintaining conversation history. The ChatBot not only answers general factual questions (given it's built on a foundation model), but also answers and recalls specific facts found in the URLs/files that the user visits (e.g. linking to a course syllabus).
Origin also provides **semantic search** over resources, and monitors which URLs other people in an organization visit in order to recommend pertinent ones to the user via a **recommendation system**.
For example, a college student taking a History class and performing ML research on the side would have sets of tabs that would be related to both topics individually. Through its clustering algorithms, Origin would identify the workspaces of "European History" and "Computer Vision", with a dynamic view of pertinent URLs and widgets like semantic search and a chatbot. Upon continuing to browse in either workspace, the workspace itself is dynamically updated to reflect the most recently visited sites and data.
**Target Audience**: Students to significantly improve the education experience and industry workers to improve productivity.
## How we built it

**Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS
**Frameworks and Tools**: Firebase ∙ React.js ∙ Flask ∙ LangChain ∙ OpenAI ∙ HuggingFace
There are a couple of different key engineering modules that this project can be broken down into.
### 1(a). Ingesting Browser Information and Computing Embeddings
We begin by developing a Chrome Extension that automatically scrapes browsing data in a periodic manner (every 3 days) using the Chrome Developer API. From the information we glean, we extract titles of webpages. Then, the webpage titles are passed into a pre-trained Large Language Model (LLM) from Huggingface, from which latent embeddings are generated and persisted through a Firebase database.
### 1(b). Topical Clustering Algorithms and Automatic Cluster Name Inference
Given the URL embeddings, we run K-Means Clustering to identify key topical/activity-related clusters in browsing data and the associated URLs.
We automatically find a description for each cluster by prompt engineering an OpenAI LLM, specifically by providing it the titles of all webpages in the cluster and requesting it to output a simple title describing that cluster (e.g. "Algorithms Course" or "Machine Learning Research").
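A condensed sketch of steps 1(a)-1(b): embed webpage titles, cluster them, and prompt an LLM to name each cluster. The model names are illustrative, and the OpenAI call uses the legacy pre-1.0 SDK:

```python
# Condensed sketch of steps 1(a)-1(b): embed webpage titles, cluster them,
# then prompt an LLM to name each cluster. Model names are illustrative,
# and the OpenAI call uses the legacy pre-1.0 SDK.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

titles = ["Intro to Algorithms - Syllabus", "Dijkstra's algorithm - Wikipedia",
          "ResNet paper", "CVPR 2023 deadlines", "French Revolution overview"]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = embedder.encode(titles)

kmeans = KMeans(n_clusters=3, n_init=10).fit(embeddings)

for cluster_id in range(3):
    members = [t for t, label in zip(titles, kmeans.labels_) if label == cluster_id]
    prompt = ("Output a simple title describing this set of webpages:\n"
              + "\n".join(members))
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(cluster_id, completion.choices[0].message["content"])
```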
### 2. Web/Knowledge Scraping
After pulling the user's URLs from the database, we asynchronously scrape through the text on each webpage via Beautiful Soup. This text provides richer context for each page beyond the title and is temporarily cached for use in later algorithms.
### 3. Text Summarization
We split the incoming text of all the web pages using a CharacterTextSplitter to create smaller documents, and then perform summarization in a map-reduce fashion over these smaller documents using a LangChain summarization chain, which increases the ability to maintain broader context while parallelizing the workload.
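A minimal sketch of this step with the classic LangChain API (chunk sizes and the input text are placeholders, and an OpenAI API key is assumed to be set in the environment):

```python
# Sketch of the map-reduce summarization step with the classic LangChain API.
# Chunk sizes are illustrative; assumes OPENAI_API_KEY is set.
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

raw_text = "...all the scraped text from the workspace's webpages..."

splitter = CharacterTextSplitter(chunk_size=1500, chunk_overlap=100)
docs = [Document(page_content=chunk) for chunk in splitter.split_text(raw_text)]

# "map_reduce" summarizes each chunk independently, then combines the
# partial summaries, which parallelizes well over many pages.
chain = load_summarize_chain(OpenAI(temperature=0), chain_type="map_reduce")
summary = chain.run(docs)
print(summary)
```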
### 4. Fine Tuning a GPT-3 Based ChatBot
The infrastructure for this was built on a recently-made popular open-source Python package called **LangChain** (see <https://github.com/hwchase17/langchain>), a package with the intention of making it easier to build more powerful Language Models by connecting them to external knowledge sources.
We first deal with data ingestion and chunking, before embedding the vectors using OpenAI Embeddings and storing them in a vector store.
To provide the best chatbot possible, we keep track of a history of a user's conversation and inject it into the chatbot during each user interaction, while simultaneously looking up relevant information that can be quickly queried from the vector store. The generated prompt is then passed to an OpenAI LLM to interact with the user in a knowledge-aware context.
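A hedged sketch of this loop using LangChain's conversational retrieval chain over an in-memory FAISS store (the store choice and example chunks are illustrative; our production pipeline differs in its data ingestion):

```python
# Sketch of the history-aware chatbot described above, using LangChain's
# conversational retrieval chain over an in-memory FAISS vector store.
# The store choice and example chunks are illustrative.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import OpenAI

chunks = ["Lecture 3 covers dynamic programming.", "The midterm is on May 2."]
store = FAISS.from_texts(chunks, OpenAIEmbeddings())

chatbot = ConversationalRetrievalChain.from_llm(
    OpenAI(temperature=0), retriever=store.as_retriever()
)

chat_history = []  # injected into every interaction, as described above
result = chatbot({"question": "When is the midterm?", "chat_history": chat_history})
chat_history.append(("When is the midterm?", result["answer"]))
print(result["answer"])
```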
### 5. Collaborative Filtering-Based Recommendation
Provided that a user does not turn privacy settings on, our collaborative filtering-based recommendation system recommends URLs that other users in the organization have seen that are related to the user's current workspace.
### 6. Flask REST API
We expose all of our LLM capabilities, recommendation system, and other data queries for the frontend through a REST API served by Flask. This provides an easy interface between the external vendors (like LangChain, OpenAI, and HuggingFace), our Firebase database, the browser extension, and our React web app.
### 7. A Fantastic Frontend
Our frontend is built using the React.js framework. We use axios to interact with our backend server and display the relevant information for each workspace.
## Challenges we ran into
1. We had to deal with our K-Means Clustering algorithm outputting changing cluster means over time as new data is ingested, since the URLs that a user visits changes over time. We had to anchor previous data to the new clusters in a smart way and come up with a clever updating algorithm.
2. We had to employ caching of responses from the external LLMs (like OpenAI/LangChain) to operate under the rate limit. This was challenging, as it required revamping our database infrastructure for caching.
3. Enabling the Chrome extension to speak with our backend server was a challenge, as we had to periodically poll the user's browser history and deal with CORS (Cross-Origin Resource Sharing) errors.
4. We worked modularly which was great for parallelization/efficiency, but it slowed us down when integrating things together for e2e testing.
## Accomplishments that we're proud of
The scope of ways in which we were able to utilize Large Language Models to redefine the antiquated browsing experience and provide knowledge centralization.
This idea was a byproduct of our own experiences in college and high school -- we found ourselves spending significant amounts of time attempting to organize tab clutter systematically.
## What we learned
This project was an incredible learning experience for our team as we took on multiple technically complex challenges to reach our ending solution -- something we all thought that we had a potential to use ourselves.
## What's next for Origin
We believe Origin will become even more powerful at scale, since many users/organizations using the product would improve the ChatBot's ability to answer commonly asked questions, and the recommender system would perform better in aiding user's education or productivity experiences.
|
losing
|
## Inspiration
**Getting stuck at a red light at night when there are no other cars around, as well as the TED Talk by Gary Lauder, "New traffic sign takes turns".**
## What it does
**We use the CityIQ traffic camera sensor to detect when a car is approaching a light and trigger a change to green when there are no other cars around so that the car does not have to stop and re-accelerate and/or idle at the light.**
## How we built it
**We used the CityIQ Python example to access CityIQ's REST API to get historical data, and we used the WebSocket to get live information on traffic.**
|
## Inspiration
The University of Ottawa prides itself on being a bilingual university, offering classes in both French and English. However, while students are able to learn in their preferred language, informal communication (text messages and voice memos) between students is still limited to a common language. As this is often English, native French speakers don't get to communicate in their preferred language. To make communication more accessible in everyone's preferred language, we aimed to create a messaging app that would automatically translate incoming messages and generate a voice memo in the likeness of the sender, all in the recipient's preferred language. Students can therefore listen to text messages in their preferred language, in the voice of their friends!
## What it does
Duality is a messaging web app that allows you to read and listen to text messages in your preferred language, but in the voice of the sender. For example, if the user's preferred language is French, incoming English text messages will be translated to French, and an audio recording of the French text will be generated using a voice clone of the sender.
While our original goal was to make communication in one's preferred language more accessible, Duality also has other use cases:
* It can help speakers with speech impairments communicate more effectively by generating speech in their own voice
* Users can listen to text messages in a foreign language that they are trying to learn, with the ability to compare to the original
* Users can listen to what they sound like speaking a foreign language to improve pronunciation while learning
## How we built it
The frontend was built using React and JavaScript, while the backend was built using Python and Node.js. The voice cloning was done using xTTSv2, a text-to-speech foundation model that is freely available from Coqui-ai's TTS library. The text-to-speech model was hosted in a Docker container.
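As a sketch of the voice-cloning step, here is how Coqui's TTS library exposes xTTSv2; the reference clip, output path, and example message are placeholders:

```python
# Sketch of the voice-cloning step with Coqui's TTS library and xTTSv2.
# The reference clip path, output path, and message text are placeholders.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone the sender's voice from a short reference clip and speak the
# translated message in the recipient's preferred language.
tts.tts_to_file(
    text="Salut! On se voit à la bibliothèque à 15 h?",
    speaker_wav="sender_reference.wav",   # short recording of the sender
    language="fr",
    file_path="translated_memo.wav",
)
```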
## Challenges we ran into and what we learned
Setting up an AI model on the backend was unfamiliar to us and was a very rewarding experience. We learned about Firebase, Firestore, and Cloud Storage using buckets, which were all unfamiliar concepts before the Hackathon.
## Accomplishments that we're proud of
It was fun to listen to our own voices speaking other languages and with different accents!
## What's next for Duality
Our original plan of making a mobile app had to be pivoted to a web app because of the time constraints of the hackathon. A natural next step for Duality would be to make a mobile app with the same features, or to polish the UX of the web app.
## Footnotes
This project was made by 2 UBC students (and 1 UOttawa student), and brought to you by Tuum Est!

|
## Inspiration
We aimed to build smart cities by helping garbage truck drivers receive the most efficient route in near-real-time, minimizing waste buildup in prone areas. This approach directly addresses issues of inefficiency in traditional waste collection, which often relies on static daily or weekly routes.
You might be wondering, why not just let drivers follow standard routes? In densely populated areas, waste accumulates faster than in less populated zones. This means that a one-size-fits-all route approach leads to inefficient pickups, resulting in garbage buildup that contributes to air and water pollution. By targeting areas where waste accumulates more quickly, we can reduce contamination, improve air quality, and create greener, healthier environments. Additionally, optimizing waste collection can lead to more sustainable use of resources and reduce the strain on landfills.
## What it does
CleanSweep has sensors that collect real-time information about all the trash cans in the city, detecting different waste levels. This data is live-updated in the truck drivers' main portal while they drive, allowing them to receive the most efficient collection routes. These routes are optimized based on real-time data, including the percentage of trash bin capacity filled, traffic conditions between bins, and the number of days since the last pickup. As drivers collect trash, they can update their progress live to receive the next optimized route for the remaining bins.
## How we built it
For our hardware: we used two phone cameras in two similar scenarios that repeatedly sent photos to a computer. Each image is then passed to the Raspberry Pi, where Python OpenCV is used to detect the trash level. A Node.js script then displays the level (1 to 5) on a set of LEDs and passes the information to a local server for further processing on the software side.
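A simplified sketch of the fill-level check: threshold the snapshot, take the largest contour as the trash, and map its height to the 1-to-5 LED scale. The threshold value is illustrative; as noted in the challenges below, lighting made tuning this hard in practice:

```python
# Simplified sketch of the fill-level check: threshold the image, find the
# largest contour (the trash), and compare its height to the bin's height.
# The threshold value is illustrative -- lighting made tuning hard in practice.
import cv2

frame = cv2.imread("bin_snapshot.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 90, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    fill_fraction = h / frame.shape[0]            # contour height vs. bin height
    level = min(5, max(1, round(fill_fraction * 5)))  # map to the 5 LEDs
    print(f"Trash level: {level}/5")
```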
Backend: We fed the reported trash-level data from the hardware, information about traffic in the surrounding vicinity and distance to each bin from the Google Maps API and Google Distance Matrix API, and pre-modeled values for time since last garbage retrieval into a Random Forest Classifier model on Databricks. Our model predicts how to prioritize the bin routes in order to get a short path and distance, resulting in fewer emissions. An adjacency matrix was then used to retrieve the highest-priority paths based on traffic and waste levels.
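A toy sketch of the prioritization model; the features mirror the ones described above, but the training rows and priority labels are invented for illustration:

```python
# Toy sketch of the bin-prioritization model: a random forest over fill level,
# traffic, and days since pickup. The training data here is invented.
from sklearn.ensemble import RandomForestClassifier

# Features: [percent_full, traffic_delay_minutes, days_since_pickup]
X = [[95, 4, 6], [20, 2, 1], [70, 10, 3], [40, 1, 5], [88, 6, 2]]
y = [2, 0, 1, 1, 2]  # priority class: 0 = low, 1 = medium, 2 = high

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[90, 3, 4]]))  # e.g. [2] -> visit this bin first
```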
Frontend: We used React for the UI and TailwindCSS for styling to create the portal, deployed via Terraform. We brought all of this information together to display the recommended optimal routes based on how full the bins were (which can be changed by adding more trash to bins 1 and 2 in real life).
## Challenges we ran into
Measuring the trash level in a bin turned out to be very challenging for what seemed like a simple task. We first looked for pre-trained models that could do it for us, but existing models only worked for object tracking. We turned to OpenCV, but that only worked under certain conditions. Thus, one of the hardest challenges we faced was making sure the lighting conditions, the camera and hardware setup, and our OpenCV contour algorithm worked deterministically, as even a shadow could throw off our results.
## Accomplishments that we're proud of
None of us had ever worked with OpenCV or used hardware at a hackathon. We wanted to take some of the skills we learned in our electrical classes and implement them in a cool solution, even if it was a bit tacky. We were super proud of learning how OpenCV works and how we could combine hardware and a network to create a complete, interactive solution. Hardware was a major component of this hackathon for us, as we've always wanted to use it to make a project easier to visualize and understand for everyone. We enjoyed stepping out of our comfort zones and using new technologies; each member applied a new skill to the project.
## What we learned
All of us learned about the networking and hardware pieces required to communicate, including the security standards and communication frequency needed to keep all the data real-time. We learned how to build a working website with a login system using React and Vite, and how to use the Google Maps API to visualize routes for the drivers. We also learned how GPIO works on a Raspberry Pi and how a simple JS script can control the power output of these pins to drive LEDs. Another skill we found interesting was using OpenCV for image processing, which made computing the fill percentage a split-second task rather than a long run of manual image-processing algorithms. Finally, we started using an ML model to help us find the best routes for drivers, resulting in faster travel times as traffic was avoided and garbage collection optimized.
## What's next for Clean Sweep
Clean Sweep's next goal would be to implement real-time sensors in bins spread across the city to collect data and help train the model with large amounts of data. This would let the real-time UI efficiently plan all the necessary stops for the day. Such a system would also cover rare cases, such as a garbage truck missing a stop, or allow other trucks to take over a broken truck's schedule, since the route would update in real time to collect all garbage quickly and efficiently.
|
losing
|
## Inspiration
I was walking down the streets of Toronto and noticed how there always seemed to be cigarette butts outside of any building. It felt disgusting to me, especially since they polluted the city so much. After reading a few papers on studies revolving around cigarette butt litter, I learned that cigarette butts are actually the #1 most littered object in the world and are toxic waste. Here are some quick facts:
* About **4.5 trillion** cigarette butts are littered on the ground each year
* 850,500 tons of cigarette butt litter is produced each year. By weight, this is about **six and a half CN Towers** worth of litter, which is huge!
* In the city of Hamilton, cigarette butt litter can make up to **50%** of all the litter in some years.
* The city of San Fransico spends up to $6 million per year on cleaning up the cigarette butt litter
Thus, our team decided to develop a cost-effective robot to rid the streets of cigarette butt litter.
## What it does
Our robot is a modern-day Wall-E. The main objectives of the robot are to:
1. Safely drive around the sidewalks in the city
2. Detect and locate cigarette butts on the ground
3. Collect and dispose of the cigarette butts
## How we built it
Our basic idea was to build a robot with a camera that could find cigarette butts on the ground and collect those cigarette butts with a roller-mechanism. Below are more in-depth explanations of each part of our robot.
### Software
We needed a way to reliably detect cigarette butts on the ground, so we used computer vision. We made use of the open-source project [Mask R-CNN for Object Detection and Segmentation](https://github.com/matterport/Mask_RCNN) and [pre-trained weights](https://www.immersivelimit.com/datasets/cigarette-butts). We used a Raspberry Pi and a Pi Camera to take pictures of cigarettes, process the images with Tensorflow, and output the coordinates of each cigarette for the robot. The Raspberry Pi then sends these coordinates to an Arduino over UART.
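As a rough illustration of the Pi-to-Arduino hand-off, the sketch below uses pyserial; the serial port name and the comma-separated message format are assumptions for illustration, not our exact protocol.

```python
# Illustrative sketch of the Pi -> Arduino hand-off over UART, assuming
# pyserial; the port name and "x,y\n" message format are made up here.
import serial

ser = serial.Serial("/dev/ttyS0", 9600, timeout=1)

def send_coordinates(x_cm, y_cm):
    # The Arduino side would parse the comma-separated pair up to newline.
    ser.write(f"{x_cm:.1f},{y_cm:.1f}\n".encode("ascii"))
```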
### Hardware
The Arduino controls all the hardware on the robot, including the motors and roller-mechanism. The basic idea of the Arduino code is:
1. Drive a pre-determined path on the sidewalk
2. Wait for the Pi Camera to detect a cigarette
3. Stop the robot and wait for a set of coordinates from the Raspberry Pi to be delivered with UART
4. Travel to the coordinates and retrieve the cigarette butt
5. Repeat
We use sensors such as a gyroscope and accelerometer to track the speed and orientation of the robot so it knows exactly where to travel. An ultrasonic sensor lets the robot avoid obstacles and make sure it does not bump into people or walls.
### Mechanical
We used Solidworks to design the chassis, the roller/sweeper mechanism, and the camera mounts. The robot itself was assembled from VEX parts, and the mount was 3D-printed from the Solidworks model.
## Challenges we ran into
1. Distance: Working remotely made designing, working together, and transporting supplies challenging. Each group member worked on independent sections and drop-offs were made.
2. Design Decisions: We constantly had to find the most realistic solution based on our budget and the time we had. This meant that we couldn't cover a lot of edge cases, e.g. what happens if the robot gets stolen, what happens if the robot is knocked over ...
3. Shipping Complications: Some desired parts would not have arrived until after the hackathon, so we made alternative choices and worked around shipping dates.
## Accomplishments that we're proud of
We are proud of organizing ourselves efficiently and building this robot despite working remotely. We are also proud of creating something that contributes to our environment and helps keep our Earth clean.
## What we learned
We learned about machine learning and Mask R-CNN. We had never dabbled much with machine learning before, so it was awesome to play with computer vision and detect cigarette butts. We also learned a lot about Arduino and path-planning to get the robot where we need it to go. On the mechanical side, we learned about different intake systems and 3D modeling.
## What's next for Cigbot
There is still a lot to do for Cigbot. Below are some following examples of parts that could be added:
* Detecting different types of trash: It would be nice to be able to gather not just cigarette butts, but any type of trash such as candy wrappers or plastic bottles, and to also sort them accordingly.
* Various Terrains: Though Cigbot is made for the sidewalk, it may encounter rough terrain, especially in Canada, so we figured it would be good to add some self-stabilizing mechanism at some point
* Anti-theft: Cigbot is currently small and can easily be picked up by anyone. This would be dangerous if we left the robot in the streets, since it could easily be damaged or stolen (e.g. someone could rip off and steal our Raspberry Pi). We need to make it larger and more robust.
* Environmental Conditions: Currently, Cigbot is not robust enough to handle more extreme weather conditions such as heavy rain or cold. We need a better encasing to ensure Cigbot can withstand extreme weather.
## Sources
* <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3397372/>
* <https://www.cbc.ca/news/canada/hamilton/cigarette-butts-1.5098782>
* [https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:~:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years](https://www.nationalgeographic.com/environment/article/cigarettes-story-of-plastic#:%7E:text=Did%20You%20Know%3F-,About%204.5%20trillion%20cigarettes%20are%20discarded%20each%20year%20worldwide%2C%20making,as%20long%20as%2010%20years).
|
## Inspiration
Let's start by taking a look at some statistics on waste from Ontario and Canada. In Canada, only nine percent of plastics are recycled, while the rest is sent to landfills. More locally, in Ontario, over 3.6 million metric tonnes of plastic ended up as garbage due to tainted recycling bins. A recycling bin becomes tainted when someone disposes of their waste into the wrong bin, causing the entire bin to be sent to the landfill. Mark Badger, executive vice-president of Canada Fibers, which runs 12 plants that sort about 60 percent of the curbside recycling collected in Ontario, has said that one in three pounds of what people put into blue bins should not be there. This is a major problem, as it is causing our greenhouse gas emissions to grow exponentially. However, if we can reverse this, not only will emissions fall, but according to Deloitte, around 42,000 new jobs will be created. Now let's turn our focus locally. The City of Kingston is seeking input on the implementation of new waste strategies to reach its goal of diverting 65 percent of household waste from landfill by 2025. This project is now in its public engagement phase. That’s where we come in.
## What it does
Cycle AI is an app that uses machine learning to classify articles of trash/recyclables and build awareness of what a user throws away. You simply pull out your phone, snap a shot of whatever you want to dispose of, and Cycle AI tells you where to throw it out as well as what it is you are throwing out. On top of that, there are achievements for things such as using the app to sort your recycling every day for a certain number of days. You keep track of your achievements and daily usage through a personal account.
## How We built it
In a team of four, we split into three groups: two of us focused on the front end with Kivy, one on UI design, and one on the backend with TensorFlow. Within these groups, we divided responsibilities such as gathering data to train the neural network. The training photos were taken of waste picked out of the (relatively unsorted) bins around Goodwin Hall at Queen's University; roughly 200 photos were taken per subcategory, amounting to quite a bit of data by the end. This data was used to train the neural network backend. The front end was programmed in Python using Kivy. After the front end and backend were completed, we connected them so data flows seamlessly from end to end: a user takes a photo of whatever they want sorted, the photo is fed to the neural network, and a message is displayed on the front end. Users can also create an account with a username and password to store their number of scans and achievements.
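To give a flavor of the training setup, here is a minimal Keras-style sketch; the folder layout, image size, class count, and epochs are illustrative assumptions rather than our exact configuration.

```python
# Minimal Keras sketch of the kind of classifier described above, assuming
# photos are sorted into per-category folders; all sizes are illustrative.
import tensorflow as tf

train = tf.keras.utils.image_dataset_from_directory(
    "waste_photos/", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),  # e.g. paper / plastic / organic / landfill
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])
model.fit(train, epochs=10)
```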
## Challenges We ran into
The two hardest challenges we had to overcome as a group were building an adequate dataset and learning the Kivy framework. In our first attempt at gathering a dataset, the images we pulled from online turned out to be too noisy when grouped together. This caused the neural network to overfit, relying too heavily on spurious patterns. We decided to fix this by gathering our own data: I went around Goodwin Hall and dug into the bins to gather "data". After washing my hands thoroughly, I took ~175 photos of each category to train the neural network on real data, which overcame that challenge. The second challenge my team and I ran into was our limited familiarity with Kivy; most of us had only started learning it the day of QHacks. This proved quite time-consuming, but we simply pushed through until we got the hang of it.
## 24 Hour Time Lapse
**Below is a 24-hour time-lapse of my team and me at work. The naps on the tables weren't the most comfortable.**
<https://www.youtube.com/watch?v=oyCeM9XfFmY&t=49s>
|
## Inspiration
I came up with this project to solve a big problem we face nowadays. Plastic pollution is a critical threat to our world, and we need to act as soon as possible to save it. Globally, there are about 8.3 billion tons of plastic, of which 6.3 billion tons are pure trash. This project aims to detect different types of plastic and collect it from our cities and from locations considered dangerous for human beings.
## What it does and how I built it
Plastic Buster is the idea of a robot capable of moving autonomously through different locations, detecting and classifying different types of plastic, and collecting it from cities and other locations that are dangerous or inaccessible for human beings, such as power plants.
The robot is controlled by a central Arduino board programmed in the C language. There are actually two Arduinos working together, connected over I2C (A4->SDA and A5->SCL). One, an Arduino UNO, is connected to an L293D motor shield to control the motors, to two HC-SR04 ultrasonic sensors, to two IR sensors, and to a buzzer. The other, a Nano, is connected to a TCS3200 RGB color sensor.
The robot is capable of moving autonomously thanks to the two ultrasonic sensors, and it is very accurate with the addition of the two IR sensors on the front edges. When the robot detects an obstacle with both the lower and upper ultrasonic sensors, the obstacle is large, so it changes direction. When it detects an obstacle with only the lower sensor, it analyzes the color of the object; if that color is already stored in the Arduino Nano's database, it sends a signal to the Arduino UNO, which makes an acoustic sound with the buzzer.
For this prototype I used the color sensor to analyze objects, but the final product should have a camera and a spectrometer for better analysis. I wasn't able to use a spectrometer because it costs a lot and I had only a small amount of time to build the robot.
As for the code, I wrote the working code for the Arduino UNO from scratch, but for the color sensor I adapted code written by Xtronical, who used the sensor to detect Skittles colours. I had to modify the code substantially so the system would recognize only a particular color and then send the data to the main Arduino to trigger the buzzer.
## Challenges we ran into
Because of time and funding shortages I wasn't able to build the ultimate version, but I came up with a prototype to demonstrate how the idea can be implemented. Since it was made from scratch, there were various difficulties and problems, from the 3D-printed parts to the hardware to the software. The first challenge was printing the parts in 3D, because I had problems with my printer's filament, but I eventually solved it. Another challenge was connecting the two Arduinos together, as doing so left some of the boards' pins unusable.
## Accomplishments that we're proud of
I'm proud that I finished in time, because with only 48 hours in this hybrid event there was a lot of work to do. I'm also proud of how much I learned during those 48 hours.
## What we learned
During this hackathon I learned a lot. For example, I learned how to manage my time to push my productivity to its peak. I learned in depth how 3D printing works, as I am a beginner in that area. I also learned how EEPROM memory works on Arduino, which I didn't even know existed.
## What's next for Plastic Buster
As I said, my goal was to build something to make this world a better place to live. I think this project has lots of potential and can become reality without much difficulty. The final version can be deployed in various locations with various benefits. The project was born to clean cities of plastic, but improved versions could also detect other materials. With different hardware it could be deployed in oceans and other locations unreachable by humans because of safety issues.
If deployed in cities, it can be 100% green, with a recharging station powered by solar energy.
|
winning
|
## Inspiration
Old-school text adventure games.
## What it does
You play as Jack and get to make choices to advance the adventure. There are several possible paths to the story and you will come across obstacles and games along the way.
## How I built it
Using Python.
## Challenges I ran into
Trying to use Tkinter to create a user interface. We ended up just doing in-text graphics.
## Accomplishments that I'm proud of
This was the first Python project for two of our team members. We are proud that we produced working code.
## What I learned
Catherine: learned how to code in Python (mainly the syntax)
Jennifer: how to organize code so that it produces a functioning game
Aria: finally learned how to use GitHub!
## What's next for The Text Adventure of Jack
Adding user interface and sound to make the game more visually and aurally immersive.
|
## Inspiration
My college friends and brother inspired me to make this project. It is an addictive game, the same one we used to play on keypad phones.
## What it does
This is a 2D game that includes tunes, graphics, and much more. You can command the snake to move up, down, right, and left.
## How we built it
I built it using the pygame module in Python.
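For anyone curious, the core of such a game is a simple event loop; below is a bare-bones, hedged pygame skeleton (the grid size and speed are arbitrary placeholders, not the game's exact values).

```python
# Bare-bones pygame loop sketch of the structure a snake game typically
# uses; window size, cell size, and tick rate are arbitrary here.
import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))
clock = pygame.time.Clock()
direction = (20, 0)  # moving right, one 20px cell per tick

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            if event.key == pygame.K_UP:
                direction = (0, -20)
            elif event.key == pygame.K_DOWN:
                direction = (0, 20)
            elif event.key == pygame.K_LEFT:
                direction = (-20, 0)
            elif event.key == pygame.K_RIGHT:
                direction = (20, 0)
    # ... move the snake by `direction`, check collisions, draw frame ...
    clock.tick(10)  # game speed
pygame.quit()
```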
## Challenges we ran into
Many bugs appeared, such as runtime errors, but I finally managed to fix all of these problems.
## Accomplishments that we're proud of
I am proud of my project: I built a user-interactive program on my own.
## What we learned
I learned to use pygame in Python, and this project drew me toward Python programming.
## What's next for Snake Game using pygame
Next, I am working on various Python projects, such as an alarm, a virtual assistant program, a Flappy Bird program, a health management system, and a library management system.
|
## Inspiration
This was our first hackathon ever, so we wanted to ease ourselves into the process and learn how to work as a group on a single project.
## What it does
It uses scanner input and random number generation as its main mechanics. The player inputs their next action to proceed, and randomness comes into play through a chance effect that determines how much damage the player or enemy deals, along with how often health pots drop.
## How we built it
We used Java and knew we would need a while loop to keep the game running turn after turn.
## Challenges we ran into
One challenge was the chance system: we weren't quite sure how to make it work, but settled on checking whether a random number falls within a certain range.
Another challenge was working together as a team, since we had different ideas about the design and how the game should operate.
## Accomplishments that we're proud of
Finishing our first-ever hackathon!!
## What we learned
We learned how to synergize as a team better and how to work within a very small time crunch efficiently.
## What's next for Text-Based Dungeon Adventure
We're planning to add a boss fight of some sort and maybe increase the enemies' difficulty as the dungeon progresses, as well as create some new dungeons and add graphics.
|
partial
|
## Inspiration
As the development of technology skyrockets, security and privacy become increasingly important, not only for people in the tech industry, but especially for the general public, as they are most prone to online theft, scams, and others. This has led to the development of, for example, extensive investment into cybersecurity, Web 3.0, and new regulations to protect consumers. One of the first major breakthroughs in the field of cryptography was the Enigma machine, designed and employed in World War II by the Germans to securely encrypt and decrypt sensitive military information. Because the Enigma cipher machine was such a significant historical device, we thought of it as the perfect case study for introducing people to the field of cybersecurity.
## What it does
The 3D Enigma machine rotates according to the letters inputted by the website user. Different letter inputs will result in different movements of each rotor, accurately simulating a real Enigma machine. The plain text and encrypted text are clearly shown on the page, underneath the 3D model. A table indicates the input, the output, and shows the role that each rotor plays in encoding the letter.
## How we built it
The accurate and functioning 3D model of Enigma was completely built from scratch using AutoCAD and its movement with Three.js. The clean, minimalist website was built with Node.js and Bootstrap.
## Challenges we ran into
With only a two-person team, we pushed full speed ahead to complete even more work than a typical four-person team. This is our passion project, the dream we were finally able to fulfill at TreeHacks 2022, and we were absolutely willing to work tirelessly for 36 hours straight.
## Accomplishments that we're proud of
UI / UX Design
Realistic, accurate, and functioning complex 3D Enigma model
Educational graphics displaying key Enigma functionality
## What's next for 3nigma
An even more detailed 3D Enigma machine model with more functionality and customization. Explanations of the information displayed on the website. Hosting the website on a domain for everyone to access and test out. Reducing the graphical intensity of the website and the model.
|
# enigma
*Built for Hack Western 4*
## What is this?
A secure (untested) one-time pad webapp for communicating using pre-shared private keys. Keys and messages are stored as encrypted text in a database, and clients may add messages with their own private keys, and retrieve messages stored with the same key.
Upon retrieval, messages are destroyed and the data is erased.
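For illustration, the core one-time pad primitive can be as small as the hedged Python sketch below; this is the textbook XOR construction, not necessarily this app's exact code.

```python
# Illustrative one-time pad primitive: XOR with a random key at least as
# long as the message. Reusing a key breaks the scheme entirely.
import secrets

def make_key(n: int) -> bytes:
    return secrets.token_bytes(n)

def xor_pad(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "pad must be at least message length"
    return bytes(b ^ k for b, k in zip(data, key))

key = make_key(32)
ct = xor_pad(b"meet at dawn", key)  # encrypt
pt = xor_pad(ct, key)               # decrypt (XOR is its own inverse)
```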
## Built with:
* Python
* [CherryPy](http://cherrypy.org/)
* SQL
* [Bootstrap](https://getbootstrap.com/)
* a lot of Googling and StackOverflow
## To-Do
* [ ] move the encryption and decryption client-side (a.k.a. learn JS) to prevent transmission of unencrypted data
* [ ] figure out how to generate `iv` based on the inputted code, instead of generating it at class instantiation
* [ ] figure out how to use AWS EC2 to host this so it's accessible past the local network
|
## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front end, we used React, which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication with the resource-intensive back-end tasks, we used RabbitMQ as the message broker and Celery as the work queue. These technologies let us efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E-2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts, which allows us to quickly retrieve data without re-processing it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication: it allows us to keep the HTTP connection open, and once the work queue finishes processing data, it sends a notification to the React client.
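As a hedged illustration of this work-queue pattern, here is a minimal Celery task wired to a RabbitMQ broker; the app name, task body, and broker URL are placeholders rather than our production code.

```python
# Hedged sketch of the Celery-over-RabbitMQ pattern described above; the
# names and broker URL are assumptions, not the project's actual code.
from celery import Celery

app = Celery("dreamai", broker="amqp://guest@localhost//")

@app.task
def generate_page(prompt: str) -> dict:
    # The expensive GPT-3 / DALL-E-2 calls would live here; the result
    # gets cached (e.g. in MongoDB), and a socket.io event then notifies
    # the waiting React client.
    return {"prompt": prompt, "status": "done"}
```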
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users.
|
losing
|
# FocusCam: The Attention-Guardian Productivity App
## About the Project
FocusCam uses computer vision and your webcam to monitor your attention levels, ensuring you stay engaged with tasks and minimizing distractions.
## Inspiration
The idea was born from our team's struggle with maintaining focus while studying and doing homework. I pondered, "What if our computers could notify us when we're losing attention?" Hence, FocusCam.
## Key Learnings
* Computer Vision Basics: Delved into how cameras detect facial nuances using vector math and tracking facial features.
* User Privacy: Ensured data was processed locally without storing any footage.
* User Interface Design: Created a user-friendly and non-intrusive interface.
## How we built it
* Tools: Utilized OpenCV for computer vision and Electron for the app interface.
* Detection Algorithm: Developed to distinguish between reading/thinking and distraction.
* Real-time Feedback: Users receive gentle reminders to refocus after a minute of distraction.
* Privacy: Data is processed locally; no footage is saved or transmitted.
* Challenges: Gaze Tracking was quite difficult initially.
## Wrap-Up
FocusCam, from a simple idea to a robust tool, showcases how tech can improve daily productivity. The creation journey highlighted the essence of persistence and user-centric design.
|
## Inspiration:
Often, students intend to briefly check their phone for social media but end up scrolling on TikTok or Instagram for countless hours, resulting in little to no work being completed. Our project aims to prevent this behaviour by alerting the user to put down their distractions and focus back on the camera frame, promoting productivity as a result.
## What it does:
Uses face detection to ensure you are focused on your computer screen. If no face is detected for more than 5 seconds, an alert sounds repeatedly until a face is back in the frame.
## How we built it:
This project was implemented using the OpenCV (cv2) library in Python, enabling face detection through a camera. An XML file from online was used to help the program mathematically determine whether or not a face was in the camera frame.
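A minimal, hedged version of this approach looks like the sketch below, using OpenCV's bundled frontal-face cascade; the exact XML file, timeout handling, and alert sound in the real app may differ.

```python
# Minimal sketch of the Haar-cascade approach described above; the
# 5-second window matches the description, the alert hook is a placeholder.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
last_seen = time.time()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        last_seen = time.time()
    elif time.time() - last_seen > 5:
        print("\a")  # placeholder for the repeating alert sound
cap.release()
```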
## Challenges we ran into:
Our application would often mistake background objects (i.e. lights, posters, signs) for faces, making it most effective against a blank background.
## Accomplishments that we're proud of:
This was the first Hackathon for several of us and we're proud that we were able to produce a quality project that could help many people in a limited amount of time.
## What we learned:
We learned how to use the OpenCV (cv2) library in Python to perform live face detection.
## What's next for Hocus Focus:
We would like to eventually release it as a Google Chrome Extension so that at anytime, anyone will be able to use it. For demonstration and judging purposes, the camera footage is displayed on the user's screen, but it will be removed when this application is released more professionally.
|
## Inspiration
We love spending time playing role based games as well as chatting with AI, so we figured a great app idea would be to combine the two.
## What it does
Creates a fun and interactive AI powered story game where you control the story and the AI continues it for as long as you want to play. If you ever don't like where the story is going, simply double click the last point you want to travel back to and restart from there! (Just like in Groundhog Day)
## How we built it
We used Reflex as the full-stack Python framework to develop an aesthetic frontend as well as a robust backend. We implemented 2 of TogetherAI's models to add the main functionality of our web application.
## Challenges we ran into
From the beginning, we were unsure of the best tech stack to use since it was most members' first hackathon. After settling on using Reflex, there were various bugs that we were able to resolve by collaborating with the Reflex co-founder and employee on site.
## Accomplishments that we're proud of
All our members are inexperienced in UI/UX and frontend design, especially when using an unfamiliar framework. However, we were able to figure it out by reading the documentation and peer programming. We were also proud of optimizing all our background processes by using Reflex's asynchronous background tasks, which sped up our website API calls and overall created a much better user experience.
## What we learned
We learned an entirely new but very interesting tech stack, since we had never even heard of using Python as a frontend language. We also learned about the value and struggles that go into creating a user friendly web app we were happy with in such a short amount of time.
## What's next for Groundhog
More features are in planning, such as allowing multiple users to connect across the internet and roleplay on a single story as different characters. We hope to continue optimizing the speeds of our background processes in order to make the user experience seamless.
|
losing
|
## Inspiration
At Hack the North 11, dreaming big is a key theme. I wanted to play off of this theme, in combination with my growing interest in AI, to create a chill platform to help people realize and achieve their best *literal* dreams!
I enjoy listening to ambient music to wind down before I go to sleep, and I'm sure I'm not the only one who does, so I wanted to create something where people could, with the help of AI, make their own custom ambient tracks to listen to before or as they're falling asleep, overall enriching their pre-sleep routine and (hopefully) having better dreams.
## What it does
The user, when making a custom track, enters up to three keywords (or "ambiances") that are then provided to the AI music generating platform soundraw.io to create ambient music that closely aligns with their desired sound. There are also pre-made "templates" of ambient tracks that a user can easily click play and listen to at their leisure.
## How we built it
AI ambient tracks were generated using [soundraw.io](https://soundraw.io/). The web app interface was built using the Vue.js framework, with additional JavaScript, HTML, and CSS code as needed.
## Challenges we ran into
In all honesty, this project was mostly to push my own limits and expand my knowledge in front-end development. I had very little experience in that area of development coming into HTN 11, so I faced a number of challenges in building an interface from scratch, such as playing around with HTML and CSS for the first time and properly creating transitions between different pages based on which icons I clicked.
## Accomplishments that we're proud of
I have never done a solo hack before, nor have I ever built a web app interface myself, so these were some major personal milestones that I am very proud of and I did a lot of learning when it came to front end development!
## What we learned
How to create a web app interface from scratch, more about AI text-to-sound.
## What's next for REMI
Future improvements could include (but are not limited to):
* Refining the appearance of the interface (i.e. making it cleaner and prettier)
* Providing actual functionality to the next song/previous song buttons
* Matching the track slider element to the actual progress of the track
* Ability to go back in and edit a custom track after generating it
* Ability to delete custom tracks
* The capability to search and add other pre-made templates
* Allowing user more freedom to customize track icons (ex: choose images for icons, rather than just colors)
|
*The entirety of this project was bootstrapped over the course of the TreeHacks weekend. Thank you so much to the organizers!*
## Inspiration
**Boring music kills.** *Let's face it: dull music can kill a good vibe, especially in restaurants or bars.* Our goal? To give restaurants a royalty-free, unique way to generate new music that always adapts to and matches the mood of their customers, elevating their experience through complete audial immersion in the restaurant's atmosphere. Picture this: a dynamic soundtrack that not only enhances the dining experience but also immerses everyone in the restaurant's unique vibe.
With a psychology student and a professional musician on our team, we understand that music is more than just background noise: it's a mood shifter. The psychologist sees music as a way to create a personalized, almost therapeutic experience for the audience, while our musician knows firsthand the power of setting the right tone.
## What it does
On check-in, Jukebox polls customers on their current moods and uses the results to generate custom music tracks that match the energy of everyone in the restaurant. We use **generative music AI** (Suno) to produce the perfect music for the *vibe*. Whenever someone enters a new mood, we record their input, and at regular intervals we regenerate and adapt the streamed music to the current customers.
## How we built it
* Writing **custom selenium code** to navigate the AI music generation website and download audio (a rough sketch of this follows the list below)
* Developing a **custom streaming method** to fetch and blend the next songs automatically
* Building a split user/restaurant site to provide an interface for users to vote and restaurants to stream music.
* Iteratively designing and prototyping a welcoming interface in **Figma** and integrating front-end.
* **Deploying our app onto Firebase** for easy access from any device
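Here is the promised sketch of the selenium workaround; the URL, CSS selectors, and prompt are invented placeholders, since the generation site's real structure can't be shown here.

```python
# Hedged sketch of the selenium workaround described above; the URL and
# selectors are placeholders, not the real site's structure.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example-music-ai.test/generate")  # placeholder URL

prompt_box = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "textarea#prompt")))
prompt_box.send_keys("calm jazz for a busy dinner rush")
driver.find_element(By.CSS_SELECTOR, "button#generate").click()

# Wait for the download link to appear, then grab its href for streaming.
link = WebDriverWait(driver, 120).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "a.download")))
audio_url = link.get_attribute("href")
driver.quit()
```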
## Challenges we ran into
* **No straightforward API** to download/access AI-generated music. This meant we had to spend a lot of time finding workarounds that didn't sacrifice audio quality.
* Maintaining a continuous stream of different music segments and smoothly linking those generated songs together.
## Accomplishments that we're proud of
* We developed a **custom streaming method** to stream limitless server-sided music files to our client side.
* We implemented a method for **live download and upload of AI music**
* Clean, **well-designed** UI experience for ease of use on customer and restaurant side.
* Intentional design into **security and reliability.**
In total, we are really proud to have been able to integrate so many parts of system into a deployed solution we can test with and demo like with crowds on demo day!
## What we learned
Lesson learned: Streaming is hard! (Watch your head(ers)!) We had to navigate some messy sites in hacky ways to get the job done. **Most sites are not built cleanly for selenium interactions.** Lastly, security is **hard.**
## What's next for Jukebox
We never settle:
* Further development in the separation of the API into many restaurants design
* Expansion into the **therapeutic aspect of music generation**
* Further development on **audio quality** and consistency
|
## Inspiration
What if you could automate one of the most creative performances that combine music and spoken word? Everyone's watched those viral videos of insanely talented rappers online but what if you could get that level of skill? Enter **ghostwriter**, freestyling reimagined.
## What it does
**ghostwriter** lets you skip through pre-selected beats; it then listens to your bars, suggesting possible rhymes to help you freestyle. With the 'record' option, you can listen back to your freestyles, upload them to share with your friends, and listen to your friends' freestyles.
## How we built it
To build **ghostwriter** we used Google Cloud Services for speech-to-text transcription, the Cohere API for rhyming suggestions, Socket.io for real-time communication between frontend and backend, Express.js for the backend, and the CockroachDB distributed SQL database to store transcriptions as well as the audio files. We used React for the front end, styled with the Material UI library.
## Challenges we ran into
We had some challenges detecting where the end of a bar might be, as different rhyming schemes and flows have varying pauses. Instead, we decided to display rhyming suggestions for each word, giving the user the freedom to decide when to end one bar and start another. Another issue was the latency of the API calls: we had to make sure the data was retrieved in time for the user to think of their next bar. Finally, we had some trouble getting audio media players to record the user's freestyle along with the background music, but we found a solution in the end.
## Accomplishments that we're proud of
We are really proud to say that what we created during the past 36 hours meets its intended purpose. We put all the components of this project in motion so the software successfully hears our words and generates rhyming suggestions in time for the user to think of another line and continue their freestyle. Additionally, using technologies that were new to us and coding away until we reached our goal expanded our technological expertise.
## What we learned
We learned how to use React and arrange text to match our desired styling. Next, we learned how to interact with numerous APIs (including Cohere's) to get the data we want, organized in the way that is most efficient for us to display to the user. Finally, we learned to freestyle a bit better ourselves.
## What's next for Ghostwriter
For **ghostwriter**, we aim to have a higher curation for freestyle beats and to build a social community to highlight the most fire freestyles. Our goal is to turn today's rappers into tomorrow's Hip-Hop legends!
|
losing
|
## Inspiration
In a land rife with terrible disasters, mental health concerns, and buggy code, one savior rises above them all. That is the MemeBot, the one who will comfort you in your times of need.
Memes work together to provide a satirical outlook on the world, allowing its audience to maintain a critical perception of the issues surrounding them. We thought: what better way to streamline their outreach than by creating a chat bot that will give them to you whenever you need?
## What it does
The MemeBot will send you an assortment of top hand-picked memes no matter how you ask it (and it will learn from its mistakes through basic machine learning). It will send you compliments if you say anything else. There is an associated website, hosted on Heroku, with links to the Facebook page and the GitHub repo.
## How we built it
*Chat Bot:* The foundation of the chat bot is built on Python Flask, the Messenger API, the Facebook API, and the wit.ai API. User IDs are provided by the Messenger API. We then used Flask with pymessenger to listen for and send messages to those user IDs. We used the wit.ai API for basic machine learning, training our chat bot to detect whether the user was asking for memes (in multiple different ways: full phrases, different spellings, etc.). If the user was not asking for a meme, MemeBot sends them a compliment (of varying kinds). We can also retrieve the posts with the most likes (and their attached memes) using the Facebook API.
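A condensed, hedged sketch of this flow is below; the tokens, the `ask_meme` intent name, and the exact shape of the wit.ai response are assumptions for illustration (the wit.ai response format varies by API version).

```python
# Sketch of the Flask + pymessenger + wit.ai flow described above; the
# tokens and intent name ("ask_meme") are placeholders.
from flask import Flask, request
from pymessenger.bot import Bot
from wit import Wit

app = Flask(__name__)
bot = Bot("PAGE_ACCESS_TOKEN")
wit = Wit("WIT_SERVER_TOKEN")

@app.route("/webhook", methods=["POST"])
def webhook():
    for entry in request.get_json().get("entry", []):
        for msg in entry.get("messaging", []):
            if "message" in msg and "text" in msg["message"]:
                sender = msg["sender"]["id"]
                # Response parsing is hypothetical; shape depends on Wit version.
                intents = wit.message(msg["message"]["text"]).get("intents", [])
                if intents and intents[0]["name"] == "ask_meme":
                    bot.send_text_message(sender, "Here comes a meme!")
                else:
                    bot.send_text_message(sender, "You look great today!")
    return "ok", 200
```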
*Subscription feature:* Using MongoDB's mLab service, our goal was to create unique schedules for each user: a user could subscribe to MemeBot, sign up for memes on daily or chosen days, and otherwise schedule all their meme/compliment interactions with MemeBot. We created a NoSQL database storing Facebook user IDs and preferred meme times/scheduled days. We managed the data using PyMongo and scheduled multiple events at once with Python's threading timers.
*Collecting Memes:*
We used the Facebook Graph API to "log on" to Facebook with a user token and then access a meme page token. From the page token, we gather a given number of the most recent posts (specified by the user) and extract the meme image URLs, picture previews, and total likes. This lets us gather the most popular and recent memes and send them to the user.
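Sketched with plain `requests`, the collection step looks roughly like this; the Graph API version, field list, and token are placeholders, not our exact query.

```python
# Rough sketch of pulling recent posts from a page via the Graph API;
# the page id, fields, API version, and token are placeholders.
import requests

def recent_memes(page_id, token, limit=10):
    url = f"https://graph.facebook.com/v2.11/{page_id}/posts"
    params = {"fields": "full_picture,likes.summary(true)",
              "limit": limit, "access_token": token}
    posts = requests.get(url, params=params).json().get("data", [])
    return [(p.get("full_picture"),
             p.get("likes", {}).get("summary", {}).get("total_count", 0))
            for p in posts]
```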
*Website:*
*Website:* The website was built using HTML and CSS. We modified a basic template to include memes that give users a preview of what MemeBot is about. We used Font Awesome for the GitHub and Facebook icons, which link to their respective pages. The memes on the website scroll to the left and loop around to set the "feel" of a meme website.
## Challenges we ran into
The biggest challenge we faced was that our team members had different coding environments: some of us coded in Python 2 while others coded in Python 3, and we struggled to run the combined code correctly. Every time we hit a merge conflict or a change in code, we had to adjust it depending on the computer and user. We didn't fully understand the problem until after we finished our respective sections of the project; eventually we realized the problematic code ran on Python 2 but not Python 3, and switched the affected computer to Python 2.
The Facebook Graph API was also confusing to use, especially since it was hard to find developer examples to follow. We had to search around and try a few options, and ran into a lot of bugs. It also took us quite a bit of time to understand the syntax used in the Graph API and convert it to Python code.
## Accomplishments that we're proud of
I think the most impressive part was that, when faced with dividing the work into smaller individual parts with a new team, we all took the initiative to add our own functionality. We successfully distributed the tasks and coordinated using git to bring the different components together into a multifaceted MemeBot with a website and purpose-built parts. We are proud of learning multiple APIs, languages, and servers on the spot, including hosting our chatbot on Heroku, which lets users from anywhere in the world visit our website and chat with our bot.
When we first decided on our topic, the Facebook Graph API frustrated us the most, and we had many difficulties using it. At one point we considered switching to a different API and moving MemeBot to Slack or another platform, but we persevered until we got the Facebook Graph API to function.
## What we learned
Throughout this hackathon, a small, funny idea quickly snowballed into an incredible learning experience. As we came up with additional features, we realized we had to learn a variety of APIs and tools. From past experience, we knew right off the bat that we needed to have a solid understanding of Git and put in the time to test and coordinate. When we wanted to parse through a page's data, we looked into Facebook's Graph API. In order to develop a subscription service, we learned to create a NoSQL database and using threads to address multiple users at once. The most exciting learning experience was implementing machine learning through natural language processing, especially after attending the Neural Network workshop.
## What's next for MemeBot
MemeBot has a bright future. It will expand to include memes from other public pages and sift among them to gather the best memes on Facebook. It could also expand to other sites, including Reddit, Instagram, Google, and iFunny. We will improve the subscription service so users can receive daily messages for *topics of interest*: a user types in a word/phrase/sentence, and MemeBot sends a group of memes that best fits it. We will also improve the website with more memes and functionality, such as a share button and a login/logout section.
|
## About Memeify
With the press of a button, a new, original – and deep-fried – meme is generated from the depths of a random-image generator site and OpenAI, and it's guaranteed to be funny!
This website generates memes with top and bottom text inspired by the image, afflicted with varying levels of deep-frying to make them extra beautiful.
## How it was made
Memeify has a Python backend that first loads an image from a random-image-generator site, then uses OpenAI's computer vision to identify the image and OpenAI's text generation to create the meme text. We also used the Pillow library to caption and filter the image. The front end is built mainly with JavaScript, combining HTML + CSS and React.js to create the glorious website seen in the images.
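The captioning step with Pillow can be sketched as below; the font file and sizing heuristics are assumptions (Pillow needs a real `.ttf` path), not Memeify's exact styling.

```python
# Small Pillow sketch of the caption step described above; the font path
# and sizes are illustrative placeholders.
from PIL import Image, ImageDraw, ImageFont

def caption(path, top, bottom, out="meme.png"):
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("Impact.ttf", size=img.height // 10)
    w = img.width
    # White text with a black outline, anchored top-center / bottom-center.
    draw.text((w / 2, 10), top.upper(), font=font, fill="white",
              anchor="ma", stroke_width=2, stroke_fill="black")
    draw.text((w / 2, img.height - 10), bottom.upper(), font=font,
              fill="white", anchor="md", stroke_width=2, stroke_fill="black")
    img.save(out)
```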
## Challenges along the way
As with most full-stack projects, connecting the JavaScript frontend to the Python backend was the hardest part; it involved learning how to use fetch in React.js and Flask to send data and communicate between the back end and front end. Secondly, none of us had written scripts that called AI models before, so that was also a learning curve. Overall, we are happy with how our project turned out and grateful to have learned so much along the way.
|
This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when the questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes and CORS brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data>
|
losing
|
## Inspiration
Polycystic Ovary Syndrome (PCOS) is a disorder caused by an imbalance of hormones that affects 5-10% of the female population. Women who experience PCOS have an increased risk of type 2 diabetes, high blood pressure, high cholesterol, anxiety, and depression. Like many women's disorders, PCOS commonly receives a delayed diagnosis, or none at all, due to lack of awareness and its wide range of symptoms. As a consequence, many women unknowingly suffer the health risks without proper treatment.
We have developed an application for the You.com search engine that compares your health data with information about women's health from your searches, allowing you to monitor your health status and identify potential indicators of PCOS.
## What it does
When you search for anything related to women's sexual and reproductive health, our application pops up with 3 key features: (1) comprehensive information on PCOS gathered by web scraping, (2) a comparison of these symptoms with information from your health app, and (3) a donation feature that allows you to contribute to organizations dedicated to providing resources to women with the condition.
## How we built it
Using You.com's Developer Dashboard, we utilized their editor to design the user interface, incorporating two APIs for personalized health data and generalized PCOS information. Furthermore, we integrated checkbook.io to enable donations directly to community organizations just with payee info!
## Challenges we ran into
We encountered challenges while incorporating the API into the You.com codebase, primarily due to the limitations of the "Form" components and difficulties placing components precisely as desired. Additionally, the integration of Checkbook.io was challenging due to the steps involved with user authentication and bank account creation.
## Accomplishments that we're proud of
The donation app tile is fully functional and we can track the donations given by the user through email and the Sandbox environment.
## What we learned
We learned about how to create and integrate APIs, front-end development using You.com, and functionality of HTTP POST and GET methods while deepening our knowledge of PCOS and its impact on women's health.
## What's next for YouCare
We aim to expand this tool to more topics in women's health, like STDs, pregnancy, and sex education, each with unique features to improve awareness. Its capabilities can grow to address all queries that deal with women's health, such as providing advice on topics like periods or menstrual products. For instance, it could show a summary of your cycle and suggest products that might be useful for you, physicians you should consider, etc.
|
Introducing Melo-N – where your favorite tunes get a whole new vibe! Melo-N combines "melody" and "Novate" to bring you a fun way to switch up your music.
Here's the deal: You pick a song and a genre, and we do the rest. We keep the lyrics and melody intact while changing up the music style. It's like listening to your favourite songs in a whole new light!
How do we do it? We use cool tech tools like Spleeter to separate vocals from instruments, so we can tweak things just right. Then, with the help of the MusicGen API, we switch up the genre to give your song a fresh spin. Once everything's mixed up, we deliver your custom version – ready for you to enjoy.
Melo-N is all about exploring new sounds and having fun with your music. Whether you want to rock out to a country beat or chill with a pop vibe, Melo-N lets you mix it up however you like.
So, get ready to rediscover your favourite tunes with Melo-N – where music meets innovation, and every listen is an adventure!
|
## Inspiration
That's all.
## What it does
## How we built it
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
Ember. Working with the front end/back end and Firebase.
## What's next for ClipIt
The world.
|
winning
|
## Inspiration
Many students rely on scholarships to attend college. As students in different universities, the team understands the impact of scholarships on people's college experiences. When scholarships fall through, it can be difficult for students who cannot attend college without them. In situations like these, they have to depend on existing crowdfunding websites such as GoFundMe. However, platforms like GoFundMe are not necessarily the most reliable solution as there is no way of verifying student status and the success of the campaign depends on social media reach. That is why we designed ScholarSource: an easy way for people to donate to college students in need!
## What it does
ScholarSource harnesses the power of blockchain technology to enhance transparency, security, and trust in the crowdfunding process. Here's how it works:
Transparent Funding Process: ScholarSource utilizes blockchain to create an immutable and transparent ledger of all transactions and donations. Every step of the funding process, from the initial donation to the final disbursement, is recorded on the blockchain, ensuring transparency and accountability.
Verified Student Profiles: ScholarSource employs blockchain-based identity verification mechanisms to authenticate student profiles. This process ensures that only eligible students with a genuine need for funding can participate in the platform, minimizing the risk of fraudulent campaigns.
Smart Contracts for Funding Conditions: Smart contracts, powered by blockchain technology, are used on ScholarSource to establish and enforce funding conditions. These self-executing contracts automatically trigger the release of funds when predetermined criteria are met, such as project milestones or the achievement of specific research outcomes. This feature provides donors with assurance that their contributions will be used appropriately and incentivizes students to deliver on their promised objectives.
Immutable Project Documentation: Students can securely upload project documentation, research papers, and progress reports onto the blockchain. This ensures the integrity and immutability of their work, providing a reliable record of their accomplishments and facilitating the evaluation process for potential donors.
Decentralized Funding: ScholarSource operates on a decentralized network, powered by blockchain technology. This decentralization eliminates the need for intermediaries, reduces transaction costs, and allows for global participation. Students can receive funding from donors around the world, expanding their opportunities for financial support.
Community Governance: ScholarSource incorporates community governance mechanisms, where participants have a say in platform policies and decision-making processes. Through decentralized voting systems, stakeholders can collectively shape the direction and development of the platform, fostering a sense of ownership and inclusivity.
## How we built it
We used React and Nextjs for the front end. We also integrated with ThirdWeb's SDK that provided authentication with wallets like Metamask. Furthermore, we built a smart contract in order to manage the crowdfunding for recipients and scholars.
## Challenges we ran into
We had trouble integrating with MetaMask and ThirdWeb after writing the Solidity contract: our configuration was throwing errors, and we had to configure the HTTP/HTTPS link correctly.
## Accomplishments that we're proud of
Our team is proud of building a full end-to-end platform that incorporates the very essence of blockchain technology. We are very excited that we are learning a lot about blockchain technology and connecting with students at UPenn.
## What we learned
* Aleo
* Blockchain
* Solidity
* React and Nextjs
* UI/UX Design
* Thirdweb integration
## What's next for ScholarSource
We are looking to expand to and incorporate multiple blockchains, like Aleo. We are also looking to onboard users as we continue to expand and add new features.
|
# Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However the vibrant campus life has recently become endangered as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a few key technologies: React (front end), Express.js/Next.js (back end), Firestore (real-time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses, and we elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was also necessary to generate a large number of sample users to test our project; to do so, we scraped the Stanford course catalog to build a wide variety of classes to assign to our generated users. To provide more robust tests, we built the generator to pick a certain major or category of classes while randomly assigning classes from other categories with some probability. Using this Python library, we can generate robust, dense networks to test our graph connection scores and visualization.
### Backend Infrastructure
We needed a robust database infrastructure to handle thousands of nodes, so we explored two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph "fingerprints" that represent a student's course identity; we wanted to take advantage of web3 storage, since it allows students to permanently store and easily access their course identity. We also made use of Firebase to store the dynamic nodes and connections between users and courses.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development Python server. A Node.js server acts as a proxy and REST API endpoint, and Vercel hosts our front end.
### Graph Construction
Treating the Firebase database as the source of truth, we query it for all user data, namely usernames and which classes each user took in which quarters. From this data, we construct a graph in Python using NetworkX, in which each person and course is a node labeled "user" or "course" respectively. We then add an edge between every person and every course they took, with the edge weight corresponding to how recently they took it.
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
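A minimal sketch of this build-and-cache step, assuming a simple shape for the user records pulled from Firebase (the `recency_weight` helper here is a hypothetical stand-in for our actual weighting):

```python
import json
import networkx as nx
from networkx.readwrite import json_graph

def recency_weight(quarter_index, current_index):
    # Hypothetical weighting: more recent quarters get weights closer to 1.
    return 1.0 / (1 + current_index - quarter_index)

def build_graph(users, current_index):
    G = nx.Graph()
    for user in users:  # records pulled from Firebase
        G.add_node(user["name"], type="user")
        for course, quarter_index in user["courses"]:
            G.add_node(course, type="course")
            G.add_edge(user["name"], course,
                       weight=recency_weight(quarter_index, current_index))
    return G

# Cache the expensive-to-build graph as JSON for quick I/O.
def to_cache(G):
    return json.dumps(json_graph.node_link_data(G))

def from_cache(blob):
    return json_graph.node_link_graph(json.loads(blob))
```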
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
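A sketch of how the two similarity components might be combined; the mixing weight `alpha` is a hypothetical parameter, and `emb` would hold the node embeddings produced by the trained GAT:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def heuristic(G, u, v):
    # Recency-weighted overlap: shared course weight over union course weight.
    cu = {c: G[u][c]["weight"] for c in G.neighbors(u)}
    cv = {c: G[v][c]["weight"] for c in G.neighbors(v)}
    shared = sum(min(cu[c], cv[c]) for c in cu.keys() & cv.keys())
    union = sum(max(cu.get(c, 0), cv.get(c, 0)) for c in cu.keys() | cv.keys())
    return shared / union if union else 0.0

def similarity(G, emb, u, v, alpha=0.5):
    # Blend the GAT embedding similarity with the interpretable heuristic.
    return alpha * cosine(emb[u], emb[v]) + (1 - alpha) * heuristic(G, u, v)
```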
With this rich graph representation, when a user queries, we return the induced subgraph containing the user, their neighbors, and the top k people most similar to them: people they likely have a lot in common with and may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were integrating the GAT and keeping the Firebase graph and the cached graph in sync.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see its surprisingly low latency. None of us had worked with Next.js before, but we were able to ramp up quickly since we had React experience, and we were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several features we would be interested in implementing for Course Connections. One is course recommendation: we discovered that ChatGPT gave excellent course recommendations given previous courses, and we developed some of this functionality but ran out of time for a full implementation.
|
## Inspiration
All four of us are university students and have had to study remotely due to the pandemic. Like many others, we have had to adapt to working from home and were inspired to create something to improve WFH life, and more generally life during the pandemic. The pandemic is something that has affected and continues to affect every single one of us, and we believe that it is particularly important to take breaks and look after ourselves.
It is possible that many of us will continue working remotely even after the pandemic, and in any case, life just won’t be the same as before. We need to be doing more to look after both our mental and physical health by taking regular breaks, going for walks, stretching, meditating, etc. With everything going on right now, sometimes we even need to be reminded of the simplest things, like taking a drink of water.
Enough of the serious talk! Sometimes it’s also important to have a little fun, and not take things too seriously. So we designed our webpage to be super cute, because who doesn’t like cute dinosaurs and bears? And also because, why not? It’s something a little warm n fuzzy that makes us feel good inside, and that’s a good enough reason in and of itself.
## What it does
Eventy is a website where users are able to populate empty time slots in their Google Calendar with suitable breaks like taking a drink of water, going on a walk, and doing some meditation.
## How we built it
We first divided up the work into (i) backend: research into the Google Calendar API and (ii) frontend: looking into website vs chrome extension and learning HTML. Then, we started working with the Google Calendar API to extract data surrounding the events in the user’s calendar and used this information to identify where breaks could be placed in their schedule. After that, based on the length of the time intervals between consecutive events, we scheduled breaks like drinking water, stretching, or reading. Finally, we coded the homepage of our site and connected the backend to the frontend!
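The core of the backend is the gap-finding logic. A simplified sketch, assuming the events have already been fetched and sorted via the Google Calendar API (the break lengths and titles here are illustrative):

```python
from datetime import datetime, timedelta

# Break options, longest first, so we prefer the biggest break that fits a gap.
BREAKS = [
    (timedelta(minutes=30), "Go for a walk"),
    (timedelta(minutes=15), "Stretch"),
    (timedelta(minutes=5), "Drink some water"),
]

def plan_breaks(events, day_start, day_end):
    """events: sorted (start, end) datetime pairs from the Calendar API."""
    breaks, cursor = [], day_start
    for start, end in events + [(day_end, day_end)]:
        gap = start - cursor  # empty time before the next event begins
        for length, title in BREAKS:
            if gap >= length:
                breaks.append((cursor, cursor + length, title))
                break
        cursor = max(cursor, end)
    return breaks  # each break is then written back into the calendar
```

Because the breaks are only ever placed between `cursor` and the next event's start, they can never overlap existing events, which was one of our key requirements.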
## Challenges we ran into
* Deciding on a project that was realistic given our respective levels of experience, given the time constraints and the fact that we did not know each other prior to the Hackathon
* Configuring the authorization of a Google account and allowing the app to access Google Calendar data
* How to write requests to the API to read/write events
+ How would we do this in a way that ensures we’re only populating empty spots in their calendar and not overlapping with existing events?
* Deciding on a format to host our app in (website vs chrome extension)
* Figuring out how to connect the frontend of the app to the backend logic
## What we learned
We learned several new technical skills like how to collaborate on a team using Git, how to make calls to an API, and also the basics of HTML and CSS.
|
winning
|
# SmartKart
An IoT shopping cart that follows you around, combined with a cloud-based Point of Sale and Store Management system. It provides a comprehensive solution to eliminate lineups in retail stores, engage with customers without being intrusive, and implement detailed customer analytics.
Featured by nwHacks: <https://twitter.com/nwHacks/status/843275304332283905>
## Inspiration
We questioned the current self-checkout model. Why wait in line in order to do all the payment work yourself!? We are trying to make a system that alleviates much of the hardships of shopping; paying and carrying your items.
## Features
* A robot shopping cart that uses computer vision to follow you!
* Easy-to-use barcode scanning (with an awesome booping sound)
* Tactile scanning feedback
* Intuitive user-interface
* Live product management system, view how your customers shop in real time
* Scalable product database for large and small stores
* Live cart geo-location, with theft prevention
|
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
|
## Inspiration
Ever since the creation of department stores, if a customer needed assistance, he or she would have to look for a sales representative. This is often difficult and frustrating as people are constantly on the move. Some companies try to solve this problem by implementing stationary machines such as kiosks. However, these machines can only answer specific questions and they still have to be located. So we wanted to find a better way to feasibly connect customers with in-store employees.
## What it does
Utilizing NFC technology, customers can request help or find more information about their product with just a simple tap. By tapping their phone on the price tag, one's default browser will open up and the customer will be given two options:
**Product link** - Directly goes to the company's website of that specific product
**Request help** - Sends a request to the in store employees notifying them of where you tapped (eg. computers aisle 5)
The customer service representative can let you know that he or she is on the way, with the representative's face displayed on the customer's phone so the customer knows who to look for. Once helped, the customer can provide feedback on the in-store employee. Using Azure Cognitive Services, the customer's feedback comments are translated into a score between 0 and 100, which is then stored. All of this can be performed without an app, just tapp.
## How We Built It
The basis of the underlying simplicity for the customer is the simple "tap" into our webapp. The NFC sticker stores a URL and the product details - allowing users to bypass the need to install a 3rd party app.
The webapp is powered by Node.js running on an Azure VM. Staff members have tablets that connect directly to our database service, powered by Firebase, to see changes in real time.
The analytics obtained by our app are stored in an Azure SQL server. We use Azure Cognitive Services to identify the sentiment level of the customer's feedback, which is stored in the SQL server for future business analysis.
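A sketch of the sentiment-to-score step, shown here with the current `azure-ai-textanalytics` Python SDK for illustration; the endpoint and key are placeholders:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

def feedback_score(comment: str) -> int:
    # Map the positive-sentiment confidence (0.0 to 1.0) onto the 0-100
    # score that gets written to the SQL analytics table.
    doc = client.analyze_sentiment([comment])[0]
    return round(doc.confidence_scores.positive * 100)

print(feedback_score("The rep was friendly and found my laptop fast!"))
```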
## Challenges We Ran Into
Finding a way to record and display the data tapp provides.
## Accomplishments that We're Proud of
* Learning how to use Azure
* Having the prototype fully functional after 36 hours
* Creating something that is easy to use and feasible (no download required)
## What I learned
* How to integrate Azure technology into our apps
* Better understanding of NFC technology
* Setting up a full-stack server - from the frontend to the backend
* nginx reverse proxy to host our two node apps serving with HTTPS
## What's next for Tapp
* We are going to improve on the design and customization options for Tapp, and pitch it to multiple businesses
* We will be bringing this idea forward to an entrepreneurship program at UBC
|
winning
|
## Inspiration
We wanted to make something that linked the virtual and real worlds, but in a quirky way. On our team we had people who wanted to build robots and people who wanted to make games, so we decided to combine the two.
## What it does
Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task to guide Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal.
## How we built it
Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous-rotation servo motors and a USB battery allow Todd to travel in all directions.
Todd receives communications from a custom-built Todd-Controller^TM that provides 4-way directional control via a pair of HC-05 Bluetooth modules.
Our Todd-Controller^TM (built with another Arduino, four pull-down buttons, and a bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world.
## Challenges we ran into
The first of the many challenges we ran into on this "arduinous" journey was having the two Arduinos send messages to each other over Bluetooth. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rate were the same, and then syncing the two with different code to echo messages back and forth.
The second challenge was to build Todd, the clean wiring of which proved to be rather difficult when trying to prevent the loose wires from hindering Todd's motion.
The third challenge was building the Unity app itself. Collision detection was an issue at times: if movements were imprecise or we collided at a weird corner, our object would fly up in the air and behave very strangely, so we resorted to restraining the player's movement to certain axes. We also had to make sure the scene looked nice, with good lighting and a pleasant camera view; we tried out many combinations before deciding that a top-down view of the scene was the optimal choice. Because of the limited time, and because we wanted the game to look good, we used free assets (models and textures only) to our advantage.
The fourth challenge was establishing a clear communication between Unity and Arduino. We resorted to an interface that used the serial port of the computer to connect the controller Arduino with the unity engine. The challenge was the fact that Unity and the controller had to communicate strings by putting them through the same serial port. It was as if two people were using the same phone line for different calls. We had to make sure that when one was talking, the other one was listening and vice versa.
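Our actual listener lives in Unity (C#), but the take-turns discipline on the shared serial line can be illustrated with a short Python sketch using pySerial; the port name and baud rate below are assumptions:

```python
import serial

# One shared "phone line": a single serial port used by both sides.
ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

def send_command(cmd: str) -> str:
    ser.write((cmd + "\n").encode())        # "talk": send one short message
    return ser.readline().decode().strip()  # then "listen" for the echo/ack

# Only issue the next command once the previous one is acknowledged,
# so the two sides never talk over each other on the shared port.
for direction in ["UP", "LEFT", "STOP"]:
    ack = send_command(direction)
    print(f"sent {direction}, got {ack!r}")
```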
## Accomplishments that we're proud of
The biggest accomplishment from this project in our eyes, was the fact that, when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops.
Additionally, the margin of error between the real and virtual Todd's movements was lower than 3%, which significantly surpassed our original expectations of this project's accuracy and goes to show that our vision of a real game with virtual obstacles is achievable.
## What we learned
We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating between Arduinos via Bluetooth and having Unity talk to a microcontroller over serial was a very educational experience.
## What's next for Todd: The Inter-dimensional Bot
Todd? When Todd escapes from this limiting world, he will enter a Hackathon and program his own Unity/Arduino-based mastery.
|
## Inspiration
This project was inspired by the Professional Engineering course taken by all first year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice that was given by St. Peter's Residence at Chedoke, a long term residence care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen.
## What it does
It notifies nurses if a resident falls or stumbles via a push notification to the nurse's phones directly, or ideally a nurse's station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has encountered a fall.
## How we built it
We used a Particle Photon microcontroller alongside a MPU6050 gyro/accelerometer to be able to collect information about the movement of a residents foot and determine if the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we used Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident.
## Challenges we ran into
Upon developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs would position the feet as if the resident is laying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax.
To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in feet movement that are consistent with a fall; a simplified sketch of this check follows the list. The two main patterns we focused on were:
1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact.)
2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by a constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact).
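The production logic runs in C++ on the Particle Photon; the following Python sketch just illustrates the two-pattern jerk check, and all thresholds are hypothetical:

```python
IMPACT_JERK = 30.0     # hypothetical threshold for a sudden impact, m/s^3
STRUGGLE_JERK = 8.0    # hypothetical threshold for the "slow fall" pattern
PERPENDICULAR = 60.0   # degrees of tilt treated as "shoe on its side"

def classify(samples, dt):
    """samples: (accel_magnitude, tilt_degrees) readings, dt seconds apart."""
    jerks = [(samples[i + 1][0] - samples[i][0]) / dt
             for i in range(len(samples) - 1)]
    tilted = samples[-1][1] > PERPENDICULAR

    # Pattern 1: sudden impact, then the shoe ends up perpendicular to upright.
    if max(jerks, default=0) > IMPACT_JERK and tilted:
        return "CRITICAL"   # trigger the Twilio critical alert

    # Pattern 2: slow tilt followed by >= 3 s of constant sharp movement.
    struggle_time = sum(dt for j in jerks if abs(j) > STRUGGLE_JERK)
    if tilted and struggle_time >= 3.0:
        return "WARNING"    # trigger the Twilio warning alert
    return "OK"
```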
## Accomplishments that we're proud of
We are proud of accomplishing the development of an algorithm that consistently is able to communicate to an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole.
## What we learned
We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process.
## What's next for VATS
Next steps would be to optimize our circuits by using the equivalent components but in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a clients shoe. Additionally, we would explore other areas we could store our system inside of a shoe (such as the tongue).
|
## Inspiration
Since we are all stuck at home, it seemed like a good time to bring out the old games we used to play as kids. We are bringing back the wooden labyrinth game but with a modern twist.
## What it does
Similar to the classic wooden labyrinth game, you are to guide your marble (in this case, your bunny) from start to finish. On your journey, you will have to move the joystick in different directions to avoid the holes and dead ends. So have fun watching your bunny hop from side to side when you tilt, and please don’t kill it...
## How we built it
Our A-MAZE-ing labyrinth is built from two Arduino Unos. The Arduinos communicate through Bluetooth transceivers; one acts as the sender while the other acts as the receiver. The sending end uses a joystick shield that controls the labyrinth with the analog sticks. An OLED screen is attached to the joystick for fun animations while the game is running. The receiver side uses two servo motors and two QTI sensors. The motors maneuver the labyrinth while the QTI sensors watch for the marble. If it falls into the wrong hole, one sensor sends a signal to play a sad/angry emoji; when the marble successfully makes it to the end, a different sensor tells the OLED to play the winning animation.
## Challenges we ran into
While creating this project, we ran into both hardware and software problems. On the software side, the two boards would not talk to each other through the Bluetooth modules: information sent from the sender side didn't match on the receiving end, and this problem took longer than anticipated to fix. On the hardware side, the main problem was getting the QTI sensors to detect the marble moving at a fast pace; we tackled it by creating a few tubes to guide the marble when it dropped into a hole.
## Accomplishments that we're proud of
We are proud that we were able to complete the model of our labyrinth. Besides that, we are both satisfied that we completed our first hackathon.
## What we learned
We learned that combining components can cause a lot of problems. When adding the OLED alongside the motors and detection, any delays added for the animations had to complete before anything else could run.
## What's next for our A-MAZE-ing Labyrinth
In the future, we want to redesign our model to make it more visually appealing for the user. Looking even further down the line, it would be a huge achievement to see our product sold in stores and online to beginners and coders of all ages.
|
winning
|
Getting rid of your social media addiction doesn't have to be a painful process. Turn it into a fun challenge with your friends and see if you can win the pot!
## Inspiration
We recognize that social media addiction is a very real issue among today's youth. However, no steps are taken to curb these addictions as it is a painful and unrewarding process. We set out to fix this by turning the act of quitting social media into a fun group activity with a touch of monetary incentives. 😉
## What it does
Start a party with your friends and contribute a small amount of money into the party's prize pool. Once the showdown starts, no accessing social media! Whoever lasts until the end will win the pot!
Users will top up their account using Interac payments (thanks to the Paybilt API) and they will then start a showdown for a certain period of time, with each contributing some amount of money to the prize pool. The winners will split the pot amongst them and if they so choose they can pay out their account balance using Interac (once again thanks to Paybilt).
If you lose the showdown, no worries! Our AI feature will ~~name and shame~~ gently scold you so that you are encouraged to do better!
## How we built it
We used **React.js** for the frontend, and **Express.js** for the backend with **Prisma** ORM to handle our database.
We also used **Paybilt**'s API to handle monetary transactions as well as **Meta**'s API to fetch user data and link accounts through the OAuth & Webhook API's.
**Cohere** was used to generate dynamic status messages for each game.
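A sketch of that status-message generation using Cohere's Python SDK; the prompt wording and parameters here are illustrative, not the exact ones we shipped:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")

def scold(username: str, minutes_lost: int) -> str:
    # The model fills in a playful, gentle status line for the party feed.
    prompt = (f"Write one short, playful, gentle scolding for {username}, "
              f"who broke their no-social-media streak after {minutes_lost} "
              f"minutes. Keep it encouraging.")
    response = co.generate(prompt=prompt, max_tokens=50, temperature=0.9)
    return response.generations[0].text.strip()

print(scold("alex", 42))
```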
Our backend is hosted on **Google Cloud Platform** and our client is hosted on **GitHub Pages**. Our domain is a free .tech domain from MLH (thanks!!).
## Challenges we ran into
Meta's API documentation was very obscure, partially inaccurate, and difficult to implement. The Meta API integration portion of our platform took one person the entire duration of the hackathon to work out.
Before this hackathon, we were unfamiliar with Paybilt, especially since their API is private. During the hackathon we ran into some challenges using their API and receiving callbacks; however, in the end we were able to successfully integrate Paybilt into our platform.
## Accomplishments that we're proud of
We are proud of creating a fully functional platform that not only contains the core features necessary for a minimum viable product, but implements additional ✨fun✨ features to enhance the experience such as AI-generated messages in the party.
## What we learned
We learned how to use Paybilt, Meta's Webhook and OAuth APIs, and Cohere. Overall, we found this an enjoyable experience, as we all gained a lot of knowledge and experience without sacrificing our project or our vision.
## What's next for Screentime Showdown
We plan to integrate other platforms such as TikTok and YouTube Shorts and introduce a more robust set of party configurations to suit various needs (including the ability to toggle how aggressive you want the shaming to be).
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
## Inspiration
Globally, one in ten people do not know how to interpret their feelings, and there is a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain-Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and how they feel.
## What it does
A user puts on a Brain-Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. Differing brainwaves are interpreted as indicative of different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
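A rough sketch of the mood-to-art step, assuming a hypothetical band-to-keyword mapping and the `runwayml/stable-diffusion-v1-5` checkpoint from the `diffusers` library:

```python
from diffusers import StableDiffusionPipeline

# Hypothetical mapping from dominant EEG band to mood keywords; the real
# interpretation logic lives in the BCI processing layer.
MOOD_KEYWORDS = {
    "alpha": "calm, serene watercolor landscape",
    "beta": "energetic, vivid abstract bursts of color",
    "theta": "dreamlike, soft surrealist scene",
}

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def generate_art(dominant_band: str):
    prompt = f"an expressive painting, {MOOD_KEYWORDS[dominant_band]}"
    # Produce several pieces per mood, as the platform returns to the user.
    return pipe(prompt, num_images_per_prompt=3).images

images = generate_art("alpha")
```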
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
## Challenges we ran into
We faced a series of challenges throughout the hackathon, which is perhaps the essential rite of all hackathons. Initially, we struggled with setting up the electrodes on the BCI to ensure they were receptive enough, as well as with working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask-served frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a 'BeReal'-like social media platform, where people will be able to post the art they generated each day for their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
|
partial
|
## Inspiration
PrimeTime is a platform that connects sports fans in global and local regions. Due to the COVID-19 pandemic and the social restrictions placed globally, sports fans have been unable to bond and discuss their favorite teams/sports/athletes with fellow fans, or cheer on their teams in local venues and restaurants. As avid sports fans ourselves, we were incredibly disheartened to not be able to engage with other fans like in the past. Therefore, we created this platform to allow sports fans like ourselves to stay engaged with the fan community.
## What it does
PrimeTime lets you view a social map of sports fans locally and around the world, see their sports interests, and browse a feed of live scores where you can discuss your favorite team, athlete, or sport with other fans.
## How we built it
We utilized API-Football to create a live scores feed where users can view live scores and talk with other fans. We also used the Google Maps API to create the social map of fans where users can identify where other sports fans are located and what team/sports/athletes they are interested in. The frontend used NextJS/React and the backend used Firebase/Firestore.
## Challenges we ran into
We initially planned to use the ArcGIS API to create the social map, but we ran into a ton of issues integrating it with Firebase, and unfortunately even the Esri mentors were unable to diagnose our issue, so we had to make a very late switch to the Google Maps API. Additionally, none of us had ever worked with Next.js or Firebase previously, and only one team member had a little experience with React, so we had to pretty much learn as we went along! On top of that, we all live in different time zones and have other commitments, so communication was sometimes delayed.
## Accomplishments that we're proud of
* Managing to get something working
* Learning new technologies like Next.js and Firestore
* Getting the social map working
* Being able to collaborate across multiple time zones
## What we learned
* Time management is key!
* Always have a plan B in case something goes wrong
* How to use NextJS, React Hooks, Google Map's API, and Firestore
* How to effectively communicate with teammates
## What's next for PrimeTime
We want to add additional social functionality to let users friend each other and develop the social location aspect of it more too. We had a bunch of ideas that we didn't have the time to implement. For virtual interactions, we can add features such as chatbox and feed reposts. We also want to promote more in-person interactive features for post-COVID activities such as creating friendly local tournaments and offline fan meetups.
|
## Inspiration
Our idea began with the fact that travelers leave their own messages in scenic places. Why not create an AR function that displays these messages whenever other travelers open their phones? We then extended our target to every person who wants a platform to share their current feelings with people who have experienced the same place or event. Thus Beside was created, aiming to connect you with people who have been "beside" you in the past and present.
## Functionality
Our application has two main features. In the AR environment, people can view all the messages written by users who have been to the same location, presented as 3D objects. The user can interact with others by browsing the notes, leaving comments, and posting notes themselves. Whether it is a food recommendation, a random rant, or a meaningful story, you can share it with people throughout the world who pass by the same location. In the map feature, a collection of all notes written in a certain area is presented, and top-trending topics are visible on the map. The user can participate in discussions in other areas while staying informed of the trends.
## Challenges & Achievements
We divided our team into two groups: an AR group and a map group. Without any prior experience in Swift, AR development, or front-end development, we spent a challenging and meaningful day learning everything from the basics to advanced functions in unfamiliar languages. The main problems the AR group tackled were rendering 3D notes, correcting the orientation of the notes, and modeling the notes. We successfully adjusted the note orientations by adding an interactive tap function that rotates the notes.
As for the map interface, the first major challenge we encountered was the technical specifics of React. Since we are all beginner hackers with no experience in React or JavaScript in general, it was really hard to understand and master the syntax and concepts behind the language/framework. One example is objects in JavaScript, which are completely different from Java, which we are more comfortable with. The second challenge was that when we tried to make API calls with 'google-map-react', the official documentation was ambiguous enough that we had to dig into the source code to understand how to use the APIs. The third problem was that during the early implementation stage we didn't have sufficient reliable data to run comprehensive tests, so we had to come up with simple test data in order to test and debug.
## Future Plans
As our project is still in the prototype stage, we plan on completing the functions that we envisioned in the future, such as posting and commenting on posts. If we can successfully develop the product, we hope to promote our application among Berkeley students and take the first step in launching a real AR driven social platform.
|
## Inspiration
We message each other every day but don't know the sheer distances that these messages have to travel. As our world becomes more interconnected, we wanted a way to appreciate the journeys all of our communications go through around the globe.
We also wanted to democratize the ability to fact check news by going directly to the source: the people of the country in concern. This would allow us to tackle the problem of fake news while also jumpstarting constructive conversations between serendipitous pen pals worldwide.
## What it does
Glonex is a visualization of the globe and the messages going around it. You can search the globe for a city or area you are interested in contacting, and send a message there. Your message joins the flight of paper airplanes orbiting the Earth until it touches down in its target destination where other users can pick up your letter, read it, and then send one back.
We tackled our news objective by adding the ability to see news in other areas of the world at a click, and then ask questions to the inhabitants right afterward.
You can also donate to our mission using the Checkbook API to keep the site running.
## How we built it
We used Svelte, a performant JavaScript framework that operates as a compiler with direct DOM updates rather than a virtual DOM (like React/Vue/Angular), to *increase performance* and drastically *decrease JS bundle size*. This was a necessary concern because we knew we would be using the Esri ArcGIS API with a visualization of the globe, which is resource-intensive and would become quite slow if the JavaScript framework we used took up too much memory. We then got the Esri ArcGIS SceneView for the globe working, using a custom basemap from NASA that shows the cities of the world at night to create a pleasing aesthetic.
We wrote code to calculate a geodesic around the Earth that spans between the user's current location and where they click, but then interpolates the elevation over time to create an arc around the world.
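The underlying math is spherical linear interpolation along the great circle plus a parabolic elevation profile. Our production code is JavaScript; this Python sketch shows the same computation, with `peak_km` as an illustrative constant:

```python
import numpy as np

def to_unit(lat, lon):
    """(lat, lon) in degrees -> unit vector on the sphere."""
    lat, lon = np.radians([lat, lon])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def arc_point(start, end, t, peak_km=2000.0):
    """Point at fraction t along the geodesic from start to end (lat, lon),
    with elevation rising to peak_km mid-flight and back to 0 at the ends."""
    p0, p1 = to_unit(*start), to_unit(*end)
    omega = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    if np.isclose(omega, 0):
        return start[0], start[1], 0.0
    # Spherical linear interpolation (slerp) along the great circle.
    p = (np.sin((1 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)
    lat = np.degrees(np.arcsin(p[2]))
    lon = np.degrees(np.arctan2(p[1], p[0]))
    elevation_km = peak_km * 4 * t * (1 - t)  # parabola: 0 at ends, peak at t=0.5
    return lat, lon, elevation_km
```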
Then we worked on the Firebase Firestore integration, where you can send a message to any geopoint on click. We then create the paper airplanes for each message based on its creation timestamp and the timestamp for when it should arrive at its destination. Client-side, we interpolate between the start and end locations to position each airplane, and every timestep we move them toward their destinations by changing the geometry of each graphic in their GraphicsLayer.
In order to get the news at a specific location, we created an algorithm to scrape Google News for a search query related to that specific location. We developed this using Node.js & Express.js and hosted this as a separate web service. The front end calls our news api whenever the user clicks on a location on the globe. The api then finds all news articles relevant to the location and serves it back to the front end.
## Challenges we ran into
The hardest parts of the frontend included creating the arcs that span the earth whenever you click a new position. We had to do a lot of math to figure out that we could trim a part of a geodesic (great circle) of the earth, and then construct a 3D polyline whose vertices' elevations are interpolated by their distance to the target destination to form an arc shape.
Another challenge was the creative use of the Esri ArcGIS API for the moving paper airplanes. GraphicsLayers aren't meant to move, but we needed them to for this project. We accomplished that efficiently by calculating the heading of each paper airplane at creation time (when sending to Firestore), and then on each frame adding that heading to its geometry (position), multiplied by the speed calculated from the difference between its start and arrival times. With that code we were able to create the airplanes orbiting Earth.
Another challenge we faced was running the news search. Most existing news apis we found only have the option to report news by country. In order to get a more detailed view of local news by city, we ended up scraping google news search for our articles. Our implementation uses puppeteer (opening up Chromium), so we could not run this in the browser along with our other front-end code. We got around this by creating a separate web service hosted elsewhere, so that our front-end could call the api without having to worry about browser-compatibility.
## Accomplishments that we're proud of
* We are able to search and see the news all around the world just by clicking on any part of the globe and being able to send messages and chat with other people in that area of the world.
* We have the frontend of the globe running perfectly with the different brightness levels in any part of the world, and the message's plane all around the globe.
* We figured out the math to create the arcs that span the earth whenever you click a new position, created the moving paper airplanes, and built many of the other front-end features.
## What we learned
* Svelte, and how to implement all of the APIs that we used (the Esri ArcGIS API and Checkbook API) within it.
* Integrating all of our back ends and databases using Firebase.
* Using math to figure out the front-end part of this web app.
* Web scraping the news and making our front end call the news API to search news all over the globe.
## What's next for Glonex
We would definitely continue working on this project, since we believe a web application like this would be genuinely useful. We could also develop a mobile app version so it is even easier to use.
|
losing
|
## Inspiration:
Seeing and dealing with rude and toxic comments on popular forums like YouTube and Reddit, and being aware that sometimes it might be you who leaves that rude comment without even realizing it.
## What it does:
This Chrome extension warns you and reminds you not to get too heated if it finds that you are in the process of leaving a particularly rude or toxic comment, using Google's Perspective API, an NLP service for scoring comment toxicity. It reads the user's comment from the text field in real time and informs them if their comment is above the toxicity threshold before it is posted.
## How I built it
* JS, POST requests to the Perspective API, and a local Node.js instance (the request shape is sketched below)
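The extension itself is JavaScript, but the Perspective API call is a plain HTTP POST; here is the same request sketched in Python (the 0.7 threshold is illustrative):

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(comment: str) -> float:
    payload = {
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }
    scores = requests.post(URL, json=payload).json()
    # summaryScore.value is a probability-like score between 0 and 1.
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if toxicity("you are an absolute idiot") > 0.7:  # hypothetical threshold
    print("Whoa there! Maybe rephrase before posting?")
```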
## Challenges I ran into
* Found it difficult to figure out how to find when a user is typing a comment - what text fields are activated? When do we collect a users input? Also, sometimes we spent a lot of time on something just to find out that it was made by someone else already.
## Accomplishments that I'm proud of:
Was able to get a working extension running on localhost using js and node.js, none of us had substantial experience in either coming into this hackathon.
## What I learned
We learned a lot about JavaScript, how to build an extension, how frustrating creating an extension can be, and how fun hackathons are!
## What's next for TypeMeNot2
Improving the graphics: as of right now we show a full-on alert for toxicity above a certain threshold, but we want a better representation, such as an icon color fader driven by the toxicity score. For example, the icon would be bright red for extremely offensive comments and dark blue for inoffensive ones.
|
## A bit about our thought process...
If you're like us, you might spend over 4 hours a day watching *Tiktok* or just browsing *Instagram*. After such a bender you generally feel pretty useless or even pretty sad as you can see everyone having so much fun while you have just been on your own.
That's why we came up with a healthy social media network, where you directly interact with other people who are going through similar problems so you can work together. The network itself also comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to!
## What does it even do
It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options:
**1)** You can join private discussions based on the mood that you're currently in, where you can interact completely as yourself since it is anonymous. If you don't like the person, they don't have any way of contacting you and you can just refresh away!
**2)** You can join group discussions about hobbies you might have and meet interesting people to whom you can then send private messages! All the discussions are also supervised by our machine learning algorithms to make sure that no one is being picked on.
## The Fun Part
Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node**, and was passed into **Socket.io**. Through over 700 lines of **Javascript** code, we were able to create multiple chat rooms and lots of different analytics.
One thing that was really annoying was storing data on both the **Firebase** and locally on **Node Js** so that we could do analytics while also sending messages at a fast rate!
There are tons of other things that we did, but as you can tell, my **handwriting sucks....** so please watch the YouTube video that we created instead!
## What we learned
We learned how important and powerful social communication can be. We realized that being able to talk to others, especially under a tough time during a pandemic, can make a huge positive social impact on both ourselves and others. Even when check-in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
|
## What it does
Scans a clothing garment tag with the help of Google Vision AI API and outputs information on CO2 produced and water used in the garment's production and shipment. Additionally, a rating is returned for specific brands based on their ethical practices.
## How we built it
An image of a clothing tag is taken on your phone, and the image data is sent to a Flask endpoint. The image is read with the help of the Google Vision AI API and parsed into JSON, which is sent to a Python algorithm that generates scores and water/CO2 values. These numbers are sent back to the React Native mobile app as JSON, where they are displayed.
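A minimal sketch of that Flask endpoint, using the Google Cloud Vision client library; `estimate_footprint` is a hypothetical stand-in for the scoring algorithm described above:

```python
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
# Credentials are picked up from GOOGLE_APPLICATION_CREDENTIALS.
client = vision.ImageAnnotatorClient()

def estimate_footprint(tag_text: str) -> dict:
    # Hypothetical stand-in: the real algorithm maps materials and origin
    # read off the tag to CO2 and water values plus a brand rating.
    return {"co2_kg": 0.0, "water_l": 0.0, "raw_text": tag_text}

@app.route("/scan", methods=["POST"])
def scan():
    content = request.files["image"].read()
    response = client.text_detection(image=vision.Image(content=content))
    annotations = response.text_annotations
    tag_text = annotations[0].description if annotations else ""
    return jsonify(estimate_footprint(tag_text))
```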
## Challenges we ran into
Sending image data to the Flask endpoint, React Native (as a whole lol), and processing the response back on the mobile app.
## What we learned
Machine learning basics, React Native, Flask, algorithm creation.
|
partial
|
## Inspiration
The average individual living in Canada wastes approximately 183 kilograms of solid food per year, which adds up to $35 billion worth of food nationally. A study that asked why so much food is wasted found that about 57% of people thought their food goes bad too quickly, while another 44% said the food was past its expiration date.
## What it does
LetsEat is an assistant, comprising a server, an app, and a Google Home Mini, that reminds users of food that is going to expire soon and encourages them to cook it into a meal before it goes bad.
## How we built it
We used a variety of leading technologies, including Firebase for the database and cloud functions, and the Google Assistant API with Dialogflow. On the mobile side, we have a system for effortlessly uploading receipts using Microsoft Cognitive Services optical character recognition (OCR). The Android app is written using RxKotlin, RxAndroid, and Retrofit on an MVP architecture.
## Challenges we ran into
One of the biggest challenges we ran into was fleshing out our idea: every time we thought we had solved an issue in our concept, another one appeared. We iterated over our system design, app design, Google Action conversation design, and integration design, over and over again, for around 6 hours into the event. During development, we faced the learning curve of Firebase Cloud Functions, setting up Google Actions using Dialogflow, and setting up socket connections.
## What we learned
We learned a lot more about how voice user interaction design worked.
|
## Inspiration
We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever seen Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways, showing how today's music is changing with the slang thrown in.
## What it does
You choose your desired song, and it will print out the lyrics for you and even sing them in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, substitute it for the original, and attempt to sing the translated version.
## How I built it
We utilized Python's Flask framework along with several Python natural language processing libraries, Kaggle datasets, and the Zdict API. We created the front end with the Bootstrap framework.
## Challenges I ran into
Redirect issues with Flask were frequent, and the excessive API calls made the program very slow.
## Accomplishments that I'm proud of
The excellent UI design along with the amazing outcomes that can be produced from the translation of slang
## What I learned
We learned a lot of things.
## What's next for SlangSlack
We are going to transform the way today's millennials keep up with growing trends in slang.
|
## Inspiration
Each year, approximately 1.3 billion tonnes of produced food is wasted, a startling statistic that we found truly unacceptable, especially for the 21st century. The impacts of such waste are widespread, ranging from the millions of starving individuals around the world who could in theory have been fed with this food, to the progression of global warming caused by greenhouse gases released from decaying food waste. Ultimately, this was a problem we wanted to fix with an application, which led us precisely to the idea of Cibus: an app that helps the common householder manage the food in their fridge with ease and minimize waste throughout the year.
## What it does
Essentially, our app works in two ways. First, it uses image processing to take pictures of receipts and extract information from them, which we then process further to identify the food purchased and the amount of time until each item expires. This information is stored in a dictionary specific to each user. Second, the app sorts through the list of food items a user has at home and prioritizes the foods closest to expiry. With this prioritized list, the app then suggests recipes that maximize the use of food that is about to expire, so that as little of it as possible goes to waste once the user cooks with those ingredients.
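A simplified sketch of the prioritization step, with assumed shapes for the pantry items and recipes:

```python
from datetime import date

def urgency(item, today=None):
    """Soon-to-expire items get the largest weight."""
    days_left = (item["expires"] - (today or date.today())).days
    return 1.0 / max(days_left, 1)

def rank_recipes(recipes, pantry):
    """Score each recipe by the total urgency of the pantry items it uses up."""
    weights = {item["name"]: urgency(item) for item in pantry}
    scored = [(sum(weights.get(i, 0) for i in r["ingredients"]), r["name"])
              for r in recipes]
    return [name for score, name in sorted(scored, reverse=True)]

pantry = [{"name": "spinach", "expires": date(2019, 2, 10)},
          {"name": "rice", "expires": date(2019, 6, 1)}]
recipes = [{"name": "spinach curry", "ingredients": ["spinach", "rice"]},
           {"name": "plain rice", "ingredients": ["rice"]}]
print(rank_recipes(recipes, pantry))  # spinach curry ranks first
```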
## How we built it
We split the project into front-end and back-end work. On the front end, we used iOS development to create the design of the app and sent requests to the back end for the information that needed to be displayed. On the back end, we used Flask, with Cloud9 as a development environment, to write the code that runs the app. We incorporated image processing APIs as well as a recipe API to accomplish the goals we set out, and we implemented individual user accounts along with most of the app's core functionality. We used Google Cloud Vision for OCR and Microsoft Azure cognitive services to implement spell check.
## Challenges we ran into
A lot of the challenges initially came from identifying the scope of the app and how far we wanted to take it. Ultimately, we decided on an end goal and began programming. Along the way, many roadblocks occurred, including how to integrate the backend seamlessly into the frontend and, more importantly, how to integrate the image processing API into the app. Our first attempt at an image processing API did not go well, as it only allowed one lookup at a time, when more were required to find all of the food items the app needed. We then turned to Google Cloud Vision, which worked well with the app and allowed us to read the writing on receipts.
## Accomplishments that we're proud of
We are proud to report that the app works and that a user can accurately upload information onto the app and generate recipes that correspond to the items that are about to expire the soonest. Ultimately, we worked together well throughout the weekend and are proud of the final product.
## What we learned
We learned that integrating image processing can be harder than initially expected, but manageable. Additionally, we learned how to program an app from front to back in a manner that blends harmoniously, such that the app is solid both in its interface and in fetching information.
## What's next for Cibus
There remain a lot of functionalities that can be further optimized within the app, such as expanding the database of foods and their corresponding expiry dates. Furthermore, we would like the user to eventually be able to take a picture of a food item and have its information automatically uploaded to the app.
|
winning
|
## Slooth
Slooth.tech was born from the combined laziness of four Montréal-based hackers and their frustration with school websites that take too long to navigate.
When faced with the task of creating a hack for McHacks 2016, the creators of Slooth found the perfect opportunity to solve a problem they had faced for a long time: navigating tediously complicated school websites.
Inspired by Natural Language Processing technologies and personal assistants such as Google Now and Siri, Slooth was aimed at providing an easy and modern way to access important documents on their school websites.
The Chrome extension Slooth was built with two main features in mind: customization and ease of use.
# Customization:
Slooth is based on user-recorded macros. Each user records any actions they wish to automate using the macro recorder and associates an activation phrase with it.
# Ease of use:
Slooth is intended to simplify its user's workflow. As such, it was implemented as an easily accessible Chrome extension and utilizes voice commands to lead its user to their destination.
# Implementation:
Slooth is a Chrome extension built in JS and HTML.
The speech recognition part of Slooth is based on the Nuance ASR API kindly provided to all McHacks attendees.
# Features:
-Fully customizable macros
-No background spying. Slooth's speech recognition is done completely server side and notifies the user when it is recording their speech.
-Minimal server side interaction. Slooth's data is stored entirely locally, never shared with any outside server. Thus you can be confident that your personal browsing information is not publicly available.
-Minimal UI. Slooth is designed to simplify one's life. You will never need a user guide to figure out Slooth.
# Future
While Slooth reached its set goals during McHacks 2016, it still has room to grow.
In the future, the Slooth creators hope to implement the following:
-Full compatibility with single page applications
-Fully encrypted autofill forms synced with the user's Google account for cross-platform use.
-Implementation of the Nuance NLU API to add more customization options to macros (such as verbs with differing parameters).
# Thanks
Special thanks to the following companies for their help and support in providing us with resources and APIs:
-Nuance
-Google
-DotTech
|
## Inspiration
Imagine you broke your EpiPen but you need it immediately for an allergic reaction. Imagine being lost in the forest with cut wounds and bleeding from a fall but have no first aid kit. How will you take care of your health without nearby hospitals or pharmacies? Well good thing for you, we have **MediFly**!! MediFly is inspired by how emergency vehicles such as ambulances take too long to get to the person in need of aid because of other cars on the road and traffic. Every second spent waiting is risking someone's life. So in order to combat that issue, we use **drones** as the first emergency responders to send medicine to save people's lives or keep them in a stable condition before human responders arrive.
## What it does
MediFly allows the user to request emergency help or medication such as an EpiPen or epinephrine. First, you download the MediFly app and create a personal account. Then you can log into your account and use the features when necessary. If you are in an emergency, press the "EMERGENCY" button and a list of common medication options will appear for you to pick from. There is also an option to search for the medication you need. Once a choice is selected, the local hospital sees the request and sends a drone to deliver the medication to the person; human first responders are also called. The drone has a GPS tracker and the GPS location of the person it needs to reach. When the drone is within close distance, a message is sent telling the person to go outside where the drone can see them. The camera uses facial recognition to confirm that the person is indeed the registered user who ordered the medication. This level of security is important to ensure that the medication is delivered to the correct person. Once the person is confirmed, the lid of the medication compartment opens so they can take their medication.
## How we built it
On the software side, the front end of the app was made with React, written in JavaScript, and the back end was made with Django in Python.
The text messages work through Twilio. Twilio is used to tell the user that the drone is nearby with the medication ready to hand over. It sends a message telling the person to go outdoors where the drone will be able to find the user.
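For reference, here is a minimal sketch of that proximity text using Twilio's Python helper library; the account SID, auth token, and phone numbers are placeholders, not real values.

```python
from twilio.rest import Client

# The SID, token, and phone numbers below are placeholders.
client = Client("ACXXXXXXXXXXXXXXXX", "auth_token")

def notify_user(user_phone: str) -> None:
    """Text the user that the drone is nearby, as described above."""
    client.messages.create(
        to=user_phone,              # the registered user's number
        from_="+15550001111",       # a Twilio-provisioned number
        body=("Your MediFly drone is almost there - please step outside "
              "so it can find you and verify your identity."),
    )

notify_user("+15552223333")
```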
On the hardware side, many different components make up the drone. There are four motors, four propeller blades, an electronic speed controller, a flight controller, and 3D printed parts such as the camera mount, the medication box holder, and some components of the drone frame. There is also a Raspberry Pi SBC attached to the drone for controlling the on-board systems, such as the cargo bay door, and for streaming video to a server that runs the face recognition algorithm.
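As an illustration of the server-side identity check described above, here is a hedged sketch using the open-source `face_recognition` library; the library choice and helper name are assumptions for illustration, not our exact implementation.

```python
import face_recognition  # assumption: the open-source face_recognition library

def verify_recipient(registered_photo: str, drone_frame: str) -> bool:
    """Compare the account photo against a frame from the drone's camera."""
    known = face_recognition.face_encodings(
        face_recognition.load_image_file(registered_photo))
    seen = face_recognition.face_encodings(
        face_recognition.load_image_file(drone_frame))
    if not known or not seen:
        return False  # no face found in one of the images
    # compare_faces returns one boolean per known encoding
    return face_recognition.compare_faces(known, seen[0])[0]

if verify_recipient("registered_user.jpg", "drone_capture.jpg"):
    print("Identity confirmed - opening the medication compartment")
```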
## Challenges we ran into
Building the drone from scratch was a lot harder than we anticipated. There was a lot of setup needed for the hardware, and the building aspect was not easy: it consisted of a lot of taking apart, soldering, cutting, hot gluing, and rebuilding.
Some of the video streaming systems did not work well at first, due to the CORS blocking the requests, given that we were using two different computers to run two different servers.
Traditional geolocation techniques often take too long - as such, we needed to build a scheme to cache a user's location before they decided to send a request to prevent lag. Additionally, the number of pages required to build, stylize, and connect together made building the site a notable challenge of scale.
## Accomplishments that we're proud of
We are extremely proud of the way the drone works and how it's able to move at quick, steady speeds while carrying the medication compartment and battery.
On the software side, we are super proud of the facial recognition code and how it's able to tell the difference between different people's faces. The front and back end of the website/app are also really well done. We first made the front-end UI design in Figma and then implemented that design on our final website.
## What we learned
For software we learned how to use React, as well as various user authorization and authentication techniques. We also learned how to use Django.
We learnt how to build an accurate, efficient, and resilient face detection, recognition, and tracking system to make sure the package is always delivered to the correct person.
We experimented with and learned various ways to stream real-time video over a network, also over longer ranges for the drone.
For hardware we learned how to set up and construct a drone from scratch!
## What's next for MediFly
In the future we hope to add a GPS tracker to the drone so that the person who orders the medication can see where the drone is on its path.
We would also add Twilio text messages so that when the drone is within a close radius to the user, it will send a message notifying the person to go outside and wait for the drone to deliver the medication.
|
## Inspiration
Imagine a world where learning is as easy as having a conversation with a friend. Picture a tool that unlocks the treasure trove of educational content on YouTube, making it accessible to everyone, regardless of their background or expertise. This is exactly what our hackathon project brings to life.
* Massive open online courses are great resources for bridging the gap in educational inequality.
* Searching a lengthy video for that one 60-second piece of content is frustrating and demotivating.
* We want to support students and help unlock their potential.
## What it does
Think of our platform as your very own favorite personal tutor. Whenever a question arises during your video journey, don't hesitate to hit pause and ask away. Our chatbot is here to assist you, offering answers in plain, easy-to-understand language. Moreover, it can point you to external resources and suggest specific parts of the video for a quick review, along with relevant sections of the accompanying text. So, explore your curiosity with confidence – we've got your back!
* Analyze the entire video content 🤖 Learn with organized structure and high accuracy
* Generate concise, easy-to-follow conversations⏱️Say goodbye to wasted hours watching long videos
* Generate interactive quizzes and personalized questions 📚 Engaging and thought-provoking
* Summarize key takeaways, explanations, and discussions tailored to you 💡 Provides tailored support
* Accessible to anyone with an internet connection 🌐 Accessible and Convenient
## How we built it
Vite + React.js as the front-end and Flask as the back-end, using Cohere's command-nightly model and similarity ranking.
## Challenges we ran into
* **Increased application efficiency by 98%:** Reduced the number of API calls, lowering load time from 8.5 minutes to under 10 seconds. The challenge we ran into was not accounting for the time taken by every API call. Originally, our backend made over 500 calls to Cohere's API to embed text every time a transcript section was initiated, and repeated them when a new prompt was made; each API call took about one second, adding 8.5 minutes in total. By reducing the number of API calls and using efficient practices, we brought the time down to under 10 seconds (see the batching sketch after this list).
* **Handling over 5000-word single prompts:** Scraping longer YouTube transcripts efficiently was complex. We solved it by integrating YouTube APIs and third-party dependencies, enhancing speed and reliability. Uploading multi-prompt conversations with large initial prompts to MongoDB was also challenging; we optimized data transfer, maintaining a smooth user experience.
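Here is a minimal sketch of the batching fix mentioned above, assuming Cohere's Python SDK; the API key and batch size are placeholders.

```python
import cohere  # assumption: Cohere's Python SDK

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def embed_transcript(sections):
    """One batched embed call per chunk instead of one call per section."""
    # Before: ~500 separate co.embed calls at ~1 s each (~8.5 minutes).
    # After: a handful of batched calls, well under 10 seconds.
    batch_size = 96  # illustrative; exact limits vary by plan and model
    embeddings = []
    for i in range(0, len(sections), batch_size):
        response = co.embed(texts=sections[i:i + batch_size])
        embeddings.extend(response.embeddings)
    return embeddings
```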
## Accomplishments that we're proud of
Created a practical full-stack application that we will use in our own time.
## What we learned
* **Front end:** State management with React, third-party dependencies, UI design.
* **Integration:** Scalable and efficient API calls.
* **Back end:** MongoDB, Langchain, Flask server, error handling, optimizing time complexity and using Cohere AI.
## What's next for ChicSplain
We envision ChicSplain to be more than just an AI-powered YouTube chatbot, we envision it to be a mentor, teacher, and guardian that will be no different in functionality and interaction from real-life educators and guidance but for anyone, anytime and anywhere.
|
winning
|
# TravelX
## 💡 Inspiration
Whenever I plan a trip with my friends, it becomes very hard for me to keep track of my budget during the trip, and handling the various bills is a hassle.
## 💻 What it does
TravelX is a money-splitting web app where you can keep track of your budget and calculate how much money you have to give to or take from your friends. You can send friend requests to your friends and chat with them, and you can also post a blog. This web app can be very useful during trips and other occasions where working out the budget, and who pays for what, is necessary.
## 🔨 How we built it
* Frontend: HTML, CSS
* Backend: Django
* Database: CockroachDB
* Bill Amount Extraction - Azure Form Recognizer
* Bill Payments: COIL
## Use of CockroachDB
* We have used CockroachDB as a primary database because it is an easy-to-use, open-source and indestructible SQL database.
## 🔑 Auth0
* We have used Auth0 for secure user authentication.
## Most Creative Use of Twilio:
* We are using Twilio for sending SMS messages to friends and chatrooms.
## 🧠 Challenges we ran into
* Completing the project in 24 hours.
## 🏅 Accomplishments that we're proud of
* We are happy that we completed the project in such a short frame of time, and we learned a lot from this hackathon
## 📖 What we learned
## 🚀 What's next for TravelX
* Adding more languages
* Information about nearby places (e.g. hotels, restaurants)
|
## Inspiration
As high school seniors on the cusp of a new academic journey, we realized the looming academic expenses associated with attending college. In order to simplify navigating the maze of college finances, we created **‘brokemenot.us’**, a website for college students to explore the financial space. We envisioned this tool alleviating these financial pressures by providing budgeting tools, financial literacy courses/blogs, bank account management, a budgeting system, as well as a student loan management/acquisition system.
## What it does
**‘brokemenot.us’** is a web application that gives college students a simple way to manage their finances. The application allows students to connect their Capital One account, or create one entirely from scratch, through the Nessie API. Then, through the available account information, students are able to view their balance, transactions, and bills. Together with our budgeting/financial management system, this allows students to easily track their expenditures and see whether they stay within their determined budget. Additionally, the app includes coursework in the form of articles and blogs to increase the student’s financial literacy. Finally, there is a student loan finder based on the student’s financial information. While Capital One accounts have a built-in loan option, students also have access to other financial aid methods.
## How we built it
For the frontend, we used Taipy, a framework which allowed us to build a website in Python. The framework allowed us to build a very elegant and user-friendly interface. Thanks to the aforementioned framework, we built the backend using Python too. This allowed us to incorporate the powerful APIs of Twilio and Capital One! We were able to embed products from the Google Suite to enhance user experience. We also were able to use a GoDaddy domain to make our site easier to access across the globe.
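For a flavor of what full-stack Python with Taipy looks like, here is a minimal hedged sketch; the page content and the `balance` value are illustrative placeholders, not our production code.

```python
from taipy.gui import Gui  # assumption: Taipy's GUI module

balance = 1250.75  # placeholder; the real app pulls this via the Nessie API

# Taipy pages are written in an augmented Markdown syntax
page = """
# brokemenot.us
Current balance: <|{balance}|text|>
"""

if __name__ == "__main__":
    Gui(page=page).run()
```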
## Challenges we ran into
We had a great deal of difficulty getting accustomed to the Taipy web framework. Taipy allows for robust full-stack web development solely in Python, simplifying the development process. However, being used to traditional development with HTML, CSS, JS, and frameworks such as React, we found it difficult to adopt this new style of working. First, we had to decide between Taipy’s own Markdown for the UI or HTML. Being experienced with HTML, we went with that, but we quickly found that the documentation didn’t include everything we needed. Furthermore, since Taipy is a framework on the rise, there wasn’t a large community of developers to turn to with questions. However, after much effort, we were able to use the Taipy framework effectively, and we thought it was a great way to have all of our code in one organized place. We are proud of ourselves for learning a new skill, and we see ourselves using Taipy for future endeavors.
## Accomplishments that we're proud of
We are really proud of what we have accomplished during this hackathon. Firstly, we are proud that we managed to utilize Taipy, a new and unique framework that none of us have ever heard of before. We are proud that we were able to incorporate many additional features such as Capital One and Twilio API! While we have a lot of work to do if we want to perfect our project, we are proud of what we have completed within the given timeframe. And most importantly, we enjoyed every minute of our time on the UPenn campus.
## What we learned
Over the past 36 hours, we’ve used several technologies that we had not used before: Twilio, the Capital One Nessie API, Taipy, and GoDaddy. To use these new technologies we had to learn new skills. Working solely from the documentation for each, we navigated their uses and features to produce a complex application.
## What's next for BrokeMeNot
Our goal is to make it even more user-friendly for future college-going students. We would like to move away from the embedded Google Suite products to make our site more convenient and interactive. One of our goals going into the website was to incorporate an AI Chatbot of sorts in our website to help guide students searching for resources or options. Additionally, we wish to include AI-generated personal recommendations for the user based on their financial situation. While we did not have enough time to implement it, we would like to implement it in the future.
|
## Inspiration
scribbl.io, Stable Diffusion
## What it does
It is a game like scribbl.io, except players create an image from a provided prompt, which is then processed by Stable Diffusion's image-to-image pipeline.
## How we built it
We are using Figma, HTML, CSS, and JavaScript for our frontend, Flask for our backend, MongoDB for the database, and Amazon S3 for storage; we also attempted to use RunPOD for cloud computing.
## Challenges we ran into
* Unspecified stable diffusion API
* We did not check if Digital Ocean
* Then we ran into AWS issues as we considered creating our own Docker image, but different implementations depend on running stable diffusion without an iPython notebook
* Web scraping via Selenium for a makeshift deployment
* RunPOD didn't have an API, so we had to try and use SSH or bash commands
## Accomplishments that we're proud of
* We have a decent back-end to build off of
* The concept is extensible and works for many different games
* Can easily add different technologies
## What we learned
* Understanding the stack and road to deployment more
* Communication during development (especially between front-end and back-end) could be an improvement
* We could have had better prioritization
* Perhaps too ambitious for a Hackathon; could have had more modest base goals and then incrementally added stretch goals
## What's next for Scrbbl.ai
* We plan to develop the web app to completion
* Expanding AWS implementation from just S3
* Reach out to the community to learn how to leverage stable diffusion APIs
* Potential monetization in the same vein as other web-based games such as Geoguesser
|
partial
|
## Inspiration
We initially wanted to create an Alexa skill using existing Standard Library APIs, but soon came across multiple "dead" libraries which were unusable, as they predate the restructuring of the company's architecture.
## What it does
We created various APIs using Standard Library and its vast array of APIs to retrieve all "dead" APIs and find out how many private and how many unfinished projects there are in the database.
## How we built it
With Standard Library's online API development tool, along with Node.js powering the back end, we created a website which outputs each user and a URL to their library.
## Challenges we ran into
Utilizing Standard Library's proprietary search API to parse through their database and finding quantifiable commonalities among each library proved to be difficult. Additionally, creating an API which could easily be called from a webpage for the first time was also challenging.
## Accomplishments that we're proud of
Our project can apply to various companies who wish to clean up their databases as well as find any data leaks that shouldn't be easily available to the public.
## What we learned
Creating our first API, we learned exactly how APIs work, as well as how to parse through large databases to extract the desired data.
## What's next for Parse It
Expanding out to other applications and companies to help them further optimize and secure their data and servers.
|
## Inspiration
When we began to learn about the AssemblyAI API, the features and the impressively extensive abilities of this API had us wanting to do something with audio transcription for our project.
## What it does
Our project is a web app that takes in a URL link to a YouTube video and generates a text transcription of the English spoken words in the audio of that video. We have additional functionality that can summarize the text transcription, keeping the most important points of the transcribed text.
## How we built it
The backend functionality is all done in Python using the AssemblyAI API; the process is as follows (a condensed transcription sketch follows these lists):
* A YouTube link is sent to the script via the website
* The corresponding YouTube video's audio track is fetched
* The audio track is analyzed and transcribed using the AssemblyAI API
* The transcribed text is outputted
And for the summarization functionality:
* A large string is inputted to the summary script
* The script uses the nltk library to help generate a summary of the inputted text
* The summary is outputted
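As promised above, here is a condensed sketch of the transcription flow against AssemblyAI's v2 REST endpoints; the API key is a placeholder, and the helper name is ours for illustration.

```python
import time
import requests

API = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": "YOUR_ASSEMBLYAI_KEY"}  # placeholder key

def transcribe(audio_path: str) -> str:
    # 1) upload the fetched YouTube audio track
    with open(audio_path, "rb") as f:
        upload_url = requests.post(f"{API}/upload", headers=HEADERS,
                                   data=f).json()["upload_url"]
    # 2) request a transcript for the uploaded audio
    job = requests.post(f"{API}/transcript", headers=HEADERS,
                        json={"audio_url": upload_url}).json()
    # 3) poll until the transcription job completes
    while True:
        result = requests.get(f"{API}/transcript/{job['id']}",
                              headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            return result.get("text") or ""
        time.sleep(3)
```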
The website is developed using NodeJS with the ExpressJS framework. We developed interaction functionality so that the user can input a YouTube link on the website, which communicates with our backend scripts to achieve the results we wanted.
## Challenges we ran into
Learning the AssemblyAI API was fun, but figuring out how to call it and specify exactly what we wanted turned out to be a challenge in and of itself.
Another challenge we ran into was around halfway through our development process, we had a lot of scripts that all did different things, and we had to figure out how to best link them together to end up at our desired functionality.
Making the website function was a huge task, mostly taken on by the team member with the most experience in the area.
## Accomplishments that we're proud of
* The overall look and design of our webpage
* The way our backend scripts work together and work with the AssemblyAI API
* The website's functionality
* Figuring out that one thing that was wrong with the configuration on one of our team member's computers that wasted ~3-4 hours.
## What we learned
* How to use the AssemblyAI API
* How to code backend scripts that can be used by a website for the frontend
* Building our teamwork skills
## What's next for RecapHacks
* Tweaking the website to be more functional and streamlined
|
## Inspiration
We're computer science students, need we say more?
## What it does
"Single or Nah" takes in the name of a friend and predicts if they are in a relationship, saving you much time (and face) in asking around. We pull relevant Instagram data including posts, captions, and comments to drive our Azure-powered analysis. Posts are analyzed for genders, ages, emotions, and smiles -- with each aspect contributing to the final score. Captions and comments are analyzed for their sentiment, which give insights into one's relationship status. Our final product is a hosted web-app that takes in a friend's Instagram handle and generate a percentage denoting how likely they are to be in a relationship.
## How we built it
Our first problem was obtaining Instagram data. The tool we use is a significantly improved version of an open-source Instagram scraper API (<https://github.com/rarcega/instagram-scraper>). The tool originally ran as a Python command line argument, which was impractical to use in a WebApp. We modernized the tool, giving us increased flexibility and allowing us to use it within a Python application.
We run Microsoft's Face-API on the target friend's profile picture to guess their gender and age -- this will be the age range we are interested in. Then, we run through their most recent posts, using Face-API to capture genders, ages, emotions, and smiles of people in those posts to finally derive a sub-score that will factor into the final result. We guess that the more happy and more pictures with the opposite gender, you'd be less likely to be single!
We take a similar approach to captions and comments. First, we used Google's Word2vec to generate semantically similar words to certain keywords (love, boyfriend, girlfriend, relationship, etc.) as well as to assign weights to those words. Furthermore, we included emojis (usually a good giveaway!) in our weighting scheme ([link](https://gist.github.com/chrisfischer/144191eae03e64dc9494a2967241673a)). We use Microsoft's Text Analytics API on this keyword-weight scheme to obtain a sentiment sub-score and a keyword sub-score.
Once we have these sub-scores, we aggregate them into a final percentage denoting how likely your friend is to be single. It was then time to take it live. We integrated all the individual calculations and aggregations into a Django app, then hosted all necessary computation using Azure WebApps. Finally, we designed a simple interface to allow input as well as to display results, with a combination of HTML, CSS, JavaScript, and jQuery.
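To make that aggregation concrete, here is a small sketch of how the weighted sub-scores could combine into the final percentage; the weights shown are hypothetical, not our tuned values.

```python
# Hypothetical weights; the real app tunes these against test accounts.
WEIGHTS = {"photo": 0.5, "sentiment": 0.25, "keyword": 0.25}

def relationship_score(photo_sub: float, sentiment_sub: float,
                       keyword_sub: float) -> float:
    """Blend the three sub-scores (each in [0, 1]) into one percentage."""
    combined = (WEIGHTS["photo"] * photo_sub
                + WEIGHTS["sentiment"] * sentiment_sub
                + WEIGHTS["keyword"] * keyword_sub)
    return round(100 * combined, 1)

# e.g. many happy opposite-gender photos plus romantic captions:
print(relationship_score(0.8, 0.7, 0.9))  # -> 80.0
```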
## Challenges we ran into
The main challenge was that we were limited by our resources. We only had access to basic accounts for some of the software we used, so we had to be careful how on often and how intensely we used tools to prevent exhausting our subscriptions. For example, we limited the number of posts we analyzed per person. Also, our Azure server uses the most basic service, meaning it does not have enough computing power to host more than a few clients.
The application only works on "public" Instagram accounts, so we were unable to find a large number of test subjects to fine-tune our process. For the accounts we did have access to, the application produced reasonable answers, leading us to believe that the app is a good predictor.
## Accomplishments that we're proud of
We're proud that we were able to build this WebApp using tools and APIs that we hadn't used before. In the end, our project worked reasonably well and accurately. We were able to try it on people and get a score, which is an accomplishment in itself. Finally, we're proud that we were able to create a relevant tool in today's age of social media -- I mean, I know I would use this app to narrow down who to DM.
## What we learned
We learned about the Microsoft Azure API (Face API, Text Analytics API, and web hosting), NLP techniques, and full stack web development. We also learned a lot of useful software development techniques such as how to better use git to handle problems, creating virtual environments, as well as setting milestones to meet.
## What's next for Single or Nah
The next steps for Single or Nah is to make the website and computations more scalable. More scalability allows more people to use our product to find who they should DM -- and who doesn't want that?? We also want to work on accuracy, either by adjusting weights given more data to learn from or by using full-fledged Machine Learning. Hopefully more accuracy would save "Single or Nah" from some awkward moments... like asking someone out... who isn't single...
|
losing
|
## Inspiration
In a lot of mass shootings, there is a significant delay from the time at which police arrive at the scene, and the time at which the police engage the shooter. They often have difficulty determining the number of shooters and their location. ViGCam fixes this problem.
## What it does
ViGCam spots and tracks weapons as they move through buildings. It uses existing camera infrastructure, location tags and Google Vision to recognize weapons. The information is displayed on an app which alerts users to threat location.
Our system could also be used to identify wounded people after an emergency incident, such as an earthquake.
## How we built it
We used Raspberry Pi and Pi Cameras to simulate an existing camera infrastructure. Each individual Pi runs a Python script where all images taken from the cameras are sent to our Django server. The images are then sent to the Google Vision API, which returns a list of classifications. All the data collected from the Raspberry Pis can be visualized on our React app.
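A minimal sketch of the per-frame classification step, assuming the google-cloud-vision Python client; the label set and helper name are illustrative, not our production rules.

```python
from google.cloud import vision  # assumption: google-cloud-vision client

client = vision.ImageAnnotatorClient()

WEAPON_LABELS = {"gun", "rifle", "weapon", "knife"}  # illustrative set

def frame_is_threat(jpeg_bytes: bytes) -> bool:
    """Label one camera frame and flag it if a weapon-like label appears."""
    response = client.label_detection(image=vision.Image(content=jpeg_bytes))
    labels = {label.description.lower() for label in response.label_annotations}
    return bool(labels & WEAPON_LABELS)
```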
## Challenges we ran into
SSH connection does not work on the HackMIT network and because of this, our current setup involves turning one camera on before activating the second. In a real world situation, we would be using an existing camera network, and not our raspberry pi cameras to collect video data.
We also have had a difficult time getting consistent identification of our objects as weapons. This is largely because, for obvious reasons, we cannot bring in actual weapons. Up close however, we have consistent identification of team member items.
Using our current server setup, we consistently get server overload errors, so we have an extended delay between each image sent. Given time, we would implement an actual camera network, and also modify our system so that it performs object recognition on video rather than individual pictures, which would improve our accuracy. WebSockets could be used to display the collected data in real time.
## Accomplishments that we’re proud of
1) It works!!! (We successfully completed our project in 24 hours.)
2) We learned to use Google Cloud API.
3) We also learned how to use raspberry pi. Prior to this, none on our team had any hardware experience.
## What we learned
1) We learned about coding in a real world environment
2) We learned about working on a team.
## What's next for ViGCam
We are planning on working through our kinks and adding video analysis. We could add sound detection for gunshots to detect emergent situations more accurately. We could also use more machine learning models to predict where the threat is going and distinguish between threats and police officers. The system can be made more robust by causing the app to update in real time. Finally, we would add the ability to use law enforcement emergency alert infrastructure to alert people in the area of shooter location in real time. If we are successful in these aspects, we are hoping to either start a company, or sell our idea.
|
## Inspiration
There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state of the art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).
The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads up display. We thought it would be a great idea to build onto a platform like this through adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case, is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.
## What it does
The user presses a button on the side of the glasses, which begins recording; pressing the button again ends recording. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a post to a web server along with the name of the uploaded file. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds the result into a transformer network which transcribes the video. Upon finishing, a front-end web application is notified through socket communication, which streams the video from Google Cloud and displays the transcription output from the back-end server.
## How we built it
The hardware platform is a Raspberry Pi Zero interfaced with a Pi Camera. A Python script runs on the Raspberry Pi to listen for GPIO, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python, and runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
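Here is a condensed sketch of the facial detection stage on the back end, using OpenCV's bundled Haar cascade; the generator name is ours for illustration.

```python
import cv2  # OpenCV, as used on the back-end server

def face_crops(video_path: str):
    """Detect faces frame by frame with OpenCV's bundled Haar cascade,
    yielding crops for the downstream transformer network."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            yield frame[y:y + h, x:x + w]
    cap.release()
```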
## Challenges we ran into
* TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB-tethering with a mobile device
## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype
## What we learned
* How to setup a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to setup a web server through local host tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end
## What's next for Synviz
* With stronger on-board battery, 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
|
## Inspiration
snore or get poured on yo pores
Coming into grade 12, the decision to go to a hackathon at this time was super ambitious. We knew that coming to this hackathon we needed to be fully focused 24/7. The problem is, we both procrastinate and push things to the last minute, so we created a project to help us stay on task.
## What it does
It's a project with 3 stages, each escalating to get our attention. In the first stage, we use a voice command and a text message. If I'm still distracted, we move into stage two, where it sends a more serious voice command and then a phone call to my phone, as I'm probably on my phone. If I decide to ignore the phone call, the project gets serious and commences the final stage, where we bring out the big guns. If I ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim, which is myself. If I try to resist and run away, the water gun automatically tracks me and shoots me wherever I go.
## How we built it
We built it using fully recyclable materials; as the future innovators of tomorrow, our number one priority is the environment. We made our foundation entirely out of scrap cardboard, chopsticks, and hot glue. The turret was built using the hardware kit we brought from home, with 3 servos mounted on stilts to hold the water gun in the air. On the software side, we hacked a MindFlex to read brainwaves and activate the water gun trigger. We used a string mechanism to pull the trigger and OpenCV to track the user's face.
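A rough sketch of the tracking loop is below, assuming pyserial for the Arduino link and a simple one-angle-per-line serial protocol; both are assumptions for illustration, not our exact wiring.

```python
import cv2
import serial  # assumption: pyserial for the Arduino link

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # port is a placeholder
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # map the face's horizontal position to a 0-180 pan angle
        angle = int(180 * (x + w / 2) / frame.shape[1])
        arduino.write(f"{angle}\n".encode())  # the Arduino pans the servo
```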
## Challenges we ran into
One challenge we ran into was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass; we came up with many different solutions, but none of them were efficient. The data was delayed from reading and writing back and forth, and the camera display speed slowed down because of it, making the tracking worse. We eventually pushed through and figured out a solution.
## Accomplishments that we're proud of
Accomplishments we are proud of is our engineering capabilities of creating a turret using spare scraps. Combining both the Arduino and MindFlex was something we've never done before and making it work was such a great feeling. Using Twilio and sending messages and calls is also new to us and such a new concept, but getting familiar with and using its capabilities opened a new door of opportunities for future projects.
## What we learned
We've learned many things from using Twilio and hacking into the MindFlex, and we've learned a lot more about electronics and circuitry through this project, and about procrastination. After creating this project, we've learned discipline, as we never missed a deadline ever again.
## What's next for You snooze you lose. We dont lose
Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap due to the lack of materials, including a life-size human robot; we ultimately concluded with an automatic water gun turret controlled through brain signals. We want to expand on this project using brain control, as this is our first hackathon trying it out.
|
winning
|
## Inspiration
We really just wanted to see how badly we could misuse cockroachdb.
## What it does
Our front end interface allows the user to fight against a CockroachDB cluster, killing off instances.... **YOU MONSTER**
## How we built it
Javascript in the front, Node in the back, and BASH in the trunk!
## Challenges we ran into
Mac OSX, undocumented APIs, RAM... or lack thereof, CockroachDB not being designed to run a couple hundred instances at once... just generally doing everything you're not supposed to use CockroachDB for.
## Accomplishments that we're proud of
It works. JS->Node->Bash and back, AND IT WORKS! Also the game mechanics are actually pretty awesome (and hard!)
## What we learned
CockroachDB has 0 documentation for its REST API.... it also does NOT like running 100+ instances at once
## What's next for DESTROY ALL ROACHES!!
We'd like to distribute it across multiple servers around the world, giving players in certain parts of the world closest to those servers an advantage, to encourage international play.
|
## Inspiration
During the pandemic, many brick-and-mortar stores are short-staffed. This not only affects how efficiently the store runs, but also how efficiently customers navigate through aisles searching for products.
Our story stems from a personal experience of not being able to find an item at a local Canadian Tire store. When looking for somebody to help, we realized that there were no available associates at the front. From this experience, we were determined to develop a contactless solution that would help customers in navigating through a store, while also helping businesses that are short-staffed.
## What it does
WhereWare virtually navigates customers to find items they are looking for in a store. Upon entering a store, customers simply scan the QR code at the front. The QR code will open up the specific store’s page on WhereWare, and customers can search for items they want to buy. The application then provides the location and visual map to help navigate through the store.
## How we built it
WhereWare’s interface was built using React to ensure an in-browser responsive navigation experience. The front-end communicates with the Flask backend in order to access the inventory locations that are stored in a CockroachDB database.
## Challenges we ran into
We each faced a lot of firsts during this hackathon: our first React app, first time styling in CSS, first time using an SQL DBMS. With our limited knowledge in building a functional and visually pleasing web application, we struggled with small things, like getting a button to be the right colour.
The biggest challenge we faced overall was implementing CockroachDB, since we had never worked with a dedicated database management system outside of Excel spreadsheets. It was a big learning curve, but a fun time problem-solving along the way.
## Accomplishments that we're proud of
* Submitting my first project at a hackathon - Rowena
* Learning how to create and interact with SQL databases
* First hack with a somewhat functional UI :)
* Learning how react and javascript works and ending up having a decent UI
## What we learned
* How to use CockroachDB
* How to use media query in CSS to separate the webpage design for iphone and desktop browser
* Projects can be fun when you’re working with an awesome group!
## What's next for WhereWare
* Version 1.1 (October 2021 Release): GPS integration to allow users to follow real-time directions to the item they are looking for. This release will allow businesses to create an AR map of the store, simply by turning on their camera and walking through each aisle.
* Version 1.2 (December 2021 Release): An auditory cue component will make WhereWare accessible to those who are visually impaired.
|
This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data>
|
losing
|
## Inspiration
Disasters can strike quickly and without notice. Most people are unprepared for situations such as earthquakes which occur with alarming frequency along the Pacific Rim. When wifi and cell service are unavailable, medical aid, food, water, and shelter struggle to be shared as the community can only communicate and connect in person.
## What it does
In disaster situations, Rebuild allows users to share and receive information about nearby resources and dangers by placing icons on a map. Rebuild uses a mesh network to automatically transfer data between nearby devices, ensuring that users have the most recent information in their area. What makes Rebuild a unique and effective app is that it does not require WIFI to share and receive data.
## How we built it
We built it with Android and the Nearby Connections API, a built-in Android library which manages the advertising, discovery, and connection process between nearby devices.
## Challenges we ran into
The main challenges we faced while making this project were updating the device location so that the markers are placed accurately, and establishing a reliable mesh-network connection between the app users. While these features still aren't perfect, after a long night we managed to reach something we are satisfied with.
## Accomplishments that we're proud of
WORKING MESH NETWORK! (If you heard the scream of joy last night I apologize.)
## What we learned
## What's next for Rebuild
|
## Inspiration
Between 1994 and 2013 there were 6,873 natural disasters worldwide, which claimed 1.35 million lives, or almost 68,000 lives on average each year. In many of these disasters, people don't know where to go or where is safe. For example, during Hurricane Harvey, many people learned about dangers such as alligators or flooding through other people and through social media.
## What it does
* Post a natural disaster hazard in your area
* Crowd-sourced hazards
* Pulls government severe weather data
* IoT sensor system to take atmospheric measurements and display on map
* Twitter social media feed of trending natural disasters in the area
* Machine learning image processing to analyze posted images of natural disaster hazards
Our app **Eye in the Sky** allows users to post about various natural disaster hazards at their location, upload a photo, and share a description of the hazard. These points are crowd-sourced and displayed on a map, which also pulls live data from the web on natural disasters. It also allows users to view a Twitter feed of current trending information on the natural disaster in their location. The last feature is the IoT sensor system, which takes measurements of the environment in real time and displays them on the map.
## How I built it
We built this app using Android Studio. Our user-created posts are hosted on a Parse database (run with MongoDB). We pull data from the National Oceanic and Atmospheric Administration's severe weather data inventory. We used a Particle Electron to collect atmospheric sensor data and used AWS to store this data as JSON.
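As an illustration of how the stored sensor JSON could be shaped into map points, here is a hedged sketch; the URL and field names are placeholders, not our actual schema.

```python
import requests

# Placeholder URL and field names; our actual AWS layout differs.
SENSOR_FEED = "https://example-bucket.s3.amazonaws.com/sensors.json"

def fetch_sensor_points():
    """Pull the stored atmospheric readings and shape them for the map."""
    readings = requests.get(SENSOR_FEED, timeout=10).json()
    return [
        {"lat": r["lat"], "lng": r["lng"],
         "label": f'{r["temperature_c"]} C, {r["humidity_pct"]}% humidity'}
        for r in readings
    ]
```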
## Challenges I ran into
We had some issues setting up an AWS instance to run our image processing on the backend, but other than that, decently smooth sailing.
## Accomplishments that I'm proud of
We were able to work really well together, using everyone's strengths as well as combining the pieces into a cohesive app that we're really proud of. Everyone's piece was integrated well into the app, and I think we demonstrated good software collaboration.
## What I learned
We learned how to effectively combine our code across various languages and databases, which is a big challenge in software as a whole. We learned Android Development, how to set up AWS, and how to create databases to store custom data. Most importantly, we learned to have fun!
## What's next for Eye in the Sky
In the future, we would like to add verification of people's self-reported hazards (through machine learning and through up-voting and down-voting)
We would also like to improve the efficiency of our app and reduce reliance on network because there might not be network, or very poor network, in a natural disaster
We would like to add more datasets from online, and to display storms with bigger bubbles, rather than just points
We would also like to attach our sensor to drones or automobiles to gather more weather data points to display on our map
|
## Inspiration
The whole idea was not only to devise a task management system for the maintenance of Fannie Mae's properties, but to exercise creativity and devise an interface that helps Fannie Mae make loss-minimizing decisions.
## What it does
I learned from Fannie Mae's engineers that one of their biggest problems was maintaining the properties and preventing theft of copper pipes and electrical appliances. FannieBae maps Fannie Mae's properties and classifies them according to thefts and statuses. This helps predict future thefts, as planned robbers follow a pattern and tend to commit crimes nearby.
It also helps give hints about the robbers: if a recently vacated property is robbed or vandalized, it strongly hints that the defaulter might be behind it.
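One way to turn the "crimes cluster nearby" observation into a score is a simple radius count; this sketch, with a hypothetical `theft_risk` helper, illustrates the idea rather than the app's actual logic.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def theft_risk(prop, recent_thefts, radius_km=3.0):
    """Count recent thefts within a radius of a vacant property."""
    return sum(
        1 for t in recent_thefts
        if haversine_km(prop["lat"], prop["lng"],
                        t["lat"], t["lng"]) <= radius_km
    )
```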
## How I built it
Back-end: Node.js
Data: JSON
Front End: Javascript, HTML, CSS.
## Challenges I ran into
Getting the back-end in sync with the front end. I had zero past experience with Node.js, and it works completely opposite to the technologies I was used to working with.
## Accomplishments that I'm proud of
Learned Node.js in 24 hours :)
## What I learned
Node.js, the command line, and solving a problem from top to bottom, i.e. product design to development.
## What's next for FannieBae
Include the Twilio API to help the desk analyst communicate directly with the onsite inspector over the internet.
Include the Clarifai API to parse images and help predict upcoming damage to the property, such as mould on walls, termite infestation, etc.
|
winning
|
## Inspiration
Biking every day can reduce carbon emissions by **85%**. However, as students, we face many hurdles in using sustainable transportation. There are several roadblocks associated with biking, especially the cost of purchasing and maintaining a bike.
Some cities, such as Toronto, have implemented solutions to this problem: Toronto has a bicycle-sharing system with over 6,850 government-owned bicycles and 625 stations. However, implementing such a system in smaller cities such as Waterloo would require an immense amount of time, infrastructure, and resources, which would be unfeasible.
This motivated our team to build ReCycle, an app which connects interested bike owners to riders who want to rent a bike.
## What it does
ReCycle is an app which connects interested bike owners to riders who want to rent a bike. Bike owners can list their bike for rent for a desired time limit and hourly rate whereas riders can view the listings near them to find the bike that best suits their needs.
Our app is a win-win for both owners and riders, allowing owners to generate over $750 a month and providing riders with access to more convenient, readily accessible transportation.
## How we built it
For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. We used the Google Maps API to generate a map with markers, directions, and other functionality.
The backend was built using the Flask framework, with a CockroachDB database to store and access user-specific and bike-specific information.
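A minimal sketch of a backend endpoint in this style is below, assuming psycopg2 (CockroachDB speaks the PostgreSQL wire protocol); the table, columns, and connection string are placeholders, not our actual schema.

```python
import psycopg2  # CockroachDB speaks the PostgreSQL wire protocol
from flask import Flask, jsonify

app = Flask(__name__)
# Placeholder connection string; a real cluster DSN goes here.
conn = psycopg2.connect("postgresql://user:pass@host:26257/recycle")

@app.route("/bikes")
def bikes():
    """List available bikes with their hourly rate and location."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, hourly_rate, lat, lng FROM bikes WHERE available")
        rows = cur.fetchall()
    return jsonify([
        {"id": r[0], "hourly_rate": float(r[1]), "lat": r[2], "lng": r[3]}
        for r in rows
    ])
```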
The product also comes with a hardware component to alert cyclists when they need to return a bike. It was built using an Arduino Uno and an active buzzer.
## Challenges we ran into
None of us had ever used the Google Maps API before, especially not with React. We ran into a number of dependency conflicts and other issues which took a considerable amount of time to debug. We had also never worked with CockroachDB before.
## Accomplishments that we're proud of
We're proud of learning about new technologies from Hack the North's sponsors such as CockroachDB. We’re also proud of being able to integrate multiple components of a large project including the frontend, database, and hardware.
## What's next for ReCycle
We plan to expand our app to support mobile platforms as well. We hope to support our users further by giving information about bike trails and allowing in-app transactions.
|
## Inspiration
The world is experiencing one of the largest climate shifts in its history. We noticed that many people want to be more environmentally friendly but do not know where to start. We also noticed that many people do not know where to dispose of their items: most items end up in the wrong bin, which has huge consequences for waste stream treatment. To solve this problem, we wanted to create an app that tells users where their trash should go. Overall, we hope that this app will encourage more environmentally sustainable practices that benefit municipal waste management systems, saving billions.
## What it does
Re:Cycle is an app that utilizes AI to categorize waste products. Users can choose between taking a photo with the in-app camera, choosing an image from their gallery, or searching on the app database for a specific item or product. The app will then return with a suggestion for where to dispose of the product.
## How we built it
Our team was divided into two groups: front-end and back-end developers. Front-end developers started with wire-framing and prototyping using Adobe XD, then implemented the design using Flutter. Back-end developers utilized Google Cloud's Vision API for image identification/classification and NodeJS for the image server.
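Downstream of the Vision API, the classification has to map to a disposal suggestion; here is a hedged sketch with a hypothetical label-to-bin table, not our trained model's actual categories.

```python
# Hypothetical mapping from Vision API labels to disposal bins.
BIN_FOR_LABEL = {
    "plastic bottle": "recycling",
    "tin can": "recycling",
    "banana peel": "compost",
    "styrofoam": "garbage",
}

def suggest_bin(labels):
    """Return the first matching bin, else fall back to manual search."""
    for label in labels:
        bin_name = BIN_FOR_LABEL.get(label.lower())
        if bin_name:
            return bin_name
    return "unknown - try the in-app database search"
```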
## Challenges we ran into
Initially, our biggest problem was getting started. We had a great idea and many proposed solutions, but we were limited by our group's capabilities in making a feasible app. A large chunk of time was dedicated to training Google Cloud's Vision API to return accurate identifiers for given images. Another issue we struggled with was that no one was familiar with either Flutter as a framework or Dart as a language.
## Accomplishments that we're proud of
We are proud of how far we got with Flutter. It was something that none of us were even remotely familiar with, but we eventually got something more than the yellow and black error screen. We are also happy that we were able to implement Google’s Vision API. The model was trained with over 2,500 images to classify the different waste items.
## What we learned
We learned that having a good team and a great idea makes it easy to have a structured plan. For some of us, this was our first hackathon. For others, this was one of the first times we ever tried creating an app. The process has opened our eyes to the complexities that arise in app development.
## What's next for Re:Cycle
We need to continue building out our user interface—that includes having a built-in database so that users can search for items in case of AI identification failure. We also need to train the AI to identify more than just a single object in an image. Finally, having thorough user recommendations for item disposal is necessary, particularly for common, difficult to recycle items like makeup or electronics. We are open to feedback as well! Please comment below any suggestions for improvement.
|
## Inspiration
In the theme of sustainability, we noticed that a lot of people don't know what's recyclable. Some people recycle what shouldn't be recycled, and many people recycle much less than they could. We wanted to find a way to improve recycling habits while also incentivizing people to recycle more. Cyke, pronounced "psych" (psyched about recycling), was the result.
## What it does
Cyke is a platform to get users in touch with local recycling facilities, to give recycling facilities more publicity, and to reward users for their good actions.
**For the user:** When a user creates an account for Cyke, their location is used to tell them which materials can be recycled and which can't. Users are given a Cyke Card which shows their rank. When a user recycles, the amount they recycled is measured and reported to Cyke, which stores that data in our CockroachDB database. Then, based on revenue share from recycling plants, users are monetarily rewarded. The higher the person's rank, the more they receive for what they recycle. There are four ranks, ranging from "Learning" to "Superstar."
**For Recycling Companies:** For a recycling company to be listed on our website, they must agree to a revenue share corresponding to the amount of material recycled (can be discussed). This would be in return for guiding customers towards them and increasing traffic and recycling quality. Cyke provides companies with an overview of how well recycling is going: statistics over the past month or more, top individual contributors to their recycling plant, and an impact score relating to how much social good they've done by distributing money to users and charities. Individual staff members can also be invited to the Cyke page to view these statistics and other more detailed information.
## How we built it
Our site uses a **Node.JS** back-end, with **ejs** for the server-side rendering of pages. The backend connects to **CockroachDB** to store user and company information, recycling transactions, and a list of charities and how much has been donated to each.
## Challenges we ran into
We ran into challenges mostly with CockroachDB. One of us was able to successfully create a cluster and connect to it via the macOS terminal; however, when it came to connecting it to our front end, there were a lot of issues with getting the right packages for the Linux CLI as well as with connecting via our connection string. We spent quite a few hours on this, as using CockroachDB serverless was an essential part of hosting info about our recyclers, recycling companies, transactions, and charities.
## Accomplishments that we're proud of
We’re proud of getting CockroachDB to function properly. For two of the three members on the team, this was our first time using a Node.js back end, so it was difficult and rewarding to complete. On top of being proud of getting our SQL database off the ground, we’re proud of our design; we worked a lot on the colors. We are also proud of using the serverless form of CockroachDB, so our compute cluster is hosted on Google Cloud Platform (GCP).
## What we've learned
Through some of our greatest challenges came some of our greatest learning advances. Toiling with CockroachDB and SQL tables, which none of us had experience with before, taught us a lot about environment variables and how to use Express and the pg driver to connect front-end and back-end elements.
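For illustration, here is a minimal sketch of that connection pattern in Python with psycopg2 (our actual stack used Node's pg driver, and the table and column names below are hypothetical): keep the serverless connection string in an environment variable and let the driver negotiate SSL.

```python
import os
import psycopg2  # pip install psycopg2-binary

# DATABASE_URL holds a CockroachDB serverless connection string, e.g.
# postgresql://user:pass@<host>:26257/defaultdb?sslmode=verify-full
conn = psycopg2.connect(os.environ["DATABASE_URL"])

with conn.cursor() as cur:
    # Hypothetical schema: log how much a user recycled
    cur.execute(
        "INSERT INTO transactions (user_id, kilograms) VALUES (%s, %s)",
        (42, 3.5),
    )
conn.commit()
conn.close()
```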
## What's next for Cyke
To scale our solution, the next steps involve increasing the personalization aspects of our application. For users, that means adding capabilities that highlight local charities to donate to, along with locale-based recycling information. On the company side, there are optimizations to be made around the information that we provide them, improving the impact score to consider more factors, like how consistent their users are.
|
losing
|
This app helps you decide what you want based on the way you're feeling that day. Just tap on the mood and see where it takes you! The source code is only 3 KB!
|
## Inspiration
According to the American Psychological Association, one in three college freshmen worldwide suffer from mental health disorders. As freshmen, this issue is close to our hearts as we witness some of our peers struggle with adjusting to college life. We hope to help people understand how their daily activities influence their emotional welfare, and provide them with a safe space to express themselves.
## What it does
daybook is a secret weapon for students and others to be a bit happier every day. It's a **web-based journal** that gives people a space to reflect, automatically generating insights about mood trends and what things make people happiest. Our goal is to teach people how to best understand themselves and their happiness.
We don't force our users to go through awkward data entry or to be their own psychiatrist. Instead, daybook lets users write a few sentences for just a minute a day. Behind the scenes, daybook does all the work - using Google's Natural Language API to **automatically find the mood for each day and what events, people, and places in our users' lives make them happiest**. daybook then gives the power back to our users with a **happiness summary** of mood, sleep, and a list of things that make them happiest, letting our users discover happy things they might not have even thought about.
## How we built it
daybook was built with Google Cloud's Natural Language API to automatically rate activities on a scale of how good the user feels when carrying them out, while extracting and categorizing them.
We used a variety of technologies for our hack, ranging from:
Frontend: Vue.js, CSS3, HTML5 hosted on Firebase (also using Firebase Auth)
Backend: Flask (Python, SQLite3) app running on Google Compute Engine servers
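As a rough sketch (not our exact code), the backend's calls to the Natural Language API look something like this with the google-cloud-language client; credentials are assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def analyze_entry(text: str):
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    # Overall mood for the day: score in [-1, 1]
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    # Events, people, and places mentioned in the entry
    entities = client.analyze_entities(request={"document": document}).entities
    return sentiment.score, [(e.name, e.salience) for e in entities]
```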
## Challenges we ran into
* Coming up with a meaningful and viable idea
* Determining which platform to use to host our database
* How to get the entities we wanted from Google's Natural Language API
* Setting up all of our servers - we host our landing page, main site, API, and CDN in different places
## Accomplishments that we're proud of
* Templates in Vue
* Using Google's Natural Language Processing libraries
* Developing a fully functional web application that is responsive, even for mobile
* Rolling our own CSS framework - minimalism was key
* Extensive use of Google Cloud - Firebase, Compute Engine, NLP
* Git best practices - all feature changes were made on separate branches and pull requested
* It's the first hackathon for 3 of us!
* DARK MODE WORKS
* Of course, our cool domain name: daybook.space, my.daybook.space
## What we learned
A whole lot of stuff. Three of us came in completely new to hackathons, so daybook was an opportunity to learn:
Creating databases and managing data using Google's Cloud Firestore; sentiment analysis using Google's Natural Language API; writing an API using Flask; handling GET and POST requests to facilitate communication between our web application and database.
## What's next for daybook
* iOS/Android App with notifications!
* Better authentication options
* More detailed analysis
* Getting our first users - ourselves
|
## Inspiration
Have you ever wanted to listen to music based on how you’re feeling? Now, all you need to do is message MoodyBot a picture of yourself or text your mood, and you can listen to the Spotify playlist MoodyBot provides. Whether you’re feeling sad, happy, or frustrated, MoodyBot can help you find music that suits your mood!
## What it does
MoodyBot is a Cisco Spark Bot linked with Microsoft’s Emotion API and Spotify’s Web API that can detect your mood from a picture or a text. All you have to do is click the Spotify playlist link that MoodyBot sends back.
## How we built it
Using Cisco Spark, we created a chatbot that takes in portraits and gives the user an optimal playlist based on his or her mood. The chatbot itself was implemented on built.io, which controls feeding image data through Microsoft's Emotion API. Microsoft's API outputs into a small Node.js server to compensate for the limited features of built.io, like its limitations when importing modules. From the external server, we use the moods classified by Microsoft's API to select a Spotify playlist using Spotify's Web API, which is then sent back to the user on Cisco Spark.
## Challenges we ran into
Spotify’s Web API requires a new access token every hour; in the end, we were not able to find a solution to this problem during the event. Our inexperience with Node.js also led to problems with concurrency, and built.io's limited APIs also hindered our project.
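For reference, the standard fix (which we learned of later) is to cache the token and re-request it shortly before it expires. A hedged sketch of that client-credentials pattern, shown in Python for illustration since our server was Node.js; the client ID and secret are placeholders:

```python
import base64
import time

import requests

CLIENT_ID = "your-client-id"          # placeholder from the Spotify dashboard
CLIENT_SECRET = "your-client-secret"  # placeholder

_token, _expires_at = None, 0.0

def get_token() -> str:
    """Return a valid access token, refreshing it a minute before expiry."""
    global _token, _expires_at
    if _token and time.time() < _expires_at - 60:
        return _token
    auth = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    resp = requests.post(
        "https://accounts.spotify.com/api/token",
        headers={"Authorization": f"Basic {auth}"},
        data={"grant_type": "client_credentials"},
    )
    payload = resp.json()
    _token = payload["access_token"]
    _expires_at = time.time() + payload["expires_in"]  # typically 3600 seconds
    return _token
```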
## Accomplishments that we're proud of
We were able to code around the fact that built.io would not encode our images correctly. built.io also could not support the other solutions to this problem that we tried to use.
## What we learned
Sometimes, the shortcut is more work, or it won't work at all. Writing the code ourselves solved all the problems we were having with built.io.
## What's next for MoodyBot
MoodyBot has the potential to have its own app and automatically open the Spotify playlist it suggests. It could also connect over Bluetooth to a speaker.
|
losing
|
## Inspiration
The inspiration for this project was a group-wide understanding that trying to scroll through a feed while your hands are dirty or in use is near impossible. We wanted to create a computer program to allow us to scroll through windows without coming into contact with the computer, for eating, chores, or any other time when you do not want to touch your computer. This idea evolved into moving the cursor around the screen and interacting with a computer window hands-free, making boring tasks, such as chores, more interesting and fun.
## What it does
HandsFree allows users to control their computer without touching it. By tilting their head, moving their nose, or opening their mouth, the user can control scrolling, clicking, and cursor movement. This allows users to use their device while doing other things with their hands, such as doing chores around the house. Because HandsFree gives users complete **touchless** control, they’re able to scroll through social media, like posts, and do other tasks on their device, even when their hands are full.
## How we built it
We used a DLib face feature tracking model to compare some parts of the face with others when the face moves around.
To determine whether the user was staring at the screen, we compared the distance from the edge of the left eye to the left edge of the face against the distance from the edge of the right eye to the right edge of the face. We noticed that one of the distances was noticeably bigger than the other when the user had a tilted head. Once the distance on one side was larger by a certain amount, the scroll feature was disabled, and the user would get a message saying "not looking at camera."
To determine which way and when to scroll the page, we compared the left edge of the face with the face's right edge. When the right edge was significantly higher than the left edge, then the page would scroll up. When the left edge was significantly higher than the right edge, the page would scroll down. If both edges had around the same Y coordinate, the page wouldn't scroll at all.
To determine the cursor movement, we tracked the tip of the nose. We created an adjustable bounding box in the center of the user's face (based on the average values of the edges of the face). Whenever the nose left the box, the cursor would move at a constant speed in the direction of the nose's position relative to the center.
To determine a click, we compared the top lip Y coordinate to the bottom lip Y coordinate. Whenever they moved apart by a certain distance, a click was activated.
To reset the program, the user can look away from the camera so that the program can no longer track a face. This will reset the cursor to the middle of the screen.
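A condensed sketch of these landmark comparisons, using dlib's 68-point model (the thresholds here are illustrative placeholders; ours are user-adjustable):

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Requires the standard 68-landmark model file, downloaded separately
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

TILT_THRESH = 12         # pixels; placeholder value
MOUTH_OPEN_THRESH = 18   # pixels; placeholder value

def read_gestures(gray_frame):
    """gray_frame is a grayscale image array (e.g. from cv2.cvtColor)."""
    faces = detector(gray_frame)
    if not faces:
        return None  # "not looking at camera": disable scroll, reset cursor
    pts = predictor(gray_frame, faces[0])
    left_edge, right_edge = pts.part(0), pts.part(16)  # jaw endpoints
    nose_tip = pts.part(30)
    top_lip, bottom_lip = pts.part(62), pts.part(66)   # inner lip points
    return {
        # Smaller y means higher on screen, so right edge higher => scroll up
        "scroll": "up" if left_edge.y - right_edge.y > TILT_THRESH
                  else "down" if right_edge.y - left_edge.y > TILT_THRESH
                  else None,
        "click": (bottom_lip.y - top_lip.y) > MOUTH_OPEN_THRESH,
        "nose": (nose_tip.x, nose_tip.y),  # drives cursor movement
    }
```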
For the GUI, we used the Tkinter module, an interface to the Tk GUI toolkit in Python, to generate the application's front-end interface. The tutorial site was built using simple HTML & CSS.
## Challenges we ran into
We ran into several problems while working on this project: developing a system for judging whether a face has changed enough to move the cursor or scroll the screen, calibrating the system and movements for different faces, and users not being able to tell whether their faces were balanced. It took a lot of time looking into various mathematical relationships between the different points of someone's face. Next, to handle the calibration, we ran large numbers of tests using different faces, distances from the screen, and angles of the face to the screen. To counter the last challenge, we added a box feature to the window displaying the user's face, to visualize the distance they need to move to move the cursor. We used the calibration tests to come up with default values for this box, but we made customizable constants so users can set their boxes according to their preferences. Users can also customize the scroll speed and mouse movement speed to their own liking.
## Accomplishments that we're proud of
We are proud that we could create a finished product and expand on our idea *more* than what we had originally planned. Additionally, this project worked much better than expected and using it felt like a super power.
## What we learned
We learned how to use facial recognition libraries in Python, how they work, and how they’re implemented. For some of us, this was our first experience with OpenCV, so it was interesting to create something new on the spot. Additionally, we learned how to use many new python libraries, and some of us learned about Python class structures.
## What's next for HandsFree
The next step is getting this software on mobile. Of course, most users use social media on their phones, so porting this over to Android and iOS is the natural next step. This would reach a much wider audience, and allow for users to use this service across many different devices. Additionally, implementing this technology as a Chrome extension would make HandsFree more widely accessible.
|
## Inspiration
Over the course of the past year, one of the most heavily impacted industries due to the COVID-19 pandemic has been the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. It is projected that 36,000 small restaurants will not survive the winter; the restaurants that have survived thus far have relied on online dining services such as Grubhub or DoorDash. However, these services charge flat premiums on every sale, driving up food prices and cutting at least 20% from a given restaurant's revenue. Within these platforms, the most popular, established restaurants are prioritized by the built-in search algorithms. As such, not all small restaurants can join these otherwise expensive options, and there is no meaningful way for small restaurants to survive during COVID.
## What it does
Potluck provides a platform for chefs to conveniently advertise their services to customers who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck’s encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants which are generated using an algorithm that factors in the customer’s preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy.
## How I built it
We built a web app with Flask where users can feed in data for a specific location, cuisine of food, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked based on the distance to the given user location, calculated using the Google Maps API and the Natural Language Toolkit (NLTK), and on a sentiment score for each restaurant's comments, calculated using Google Cloud NLP. Within the page, consumers can provide comments on their dining experience with a certain restaurant, and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud.
## Challenges I ran into
One of the challenges that we faced was coming up with a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology, yet we wanted to provide a product that we felt was novel and meaningful.
We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK.
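As a minimal sketch of that hand-off (the route, helper, and template names are illustrative, not our exact code):

```python
from flask import Flask, render_template, request

app = Flask(__name__)

def rank_restaurants(location):
    # Hypothetical stand-in for our NLTK + Maps + sentiment ranking
    return [{"name": "Example Diner", "lat": 40.35, "lng": -74.66, "score": 0.8}]

@app.route("/search")
def search():
    ranked = rank_restaurants(request.args.get("location", ""))
    # `ranked` gets serialized into the rendered page, where the
    # Google Maps JavaScript API reads it to place markers
    return render_template("results.html", restaurants=ranked)
```

In `results.html`, a line like `var restaurants = {{ restaurants | tojson }};` then exposes the ranked list to the map script.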
## Accomplishments that I'm proud of
We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user-experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is.
Additionally, we were happy that we were able to tie in our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
## What I learned
Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including Google Maps JS API), and PostgreSQL.
For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other’s experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend.
## What's next for Potluck
First, we want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could be used for other ideas as well.
|
## Inspiration
The inspiration behind Go Desk was to take AI chatbots to the next level for SMEs and startups. We wanted to help businesses focus on growth and innovation rather than getting bogged down by repetitive customer support tasks. By automating support calls, we aim to give businesses more time to build and scale.
## What it does
Go Desk is a phone-based customer support AI agent that allows businesses to create intelligent agents for answering customer questions and performing specific tasks. These agents go beyond simple responses—they can cancel orders, book appointments, escalate cases, and update information, without requiring human intervention.
## How we built it
Go Desk was built on OpenAI's reliable APIs for conversation generation and intent comprehension. We integrated Twilio to handle phone calls programmatically, using speech-to-text for voice input processing. The backend was developed with Node.js and TypeScript, while the frontend was built with Vue.js.
## Challenges we ran into
We faced a few challenges, especially in figuring out the right idea to implement. Initially, we planned a hardware project, but due to lack of components and other issues, we decided to pivot to an AI-based solution. Without a designer on the team, we had to get creative with the UI, and while it was a bit hacky, we’re proud of the result!
## Accomplishments that we're proud of
This was our first project involving large language models (LLMs), and we pulled it off with almost no sleep in two days! We’re also proud of the fact that we managed to pivot the project successfully and deliver a fully functional AI-powered solution.
## What's next for Go Desk
We plan to iterate on the platform, refine its features, and validate the idea by testing it with real customers. Our goal is to keep improving based on feedback and make Go Desk a go-to tool for businesses needing advanced AI-powered customer support.
|
winning
|
## Inspiration
Nowadays it is essential for people to have authentication in any application. Also, people have a hard time calculating their daily income and expenditure. An application without authentication is equivalent to a mansion without a door.
## What it does
The app is a tracker application where you can manually add your income and expenditure and have it in your pocket anytime you need it. It also comes with Auth0's authentication system, which lets users do their tasks in a protected environment.
## How we built it
We used the React framework along with HTML and CSS to build the application, and used the SDK from Auth0 to complete the authentication part.
## Challenges we ran into
Integrating the authentication was a real challenge. Getting the React application ready was a tricky part too.
## Accomplishments that we're proud of
We are proud to have completed the technology demonstration within the allocated time.
## What we learned
We learned teamwork and time management.
## What's next for Secured Income and Expense Tracker
We will try to automate the manual addition step by connecting with users' respective banks, which will help many users.
|
## Why this Project?
I wanted to create an application for small businesses, something I have never done before. Moreover, I love exploring different fields within technology such as FinTech and believed this would be a good in-depth exploration.
## What it does
It connects to the QuickBooks Online API, pulls down customers' associated information, stores that information in a Firebase database, and validates a user's input against it. Upon successful verification of the user as a customer in the database, it conducts risk analysis using the Pitney Bowes API.
## How I built it
I built it using Node.js, ES6, HTML5/CSS3, the QuickBooks Online API, Firebase, and the Pitney Bowes Identify Risk API.
## Challenges I ran into
I frequently ran into challenges while going through OAuth for both APIs and while restructuring the JSON responses to best fit the validation functions and risk analysis.
## Accomplishments that I'm proud of
I worked alone on this project so it was quite a feat to stay awake and push through the (countless) challenges.
## What I learned
I learned far too much, technically and otherwise. I learned how to integrate the two aforementioned APIs, and applied knowledge of economics to best understand how to analyze users and how to prioritize what information to validate.
## What's next for iBizCenter
|
## Inspiration
Our inspiration for this project was our desire to travel after COVID, while being unsure how much we would have to save per month in order to travel to the destination of our choice.
## What it does
Our app allows users to budget their expenses and shows how much they are spending. It also allows users to add goal items, and our app calculates how much they will have to save per month in order to reach their goal by the desired timeline. Users can view these goals on the view page. The app also features a user login and tracks data specific to the user's account.
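For illustration, the goal math reduces to a simple division, or to the standard future-value-of-an-annuity payment formula if interest is considered (a Python sketch; our app itself is written in JavaScript):

```python
def monthly_saving(goal: float, months: int, annual_rate: float = 0.0) -> float:
    """How much to save per month to reach `goal` in `months`."""
    if annual_rate == 0:
        return goal / months
    r = annual_rate / 12  # monthly interest rate
    return goal * r / ((1 + r) ** months - 1)

print(monthly_saving(3000, 12))         # 250.0 per month, no interest
print(monthly_saving(3000, 12, 0.05))   # slightly less with 5% annual interest
```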
## How I built it
We built the whole app using React and used Firebase Auth/Firebase DB to store and authenticate our users' data.
## Challenges I ran into
One of the biggest challenges we ran into was reading data from the Firebase DB and displaying it.
## Accomplishments that I'm proud of
We are proud of the fact that we can present a product and that we managed to sleep at a decent hour.
## What I learned
We learned many things about the inner workings of React and how React Context works. In addition, we learned how to use Firebase DB and Firebase Auth.
## What's next for planSmart
We plan to refine the UI of planSmart and continue to work on the inner functionality of the view page. We also want to add an investment API that automatically saves the user's money each period according to their savings goals and calculated annuity.
|
partial
|
## Inspiration
It may have been the last day before an important exam, the first day at your job, or the start of your ambitious journey of learning a new language: you were frustrated at the lack of engaging programming tutorials. It was impossible to get the basics down, or to stay focused, while struggling to navigate through different tutorials trying to find the perfect one to solve your problems.
Well, that's what led us to create Code Warriors. Code Warriors is a platform focused on encouraging the younger and older audience to learn how to code. Video games and programming are brought together to offer an engaging and fun way to learn how to code. Not only are you having fun, but you're constantly gaining new and meaningful skills!
## What it does
Code warriors provides a gaming website where you can hone your skills in all the coding languages it provides, all while levelling up your character and following the storyline! As you follow Asmodeus the Python into the jungle of Pythania to find the lost amulet, you get to develop your skills in python by solving puzzles that incorporate data types, if statements, for loops, operators, and more. Once you finish each mission/storyline, you unlock new items, characters, XP, and coins which can help buy new storylines/coding languages to learn! In conclusion, Code Warriors offers a fun time that will make you forget you were even coding in the first place!
## How we built it
We built code warriors by splitting our team into two to focus on two specific points of the project.
The first team was the UI/UX team, which was tasked with creating the design of the website in Figma. This was important as we needed a team that could make our thoughts come to life in a short time, and design them nicely to make the website aesthetically pleasing.
The second team was the frontend team, which was tasked with using react to create the final product, the website. They take what the UI/UX team has created, and add the logic and function behind it to serve as a real product. The UI/UX team shortly joined them after their initial task was completed, as their task takes less time to complete.
## Challenges we ran into
The main challenge we faced was learning how to code with React. All of us had either basic or no experience with the framework, so applying it to create Code Warriors was difficult. The main difficulties were organizing everything correctly, setting up React Router to link pages, and setting up the compiler.
## Accomplishments that we're proud of
The first accomplishment we are proud of is setting up the login page. It takes only registered usernames and passwords, and will not let you log in without them. We are also proud of the gamified look we gave the website, as it gives the impression that the user is playing a game. Lastly, we are proud of having the compiler embedded in the website, as it allows for a lot more user interaction and function.
## What we learned
We learnt a lot about React, Node, CSS, JavaScript, and Tailwind. A lot of the syntax was new to us, as well as the applications of a lot of formatting options, such as padding, margins, and more. We learnt how to integrate Tailwind with React, and how a lot of frontend programming works.
We also learnt how to efficiently split tasks as a team. We were lucky enough to see that our initial split up of the group into two teams worked, which is why we know that we can continue to use this strategy for future competitions, projects, and more.
## What's next for Code Warriors
What's next for Code Warriors is to add more lessons, integrate a full story behind the game, add more animations to give more of a game feel, and expand into different coding languages! The potential for Code Warriors is unlimited, and we can improve almost every aspect and expand the platform to provide a multitude of learning opportunities, all while offering an enjoyable experience.
## Important Info for the Figma Link
**When opening the link, go into the simulation and press z to fit screen and then go full screen to experience true user interaction**
|
## Inspiration
We wanted to focus on education for children and knew the power of AI resources for both text and image generation. One of our members noticed the lack of accessible picture books online and suggested that these stories could be generated instead. Our final idea was DreamWeaver.
## What it does
Our project uses a collection of machine learning resources to generate an entire children's book after being given any prompt.
## How we built it
Our website is built upon the React framework, using MaterialUI as a core design library.
## Challenges we ran into
Being able to generate relevant images that fit the theme and art style of the book was probably our biggest challenge and area to improve on. Trying to ensure a seamless flow and continuity in the generated imagery was a valuable learning experience and something we struggled with. Another difficulty we faced was working with unfamiliar APIs, as there were many times we struggled to understand how to access certain resources.
## Accomplishments that we're proud of
Our use of machine learning, especially Cohere's LLM, is our project's most impressive aspect, as we combine different models and prompts to produce an entire book for the user.
## What we learned
Our team worked extensively with AI APIs such as Cohere and Midjourney and learned a lot about how to use their APIs. Furthermore, we learned a lot about frontend development with React, as we spent a good amount of time designing the UI/UX of our web app.
## What's next for DreamWeaver
Having more continuity between book images would be the first thing to work on for our project, as sometimes different characters can be introduced by accident. We are also looking into adding support for multiple languages.
|
## Inspiration
As university students, we all understand that one of the biggest struggles of living by yourself is cooking food. We wanted to make it easy to find and craft new recipes, while still eating healthy.
## What it does
With MyRecipePal, we make it easy for you to find new recipes to try. By filtering using your preferences, allergies, and cooking time, it's ensured that you will always find the right recipe for you. MyRecipePal allows you to eat healthier, save time, and improve your cooking skills all at the same time.
## How we built it
We used React, JavaScript, ChakraUI, and Bootstrap for the frontend. For the backend, we used CockroachDB and Express.
## Challenges we ran into
Before hacking started, one of our teammates felt sick and had to go home. Without him, a lot of our planning and ideas had to be delayed and re-envisioned. He was a big part of keeping our team together and energized throughout the night, so we were all discouraged by his leaving.
## Accomplishments that we're proud of
Instead of giving up, we took the loss of a teammate as even more determination to win. We utilized our resources, such as Discord for communication, to help us continue moving forward. As a result, we still managed to go beyond our expectations and make considerable progress on our project.
## What we learned
Some of the skills we learned included using a new, serverless database, experimenting with a different CSS framework, and seeing how branches and pull requests can improve a project. Aside from tech-related skills, we’ve learned how to be flexible and push through challenges. It’s very rare that things will go according to plan; as long as you’re flexible and determined to do what it takes to move forward, you will be successful.
## What's next for MyRecipePal
In the future, we hope to add:
* Accounts/login system
* Forums for people to talk about recipes and health
* A tracker that keeps track of your meals, calorie intake, etc.
|
winning
|
## Inspiration
As the demand for developers increases worldwide, providing high quality learning resources is critical for beginners in coding. However, the majority of programming resources are written in English, which may introduce a potential language barrier for individuals that do not have English as a first language. Incode was designed to make programming resources more inclusive and accessible to the community of developers around the globe, whether they are beginners or seasoned professionals.
## How it works
Powered by OpenAI's GPT-3, Incode utilizes language translation in combination with code generation to provide programming material in just a few seconds. You can provide a command in any spoken language and our application will generate code in response while providing helpful comments and variable names in the same language that you prompted with!
### Example
We want to generate C code to solve the Dining Philosophers problem. Below is the C code generated in response to the following prompt in French:
Écrire un programme pour résoudre le problème des philosophes qui dînent.
*Response*
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5

sem_t fourchettes[N];

void *philosophe(void *num)
{
    int i = *(int *)num;
    int gauche = (i + N - 1) % N;
    int droite = (i + 1) % N;
    while (1)
    {
        sem_wait(&fourchettes[gauche]);
        sem_wait(&fourchettes[droite]);
        printf("Le philosophe %d mange\n", i);
        sleep(1);
        sem_post(&fourchettes[gauche]);
        sem_post(&fourchettes[droite]);
    }
}

int main()
{
    pthread_t tid[N];
    int nums[N];
    int i;
    for (i = 0; i < N; i++)
        sem_init(&fourchettes[i], 0, 1);
    for (i = 0; i < N; i++)
    {
        /* give each thread its own index to avoid a data race on i */
        nums[i] = i;
        pthread_create(&tid[i], NULL, philosophe, &nums[i]);
    }
    for (i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```
## Tech Stack
### Frontend
The Frontend was developed using React and Typescript.
### Backend
The Backend was developed using Flask and Python. In addition, the models that we used for language translation and code generation are from [OpenAI's GPT-3](https://beta.openai.com/docs/models/gpt-3). Finally, we deployed using Microsoft Azure.
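As a rough sketch of the generation step (the prompt wording and model name are assumptions for illustration; this uses the legacy openai-python client that predates the 1.0 interface):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: loaded from the environment in practice

def generate_code(request_text: str) -> str:
    completion = openai.Completion.create(
        model="text-davinci-003",  # placeholder model name
        prompt=(
            "Generate code for the following request, keeping comments and "
            "variable names in the language of the request:\n"
            f"{request_text}\n"
        ),
        max_tokens=512,
        temperature=0.2,
    )
    return completion.choices[0].text
```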
|
## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. We wanted to utilize advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, which communicates with a Flask backend that runs our Python scripts involving API calls and the like. We make API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, and to OpenAI's Whisper speech-to-text model, and use several modules for getting an mp4 from a YouTube link, text from a PDF, and so on.
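A minimal sketch of the page-ranking idea (legacy openai-python client; the model choice and helper names are for illustration only):

```python
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: set from the environment

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def top_pages(question, pages, k=3):
    page_vecs = embed(pages)  # one vector per "page"
    q = embed([question])[0]
    # ada-002 vectors are unit length, so a dot product is cosine similarity
    scores = page_vecs @ q
    best = np.argsort(scores)[::-1][:k]
    return [(pages[i], float(scores[i])) for i in best]
```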
## Challenges we ran into
We had problems getting the Flask backend to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected. Finally, since the latency of sending information back and forth between the frontend and backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 on the extracted information to formulate the answer, we believe we likely equal the state of the art in quickly analyzing text and answering complex questions about it, and the ease of use across many different file formats makes us proud that this project and website can be useful for so many people, so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness of it).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as LLMs get bigger they sometimes get much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is summarizing text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or to attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU running Whisper.
|
## Inspiration
Nowadays, we have been using **all** sorts of development tools for web development, from the simplest of HTML, to all sorts of high-level libraries, such as Bootstrap and React. However, what if we turned back time, and relived the *nostalgic*, good old times of programming in the 60s? A world where the programming language BASIC was prevalent. A world where coding on paper and on **office memo pads** were so popular. It is time, for you all to re-experience the programming of the **past**.
## What it does
It's a programming language compiler and runtime for the BASIC programming language. It allows users to write interactive programs for the web with the simple syntax and features of the BASIC language. Users can read our sample BASIC code to understand what's happening, and write their own programs to deploy on the web. We're transforming code from paper to the internet.
## How we built it
The major part of the code is written in TypeScript, which includes the parser, compiler, and runtime, designed by us from scratch. After we parse and resolve the code, we generate an intermediate representation. This abstract syntax tree is parsed by the runtime library, which generates HTML code.
Using GitHub actions and GitHub Pages, we are able to implement a CI/CD pipeline to deploy the webpage, which is **entirely** written in BASIC! We also have GitHub Dependabot scanning for npm vulnerabilities.
We use Webpack to bundle code into one HTML file for easy deployment.
## Challenges we ran into
Creating a compiler from scratch within the 36-hour time frame was no easy feat, as most of us did not have prior experience with compiler concepts or building a compiler. Constructing and deciding on the syntactical features was quite confusing since BASIC was such a foreign language to all of us. Parsing took us the longest time, due to the tedious procedure of processing strings and tokens, as well as understanding recursive descent parsing. Last but **definitely not least**, building the runtime library and constructing code samples caused us issues, as minor errors can be difficult to detect.
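For readers unfamiliar with the technique, here is a toy recursive-descent parser for arithmetic, shown in Python for brevity (the real compiler is written in TypeScript): each grammar rule becomes a function that consumes tokens and returns an AST node.

```python
def parse_expr(tokens):  # expr := term (('+'|'-') term)*
    node, tokens = parse_term(tokens)
    while tokens and tokens[0] in "+-":
        op, tokens = tokens[0], tokens[1:]
        right, tokens = parse_term(tokens)
        node = (op, node, right)
    return node, tokens

def parse_term(tokens):  # term := NUMBER | '(' expr ')'
    if tokens[0] == "(":
        node, tokens = parse_expr(tokens[1:])
        return node, tokens[1:]  # skip the closing ')'
    return int(tokens[0]), tokens[1:]

ast, _ = parse_expr(["1", "+", "(", "2", "+", "3", ")"])
print(ast)  # ('+', 1, ('+', 2, 3))
```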
## Accomplishments that we're proud of
We are very proud to have successfully "summoned" the **nostalgic** old times of programming and deployed all the syntactical features that we desired to create interactive features using just the BASIC language. We are delighted to come up with this innovative idea to fit with the theme **nostalgia**, and to retell the tales of programming.
## What we learned
We learned the basics of making a compiler and what is actually happening underneath the hood while compiling our code, through the *painstaking* process of writing compiler code and manually writing code samples as if we were the compiler.
## What's next for BASIC Web
This project can be integrated with a lot of the modern features that are popular today. One future direction is to merge this project with generative AI, where we feed AI models some of the syntactical features of the BASIC language so they can output code translated from modern programming languages. Moreover, this could be a revamp of Bootstrap and React for creating interactive and eye-catching web pages.
|
partial
|
## Inspiration
Our team members had grandparents who suffered from Type II diabetes. Because of the poor dietary choices they made in their daily lives, they had a difficult time controlling their glucose levels and suffered from severe health complications, which included kidney failure and heart attacks. After considering the problem, we realized that creating something easy to use would be key to helping people like them make better dietary choices.
## What it does
IntelliFridge recognizes food that is being taken out of the fridge and allows users to see its nutritional value before they decide to consume the item. This is especially helpful for Type II diabetics, who tend to be older people who are unlikely to make an effort to find out what is in their food before eating it. IntelliFridge captures an image of the food and runs an ML algorithm to determine what the food is, after which we pull nutritional data from a third-party service. The information is displayed on the LCD screen and a recommendation is given to the user. Users can then consider what they have and decide whether or not to eat it.
## How we built it
We used the NXP development kit with Android Things. We used several APIs and machine learning models to create the core functionality.
## Challenges we ran into
We had an extremely hard time getting Android Things onto our board. Initially we tried the Raspberry Pi, but realized that we were unable to connect to the venue Wi-Fi due to some restrictions. With the NXP board, it took us several tries to set up the Wi-Fi before we could start working on the image capture and recognition system and the LCD display.
## Accomplishments that we're proud of
It was cool how we were able to figure out how to use the hardware even though we had no experience whatsoever. Getting over the initial barrier was the hardest but most rewarding part.
## What we learned
None of us had experience with hardware prior to the hackathon, and only one of our team members was experienced in Android development, so all of us ended up learning a good deal about flashing images and working in Android Studio.
## What's next for IntelliFridge
We hope to implement a system that can recognize the faces of the users and give them recommendations accordingly. We also want to expand the functionality of our app to include predictive glucose monitoring.
## Domain submission
<http://youbreaderbelieveit.com>
|
## Inspiration
Assistive Tech was our assigned track; we had done it before and knew we could innovate with cool ideas.
## What it does
It adds a camera and sensors that instruct a pair of motors, which lightly pull the user in a direction that avoids a collision with an obstacle.
## How we built it
We used a camera pod for the stick, on which we mounted the camera and sensor. At the end of the cane we attached a chassis with the motors and controller.
## Challenges we ran into
We had never used a voice command system paired with a Raspberry Pi and an Arduino; combining all of that was a real challenge for us.
## Accomplishments that we're proud of
Physically completing the cane and also making it look pretty; many of our past projects have wires everywhere and some parts aren't properly mounted.
## What we learned
We learned to use Dialogflow and how to prototype in a foreign country where we didn't know where to buy parts, lol.
## What's next for CaneAssist
As usual, all our projects will most likely be fully completed at a later date, and hopefully this one gets to be a real product that can help people out.
|
## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
## Challenges we ran into
With all of us being completely new to cloud computing, it took us around 4 hours just to get our environments set up and start coding. Once they were set up, we were able to take advantage of the help available and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
|
partial
|
## Inspiration
Making learning fun for children is harder than ever. Mobile phones have desensitized them to videos and simple app games that intend to teach a concept.
We wanted to use Projection Mapping and Computer Vision to create an extremely engaging game that utilizes both the physical world, and the virtual. This basic game intends to prep them for natural disasters through an engaging manner.
We think a slightly more developed version would be effective in engaging class participation in places like school, or even museums and exhibitions, where projection-mapping tech is widely used.
## What it does
The camera scans for markers in the camera image, and then uses each marker's position and rotation to create shapes on the canvas. This canvas then undergoes an affine transformation and gets outputted by the projector as if it were an overlay on top of any object situated next to the markers. This means that moving the markers results in these shapes following the markers' positions.
## How the game works
When the game starts, Melvin the Martian needs to prepare for an earthquake. In order to do so, you need to build him a path to his First Aid Kit with your blocks (which you can physically move around, as they are attached to markers). After he gets his First Aid Kit, you need to build him a table to hide under before the earthquake approaches (again, using any physical objects attached to markers). After he hides, you win!
## How I built it
I began by trying to identify the markers, for which there was an already-implemented library that required extensive tuning to get working right. I then made the calibration process, which took three points from the initial, untransformed camera image and the actual locations of these three points on the projector screen. This automatically created a transformation matrix that I then applied to every graphic I rendered (e.g. the physical blocks). After this, I made the game, and used the positions of the markers to determine if certain events were satisfied, which decided whether the game would progress or wait until it received the correct input.
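The core of that calibration step can be sketched as follows (a simplified version using OpenCV's built-in helper; my actual code builds and applies the matrix by hand with NumPy, and the point values here are placeholders):

```python
import cv2
import numpy as np

# Three points located in the raw camera image...
cam_pts = np.float32([[112, 84], [510, 97], [123, 402]])
# ...and where those same points actually land on the projector screen
proj_pts = np.float32([[0, 0], [1280, 0], [0, 720]])

M = cv2.getAffineTransform(cam_pts, proj_pts)  # 2x3 affine matrix

def to_projector(x, y):
    """Map a marker position from camera coordinates to projector coordinates."""
    px, py = M @ np.array([x, y, 1.0])
    return int(px), int(py)
```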
## Challenges I ran into
It was very difficult to transform the camera's perspective (which was at a different frame of reference from the projector's) to the projector's perspective. Every camera image had undergone some varying scale, rotation and translation, which required me to create a calibration program that ran at the start of the program's launch.
## Accomplishments that I'm proud of
Instead of relying wholly on any library, I tried my best to directly manipulate the NumPy matrices in order to achieve the transformation effects referred to previously. I'm also happy that I was able to greatly speed up camera-projector frame calibration, which began taking around 5 minutes, and now takes about 15-20 seconds.
## What I learned
I learnt a great deal about affine transformations and how to decompose a transformation matrix into its scale, rotation and translation values. I also learnt the drawbacks of using more precise markers (e.g. AprilTags or ArUco tags) as opposed to something much simpler, like an HSV color & shape detector.
## What's next for Earthquake Education With Projection Mapping and CV
I want to automate the calibration process so it requires no user input (which is technically possible, but is prone to error and requires knowledge about the camera being used). I also want to get rid of the ArUco tags entirely, and instead use the edges of physical objects to manipulate the virtual world.
|
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that anyone can use to produce any tile-based NumPy game. Our first offering is *StickyJump*, a 2D platformer whose layout can be changed on the fly by the placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's contour recognition to find the borders of the projector image and the positions of human-placed sticky notes. We then use a matrix transformation scheme to ensure that the positions of the sticky notes align with the projector image, so that our character can appear as if he is standing on top of the sticky notes. We then have code for a simple platformer that uses the sticky notes as the platforms our character runs on, jumps on, and interacts with!
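A simplified sketch of the sticky-note detection loop (the HSV bounds are placeholder values; real ones depend on the note color and lighting):

```python
import cv2
import numpy as np

def find_sticky_notes(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Placeholder range for a yellow sticky note
    mask = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    notes = []
    for c in contours:
        if cv2.contourArea(c) > 500:    # ignore small specks
            x, y, w, h = cv2.boundingRect(c)
            notes.append((x, y, w, h))  # platform rectangles for the game
    return notes
```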
## How we built it
We split our team of four into two sections, one half that works on developing the OpenCV/Data Transfer part of the project and the other half who work on the game side of the project. It was truly a team effort.
## Challenges we ran into
The biggest challenge we ran into was that a lot of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project; luckily we had some very gracious mentors come out and help us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end-product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
|
## What is 'Titans'?
VR gaming shouldn't just be a lonely, single-player experience. We believe that we can elevate the VR experience by integrating multiplayer interactions.
We imagined a mixed VR/AR experience where a single VR player's playing field can be manipulated by 'Titans' -- AR players who can plan out the VR world by placing specially designed tiles -- blocking the VR player from reaching the goal tile.
## How we built it
We had three streams of development/design to complete our project: the design, the VR experience, and the AR experience.
For design, we used Adobe Illustrator and Blender to create the assets that were used in this project. We had to be careful that our tile designs were recognizable by both human and AR standards, as the tiles would be used by the AR players to lay our the environment the VR players would be placed in. Additionally, we pursued a low-poly art style with our 3D models, in order to reduce design time in building intricate models and to complement the retro/pixel-style of our eventual AR environment tiles.
For building the VR side of the project, we selected to build a Unity VR application targeting Windows and Mac with the Oculus Rift. One of our most notable achievements here is a custom terrain tessellation and generation engine that mimics several environmental biomes represented in our game as well as integrating a multiplayer service powered by Google Cloud Platform.
The AR side of the project uses Google's ARCore and Google Cloud Anchors API to seamlessly stream anchors (the tiles used in our game) to other devices playing in the same area.
## Challenges we ran into
Hardware issues were one of the biggest time-drains in this project. Setting up all the programs -- Unity and its libraries, Blender, etc. -- took up the initial hours following the brainstorming session. The biggest challenge was our Alienware MLH laptop resetting overnight. This was a frustrating moment for our team, as we were in the middle of testing our AR features, such as the compatibility of our environment tiles.
## Accomplishments that we're proud of
We're proud of the consistent effort and style that went into the game design, from the physical environment tiles to the 3D models, we tried our best to create a pleasant-to-look at game style. Our game world generation is something we're also quite proud of. The fact that we were able to develop an immersive world that we can explore via VR is quite surreal. Additionally, we were able to accomplish some form of AR experience where the phone recognizes the environment tiles.
## What we learned
All of our teammates learned something new: multiplayer in Unity, ARCore, Blender, etc. Most importantly, we learned about the various technical and planning challenges involved in AR/VR game development.
## What's next for Titans AR/VR
We hope to eventually connect the AR portion and VR portion of the project together the way we envisioned: with AR players manipulating the virtual world of the VR player.
|
winning
|
## Inspiration
Have you ever had to awkwardly explain to an older friend or relative what words like "lit", "fire", "cringe" and "poggers" mean? I know I have, far too many times. Sling serves as a bridge between the modern day lexicon and those whose heydays are now in the past (or who are just old at heart), allowing them to fully integrate into and embrace internet culture.
## What it does
Users are prompted to give a sentence or phrase as input to the website. The interface is intentionally left simple and easy to navigate, to help improve accessibility, especially for those who may not be well versed with the internet or technology. After giving this input, the user submits it and is given an evaluation of the meaning behind the input they gave, highlighting both the subject of the text as well as the general sentiment behind it.
## How we built it
Developing the backend of Sling required two NLP models, both using sentiment analysis. The first was used to build a training dataset of slang terminology for training the second. This was done by scraping over 60,000 words and definitions from Urban Dictionary. The highest-ranked definition was then used as the de facto definition upon which to conduct sentiment analysis. The sentiment analysis score of the definition was then used as the sentiment analysis score of the word, creating a thorough dataset of slang terminology. A second sentiment analysis model was then trained on this dataset.
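As a rough sketch of that scoring step - assuming NLTK's VADER as the off-the-shelf sentiment analyzer (the writeup doesn't name one) and the scraper's output as (word, top definition) pairs - the word-level labels could be derived like this:

```python
# Hypothetical sketch of the slang-labeling step; VADER and the input
# format are assumptions, not necessarily what Sling actually used.
from nltk.sentiment import SentimentIntensityAnalyzer  # nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()

def score_slang(entries):
    """Map each slang word to the sentiment of its top definition."""
    dataset = {}
    for word, top_definition in entries:
        # VADER's compound score in [-1, 1] stands in for the word itself.
        dataset[word] = analyzer.polarity_scores(top_definition)["compound"]
    return dataset

print(score_slang([("lit", "amazing, exciting, or excellent")]))
```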
The web application was a little easier to implement. It was built using Flask, HTML, CSS and JavaScript. Several functions written in Python make handling the user input a little easier, and spaCy's en\_core\_web\_sm model was used to identify the subject of a given piece of text.
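For illustration, the subject-extraction piece with en\_core\_web\_sm could look like this minimal sketch (the helper name is ours; real inputs may need more dependency-label handling):

```python
# Minimal subject extraction with spaCy; the helper is illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")  # python -m spacy download en_core_web_sm

def find_subject(text):
    """Return the first nominal subject in the parse, if any."""
    for token in nlp(text):
        if token.dep_ in ("nsubj", "nsubjpass"):
            return token.text
    return None

print(find_subject("That concert was absolutely lit"))  # -> "concert"
```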
## Challenges we ran into
The most significant challenge I ran into developing Sling was the collection of slang data to train the slang model on. While I have used Selenium for web scraping before, navigating through all the different pages to collect as many definitions as possible proved to be a hassle. Also, not all of the words and definitions included on Urban Dictionary are completely appropriate. I tried as best I could to remove these from the training data to ensure this project is SFW.
## Accomplishments that we're proud of
I'm proud of how I was able to develop the slang model. It was a unique challenge to not only collect the data but to also deploy it, so I'm happy I was able to work it out in the end.
## What's next for Sling
The models developed for use with Sling currently only work effectively with short, simple phrases. Longer inputs require more and more time to evaluate, hindering the user experience. More research is required to optimize the models to generate quicker outputs. While this is not a mission critical application of machine learning, it is still a net benefit to the user to give these outputs in a quick manner.
|
## Inspiration
We love cats and we love wizards! But above all, we love a cool game with an interesting concept. For our game, Wizard Cats, we wanted to explore the intersection between fighting games and drawing games.
## What it does
Players face off in a 1v1 duel, where the main gimmick is the spell-drawing feature. By drawing different symbols with their mouse, players can cast spells to do a variety of actions.
## How we built it
Wizard Cats was built using Phaser 3 and Firebase.
## Challenges we ran into
Learning and bugs.
## Accomplishments that we're proud of
We are proud of the cutesy presentation and everything we managed to finish. Although our final product is by no means purrfect, we managed to overcome quite a few hurdles and learn a ton along the way.
## What we learned
As a team with no experience using Phaser or Firebase before, not only did we learn how to utilize both of these technologies, but also how to use both simultaneously in an arcade-style real-time web game.
## What's next for Wizard Cats
In the future, Wizard Cats plans to implement more spells and unique drawing related features, such as drawing platforms. Additionally, Wizard Cats plans to continue to improve its user experience through improved visuals and animations.
|
## Inspiration
Being a student of the University of Waterloo, every other semester I have to attend interviews for Co-op positions. Although talking to people gets easier the more often you do it, I still feel slightly nervous during such face-to-face interactions. When I'm nervous, the fluency of my conversation isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application.
## What it does
InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations.
## How I built it
In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The MediaRecorder API was used to capture speech as an audio file, which later gets transcribed by the Watson API.
The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS.
## Challenges I ran into
"Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly.
## Accomplishments that I'm proud of
I am very proud of the entire application itself. Before coming to QHacks, I only knew how to do front-end web development. I didn't have any knowledge of back-end development or of using APIs. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also of using multiple APIs in one application successfully.
## What I learned
I learned a lot of things during this hackathon. I learned back-end programming, how to use APIs, and how to develop a coherent web application from scratch.
## What's next for InterPrep
I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project!
|
losing
|
## Introduction
LegalAId is an AI-powered legal assistant with a mission to organize the world's information and make it universally accessible. Our goal is to provide fast and efficient access to legal aid, ensuring that people have the information they need to understand and assert their rights.
## How it works
Users interact with an AI chatbot trained on legal information, built with Google's PaLM 2 LLM through Google Vertex AI, which assists with infrastructure and operations.
## Accomplishments that we're proud of
1. Training using Google Cloud PaLM
2. Connecting the backend to the frontend
3. Google Maps API with React
|
EcoVillage is a lifestyle tracking app for carbon emissions that tracks all aspects of your day. EcoVillage enables individuals to manage their impact on the environment in a fun, intuitive, and impactful way.
## Inspiration
We took the name EcoVillage from the ecovillage, a model of living that looks at sustainability in four dimensions: social, economic, environmental and cultural.
Our app applies these principles to life in urban communities, using gamification to incentivize positive change and environmental education. We want to make offsetting your carbon footprint seamless and unobtrusive, while giving you a visual representation of your ecological impact. Too often people are bombarded with environmental facts and figures that simply cannot be processed by the human brain. EcoVillage turns that strategy on its head by showing you a virtual village, and the impact your carbon emissions have on it.
## What it does
Not only does our application create awareness, but it also makes offsetting carbon emissions seamless through different channels (like donating resources to plant trees - payments supported securely using Square). It also encourages a positive community by letting you compete with your friends to see who has the lowest carbon footprint.
## How we built it
We built a two-part product. The first component is an application built on Flutter (a cross-platform framework to create mobile, web, and desktop applications with a single source code) to track an individual's carbon emissions created by daily commutes and travels. We use the Google Maps API along with a Flutter geolocation library to track coordinate displacement and calculate the distance in meters, which is then mapped to an estimate of the average carbon emissions produced per kilometer by vehicles. The second component is a Chrome extension that scrapes your online cart while shopping online to estimate carbon emissions based on how the products in your cart are produced and the packaging they come in. The extension is interactive and clear to read, and it integrates with our application to produce an aggregated carbon emission count.
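As a rough sketch of the distance-to-emissions step (in Python rather than Dart, and with an assumed average factor of ~192 g CO2/km for passenger cars, since the app's exact figure isn't stated):

```python
# Illustrative haversine distance plus a per-km emission factor.
from math import asin, cos, radians, sin, sqrt

GRAMS_CO2_PER_KM = 192  # assumed passenger-car average, not the app's value

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def trip_emissions_g(points):
    """Sum emissions over a list of (lat, lon) fixes from the geolocator."""
    meters = sum(haversine_m(*a, *b) for a, b in zip(points, points[1:]))
    return meters / 1000 * GRAMS_CO2_PER_KM
```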
## Challenges we ran into
Building the application in a language we had never used before (Dart) proved to have its own challenges. From object references to promise handling, we ran into a lot of programmatic issues while also facing the challenge of understanding a new language. In addition, for our Chrome extension, the dynamic DOM of many websites meant the web scraping had to be robust enough to handle those dynamic changes.
## Accomplishments that we're proud of
We all programmed and worked with technologies we had never been exposed to before: Flutter/Dart (object references and promise handling), Chrome extensions, and GCP - and even our designer created wireframes for an application for the first time! Translating this whole UI/UX process into code was a challenge in itself, so beyond the technical skills, our collaboration and teamwork was something we are extremely proud of.
## What we learned
We learned that having a clear long-term goal is good, but unattainable without clear, measurable, and independently doable tasks. We also learned how important and valuable it is as a team player to be flexible in the tasks you work on; having the ability to adapt and jump from task to task, across different technologies, based on the team's needs is essential.
## What's next for EcoVillage
What's next for us? A greener world! We want to encourage people to be aware of their carbon footprint, but awareness alone is not enough today. We let people take action, and having this product on the market will help them do that without getting in their way. We let them make a difference in a seamless, simple, and interactive way.
|
## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we want to target. Next, we discussed the main features the app needed for full functionality to serve these populations. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue’s Perspective
\* Progress bar, fancier rating system
\* Crowdfunding
A Finder’s Perspective
\* Filter Issues, badges/incentive system
A Fixer’s Perspective
\* Filter Issues off scores, Trending Issues
|
partial
|
## Inspiration
Ever felt a shock wondering where your monthly salary or pocket money went by the end of the month? When did you spend it? Where did you spend all of it? And why did you spend it? How do you save and avoid making that same mistake again?
There has been endless progress and technical advancement in how we deal with day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, and also in the financial instruments that are crucial to creating one's wealth, such as investing in stocks, bonds, etc. But all of these amazing tools cater to a very small demographic of people. 68% of the world population remains financially illiterate. Most schools do not discuss personal finance in their curriculum. To enable these high-end technologies to reach a larger audience, we need to work at the ground level and attack the fundamental blocks around finance in people's mindsets.
We want to use technology to elevate the world's consciousness around their personal finance.
## What it does
Where's My Money is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without risking real losses, so that you can make wiser decisions in real life.
It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's, gamified manner. You start as a person who earns $1000 monthly; as you complete each module, you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with some bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth.
Each quiz captures data on your financial outlook: does it lean more toward saving or more toward spending?
## How we built it
The project was not simple at all. Keeping in mind the various components of the app, we first created a fundamental architecture for how the app would function - shorturl.at/cdlxE
Then we took it to Figma where we brainstormed and completed design flows for our prototype -
Then we started working on the App-
**Frontend**
* React.
**Backend**
* Authentication: Auth0
* Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase
* Stock Price Changes: Based on real-time prices using a free-tier API (Alpha Vantage/Polygon); see the sketch below
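A minimal sketch of the free-tier price lookup, using Alpha Vantage's GLOBAL\_QUOTE endpoint (the key is a placeholder, and the app's actual polling logic is not shown here):

```python
# Hypothetical free-tier quote fetch against Alpha Vantage.
import requests

def latest_price(symbol, api_key):
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "GLOBAL_QUOTE", "symbol": symbol, "apikey": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return float(resp.json()["Global Quote"]["05. price"])

print(latest_price("AAPL", "YOUR_KEY"))
```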
## Challenges we ran into
The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.
## What we learned
We researched the state of financial literacy in people, which helped us make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.
## What's next for Where’s my money?
We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc.
Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
|
## Inspiration
As university students, emergency funds may not be on the top of our priority list however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount and goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user earns milestones or achievements for reaching certain sub-goals, with extra motivation if their emergency fund falls below the baseline amount they set. Users can also change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in such a short period of time was easily the greatest challenge we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was our first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience and so we have planned projections for the future of Spend2Save. These projections include but are not limited to, plans such as integration with actual bank accounts at RBC.
|
## Inspiration
There currently is no way for a normal person to invest in bitcoin or cryptocurrencies without taking on considerable risk.
Even the simplest solution, Coinbase, requires basic knowledge of the crypto market and what to buy. If you're someone who's interested in bitcoin and cryptocurrencies, and you care about your assets yet don't have the time to learn about crypto investing, it can be very challenging to invest in cryptocurrencies. There's no reason why those investments shouldn't be accessible.
Why not leave the buying and selling up to professionals who will manage your money for you? All you have to do is select a weekly deposit amount and a risk portfolio.
## What it does
We make investing in crypto as easy as Venmo. Just one click.
Slide the weekly investment amount to view your predicted earnings with a certain risk portfolio over any period from one month to ten years. Find an investment amount that you feel comfortable with. And just click "Get Started."
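The projection behind such a slider is the future value of a recurring deposit. A sketch, with purely illustrative risk-to-return numbers (the app's real figures aren't stated):

```python
# Future value of an ordinary annuity: d * ((1+r)^n - 1) / r.
RISK_ANNUAL_RETURN = {"low": 0.05, "medium": 0.15, "high": 0.40}  # assumed

def projected_value(weekly_deposit, weeks, risk="medium"):
    r = RISK_ANNUAL_RETURN[risk] / 52  # weekly compounding rate
    return weekly_deposit * ((1 + r) ** weeks - 1) / r

print(round(projected_value(25, 52 * 10), 2))  # $25/week over ten years
```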
We spent considerable time designing a very simple UI. Our onboarding process makes the app easy to understand and introduces users to the product well.
## What we (didn't) use
We decided to do all of our designing from scratch in order to fully optimize the design for our specific product. All of our designs were done on our own with NO libraries. We created a graph class to custom-make our graph, and we coded everything ourselves.
The only libraries we used were for the backend.
## Most Challenging
Our app is linked with Stripe payments and has a full Firebase backend. Setting up the Firebase backend to efficiently communicate with our front end was quite difficult.
We also spent A LOT of time creating a great UI, and more importantly, a great UX for the user. We made the app incredibly simple to use.
## What I learned
We thought submissions were at 10am not 9am. We should be more proactive about that.
|
winning
|
## Inspiration
While taking a Korean class last semester, we realized there was no easy way to learn numbers in other languages. The only real method was to use flashcards to remember each word, but this did not prepare learners for real-life examples where any combination of numbers was possible.
## What it does
This problem led to the creation of NumSage, an interactive web program that makes learning numbers more fun and effective! Learners challenge themselves to solve as many numbers as possible within a fixed time period, with both real-time and post-game feedback to help them improve.
## How we built it
This project was built using the Next.js framework, with Tailwind for styling and Colyseus for multiplayer!
## Challenges we ran into
We had to look for a multiplayer library and ultimately settled with [Colyseus](https://docs.colyseus.io/). It was tough to handle a shared session state from multiple clients simultaneously. However, our prior knowledge of networking concepts helped us set up a functional multiplayer experience.
## Accomplishments that we're proud of
This was the first time we had ever created a multiplayer project, and we are thrilled that we managed to get it working within the short timeframe of a hackathon.
## What we learned
## What's next for NumSage
We are excited to bring NumSage to new levels even beyond this hackathon. We aim to consult with language learners and teachers in our university for feedback to bring NumSage to an ever-increasing pool of languages. We also aim to implement audio functionality for NumSage, for learners to challenge themselves further by testing themselves with audio clips instead of looking at their target text. Finally, we also aim to work with language teachers to create custom learning content for NumSage, such that it truly becomes a one-stop shop for all a language learner's needs.
|
## Inspiration
Hiring an athletic trainer is expensive and logistically troublesome. But for our beloved friends who are insanely into playing sports, and our family members aspiring to improve their performance, perhaps playing as a starter on a varsity team or simply challenging themselves, there is simply no other way than hiring an hourly-rate athletic trainer. We want to say goodbye to the hassle and expense and provide you with a more personalized, professional, and holistic fitness journey.
## What it does
With Put Me In – the world's first personal coach on your arm, adaptable to multiple sports – you have professional guidance at your fingertips. Our smart sleeve, driven by cutting-edge machine learning, assesses your movements across sports like basketball and weightlifting, offering personalized insights for improvement. But it doesn't stop there – it also tracks your progress, ensuring safer and more effective workouts.
## Market sizing & demand
Every year, approximately $5 billion (TAM) is spent on athletic trainers by amateur and other non-professional athletes, at an average hourly rate of about $50-150. We've connected with 30 athletes at different levels, ranging from college varsity team members playing below NCAA D2 to amateur fitness athletes, and found that their willingness to pay for training is at most about 200 USD per month. Taking other necessary data into account, we estimate a Serviceable Available Market (SAM) of about $1 billion per year.
## How we built it
We stitched together a sleeve that places 3 IMU sensors on the bicep, forearm, and hand. One of the greatest challenges in tracking sports performance with sensors is that labelled data can be costly to acquire, so it is vital to find efficient algorithms to classify or analyze these types of movements and sensor data. We developed a dynamic time warping-based time series classification algorithm that is label-efficient, highly generalizable across sport modalities and different individuals, and lightweight enough to run on a Raspberry Pi that detaches from our wearable sleeve.
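For reference, the core of a DTW-based nearest-neighbor classifier can be written in a few lines; this is the textbook O(nm) dynamic program, not the team's label-efficient variant:

```python
# Classic DTW distance plus a nearest-neighbor classifier sketch.
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(sample, references):
    """references: list of (label, signal); returns the nearest label."""
    return min(references, key=lambda r: dtw_distance(sample, r[1]))[0]

refs = [("good_form", np.sin(np.linspace(0, 3, 50))),
        ("rushed", np.sin(np.linspace(0, 6, 50)))]
print(classify(np.sin(np.linspace(0, 3.2, 60)), refs))
```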
To get all this relevant data and these insights to our users, we built a web app in Reflex. When you put on the sleeve and begin shooting, the data from the sensors gets transferred to our web app. There, we display a visualization of your practice movement (shot or weightlift). This visualization is rendered via custom kinematics-based positional updating that tracks the relative positioning of the sensors from the accelerometer and gyro axes. Simultaneously, based on our machine learning motion analysis and DTW-based classification of your athletic form, we display feedback on how your form could be improved. The web app also allows users to log into their accounts for an existing wearable such as a Garmin watch and include that supplementary data. Finally, the web app includes a viewer for video tutorials from YouTube trainers (which we envision could one day be a suite of "Put Me In" personal trainers and their video lessons).
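As a toy version of the positional-updating idea - straight dead-reckoning by integrating acceleration twice per timestep; the real pipeline also folds in the gyro axes and drift handling, which this omits:

```python
# Hedged dead-reckoning sketch for one IMU sensor's relative path.
import numpy as np

def integrate_position(accel_samples, dt):
    """accel_samples: (N, 3) array of m/s^2 readings at interval dt."""
    velocity = np.zeros(3)
    position = np.zeros(3)
    trail = []
    for a in accel_samples:
        velocity += a * dt          # v += a * dt
        position += velocity * dt   # p += v * dt
        trail.append(position.copy())
    return np.array(trail)
```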
## What's next for Put Me In, Coach
Our vision for what's coming next? To democratize elite fitness technology for everyone. We want to expand our model to a much greater variety of sports, from arm-involved sports to lower-limb-focused programs such as football. Through an affordable monthly subscription, equivalent to just a few hours with a traditional trainer, Put Me In brings professional coaching within reach. And with Put Me In, training becomes not only professional but enjoyable too. Don't wait - elevate your game with Put Me In today.
|
## Inspiration
Inspired by the learning incentives offered by Duolingo, and an idea from a real customer (Shray's 9 year old cousin), we wanted to **elevate the learning experience by integrating modern technologies**, incentivizing students to learn better and teaching them about different school subjects, AI, and NFTs simultaneously.
## What it does
It is an educational app, offering two views, Student and Teacher. On Student view, compete with others in your class through a leaderboard by solving questions correctly and earning points. If you get questions wrong, you have the chance to get feedback from Together.ai's Mistral model. Use your points to redeem cool NFT characters and show them off to your peers/classmates in your profile collection!
For Teachers, manage students and classes and see how each student is doing.
## How we built it
Built using TypeScript, React Native and Expo, it is a quickly deployable mobile app. We also used Together.ai for our AI-generated hints and feedback, and CrossMint for verifiable credentials and for managing transactions with Stable Diffusion-generated NFTs.
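A hedged sketch of the hint-generation call: Together's API is OpenAI-compatible, so in Python it could look like the following (the model id, prompt, and key are illustrative, not necessarily what the app ships with):

```python
# Illustrative hint request against Together's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="https://api.together.xyz/v1", api_key="YOUR_TOGETHER_KEY")

def hint_for_wrong_answer(question, wrong_answer):
    resp = client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed model id
        messages=[
            {"role": "system",
             "content": "You are a friendly tutor. Give a short hint, not the answer."},
            {"role": "user",
             "content": f"Question: {question}\nStudent answered: {wrong_answer}"},
        ],
        max_tokens=120,
    )
    return resp.choices[0].message.content
```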
## Challenges we ran into
We had some trouble deciding which AI models to use, but settled on Together.ai's API calls for their ease of use and flexibility. Initially, we wanted AI-generated questions, but understandably these had some errors, so we decided to use AI to provide hints and feedback when a student gets a question wrong. Using CrossMint and creating our Stable Diffusion NFT marketplace was also challenging, but we are proud of how we successfully incorporated it and allowed each student to manage their wallets and collections in a fun and engaging way.
## Accomplishments that we're proud of
Using Together.ai and CrossMint for the first time, and implementing numerous features, such as a robust AI helper to help with any missed questions, and allowing users to buy and collect NFTs directly on the app.
## What we learned
Learned a lot about NFTs, Stable Diffusion, how to efficiently prompt AIs, and how to incorporate all of this into an Expo React Native app.
Also met a lot of cool people and sponsors at this event and loved our time at TreeHacks!
## What's next for MindMint: Empowering Education with AI & NFTs
Our priority is to incorporate a spaced repetition-styled learning algorithm, similar to what Anki does, to tailor the learning curves of various students and help them understand difficult and challenging concepts efficiently.
In the future, we want to add more subjects and grade levels, and allow teachers to input questions for students to solve. Another interesting idea we had was to create a mini real-time interactive game for students to play among themselves, so they can encourage each other to learn.
|
losing
|
# GREENTRaiL
## Inspiration
Hiking has exploded in popularity since the pandemic, with more than 80 million Americans hiking in 2022 alone. There are many mental and physical health benefits to hiking; however, it can be daunting to select routes as a beginner. It is difficult to imagine how a route will feel before going on it, especially for those without past experience.
In addition, hikers often don't take wildlife into account when choosing routes. Animals such as elk have been shown to change behavior up to 1 mile away from hiking trails, and this has far-reaching implications for the greater biosphere. With climate change threatening traditional migration paths, increased human activity can be detrimental to these already fragile patterns.
GREENTRaiL is an app that will give users personalized recommendations and help make hiking more eco-friendly.
## What it does
Using biometric and environmental data, GREENTRaiL recommends users hiking trails based on average statistics of others who have completed the hike and synthesizes difficulty ratings. It will also use migratory and wildlife data to suggest less obtrusive hikes to local migratory patterns.
## How we built it
UI/UX prototyping was sketched traditionally first, then brought into Procreate to develop the final color and brand identity. High-fidelity wireframing was done in Figma, and the final UI/UX was refined from those prototypes.
GREENTRaiL was coded using Swift and integrates terraAPI to get wearable data and aggregate data from all the people who have previously taken a trail.
## Challenges we ran into
All of us were new to Swift, and one of us couldn't run Xcode on their computer at all. Our UX/UI designer had never designed for iOS before either, so there was a bit of a learning curve. Our coders ran into a lot of difficulty integrating the terra API into the code, as well as general problems with front-end and back-end integration.
## What we learned
We learned how to develop using Swift, prototype for iOS on Figma, and integrate terraAPI.
## What's next for GREENTRaiL
Future areas of development include syncing with other nature apps such as iNaturalist's API and AllTrails to give the user even more comprehensive data on wildlife and qualitative description.
## Figma Design
<https://www.figma.com/file/S9wlv984UYBPaX8IiPqRJe/greentrAIl?type=design&node-id=2%3A87&mode=design&t=mIexhgpxiinAegGd-1>
## Technologies Used
Swift · Xcode · Figma · Procreate · terraAPI
|
# Scenic
## 30 second pitch
A non-linear navigation model for exercise that maximizes air quality and reduces noise pollution. Sometimes it's not always about getting there fast. Want directions that take an extra 10 minutes, but cut your air and noise pollution intake in half? We've got your *Scenic* route.
## Story
Every day, John and I ride our bikes to campus. We're both new to the city, and finding a pleasant route is not always easy. Noise, air quality, and traffic all cause stress. Noticing a lack of options on the market, we set out to build a better solution. From our conversations, Scenic was born.
## Technical approach
The following is our idealized algorithm. Given time constraints, our focus was on a thoughtful conversation around the story and what the Scenic app would look like.
Building a non-linear routing algorithm is a multi-step process. First, we need to learn about our user. Relying on a chatbot conversation based on-boarding process, we get to know our user's preferences. Are you okay with a **10%** longer route? How about a **30%** longer route? Do you usually **bike**, or are you a **runner**? This data is stored and then later used in our route ranking algorithm.
At the root of all navigation models is a graph of road vectors. For our application, we use OpenStreetMap (OSM) data loaded into a PostGIS-enabled PostgreSQL database to satisfy this requirement. Next, our routing algorithm consumes a collection of historical sources for route segment classification and scoring. Relatively weighting these data sources allows us to compute a Scenic score and create a grid index in PostgreSQL. Then, at the time of navigation, we run a search that optimizes routes based on our historical grid Scenic scores and returns the top 20 route options. Before returning results to the user, we query live data sources (traffic, AirNow.gov, etc.) to create a secondary on-the-fly ranking of the top 20 routes. Once this ranking computation is complete, we send the top 3 route options to the user's client app.
Once the user selects a route, we navigate the user either directly in their app, or via their Apple watch, Garmin, or Android Wear device. At the end of the trip, we show a visualization comparing how much air and noise pollution they avoided.
This model works well for developed countries and for cities with a rich network of accessible sensors. For developing countries (often areas where we see some of the worst pollution) this default ranking algorithm falls short. Fortunately, we have a novel solution. As we develop road segment scores in data-rich locales, we feed common trait data into a neural net classifier, allowing us to create a classification model for cities with low-fidelity data. This approach allows us to create Scenic scores for cities around the world.
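A toy version of the two-stage ranking described above - a weighted Scenic score per road segment from historical sources, then an on-the-fly re-rank of the candidates against live readings. The weights and field names are illustrative:

```python
# Illustrative Scenic scoring and re-ranking; all numbers are made up.
HISTORICAL_WEIGHTS = {"air_quality": 0.5, "noise": 0.3, "traffic": 0.2}

def segment_score(segment):
    """Higher is more scenic; each metric is pre-normalized to [0, 1]."""
    return sum(w * (1 - segment[k]) for k, w in HISTORICAL_WEIGHTS.items())

def rank_routes(candidates, live_penalty):
    """candidates: top-20 routes from the grid index, each a list of
    segments; live_penalty deducts for current AirNow/traffic readings."""
    scored = [
        (sum(segment_score(s) for s in route) / len(route) - live_penalty(route),
         route)
        for route in candidates
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [route for _, route in scored[:3]]
```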
## Data sources
* Google traffic API
* Open Street Map (OSM)
+ road width
+ road type
+ road direction
* Darksky.net
+ Current temp
+ Current wind vector
* Expert users
+ uploaded known routes (segmented to help classify each road segment)
* IoT city sensors
+ Microphones
+ Air quality
* Machine learning
+ Classification trained on other cities with rich data sources
* AirQualityNow.com
+ PM2.5, PM5 levels
* Here.com
+ Fastest route
+ Map baselayers
* Open-elevation.com
|
## Inspiration
Jessica here - I came up with the idea for BusPal expecting that the skill already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning my lights on and off with Amazon skills and routines. The fact that she could not check when my bus to school was going to arrive was surprising at first - until I remembered that Amazon and Google have one of the biggest rivalries between two tech giants. However, I realized that the combination of Alexa's genuine personality and the powerful location abilities of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: to be a convenient Alexa skill that will improve my morning routine - and everyone else's.
## What it does
This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free.
## How we built it
Through the Amazon Alexa builder, Google API, and AWS.
## Challenges we ran into
We originally wanted to use stdlib; however, with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway through the hackathon.
## Accomplishments that we're proud of
Completing Phase 1 of the project - giving Alexa the ability to take in a destination, and deliver a bus time, route, and stop to leave for.
## What we learned
We learned how to use AWS, work with Node.js, and how to use Google APIs.
## What's next for Bus Pal
Improve the skill's texting ability, and enable calendar integration.
|
partial
|
## Inspiration
How did you feel when you first sat behind the driving wheel? Scared? Excited? All of us on the team felt a similar way: nervous. Nervous that we'd drive too slowly and have cars honk at us from behind, or that we'd crash into something or someone. We felt that this was something most people encountered, and given current technology, this was the perfect opportunity to create a solution that can help inexperienced drivers.
## What it does
Drovo records average speed and composite jerk (the first derivative of acceleration with respect to time) over the course of a driver's trip. From this data, it determines a driving grade based on the results of an SVM machine learning model.
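A sketch of the feature pipeline this implies - composite jerk as a finite difference of the accelerometer stream, fed with average speed into an SVM. The training rows and grade labels below are toy placeholders:

```python
# Hypothetical jerk feature plus SVM grading (toy data, not Drovo's).
import numpy as np
from sklearn.svm import SVC

def composite_jerk(accel, dt):
    """Mean magnitude of d(accel)/dt over a trip; accel is (N, 3)."""
    jerk = np.diff(accel, axis=0) / dt
    return float(np.linalg.norm(jerk, axis=1).mean())

# Toy training set: [average speed (m/s), composite jerk] per trip.
X_train = np.array([[13.0, 0.4], [15.0, 2.5], [12.0, 5.0]])
y_train = np.array(["A", "B", "C"])
model = SVC().fit(X_train, y_train)

trip_accel = np.random.randn(500, 3)  # placeholder accelerometer log
features = [[14.2, composite_jerk(trip_accel, 0.02)]]
print(model.predict(features)[0])
```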
## How I built it
The technology making up Drovo can be summarized in three core components: the Android app, the machine learning model, and the Ford head unit. Interaction can start from either the Android app or the Ford head unit. Once a trip is started, the Android app compiles data from its own accelerometer and multiple features from the Ford head unit, which it feeds to an SVM machine learning model. The results of the analysis are summarized with a single driving letter grade, which is read out to the user, surfaced to the head unit, and shown on the device.
## Challenges I ran into
Much of the hackathon was spent learning how to properly integrate our Android app and machine learning model with the Ford head unit via SmartDeviceLink. This led to multiple challenges along the way, such as figuring out how to properly communicate from the main Android activity to the SmartDeviceLink service and from the service to the head unit via RPC.
## Accomplishments that I'm proud of
We are proud that we were able to make a fully connected user experience that enables interaction from multiple user interfaces such as the phone, Ford head unit, or voice.
## What I learned
We learned how to work with SmartDeviceLink, various new Android techniques, and vehicle infotainment systems.
## What's next for Drovo
We think that Drovo should be more than just a one-time measurement of driving skills. We are thinking of keeping track of your previous trips to show how your driving skills have changed over time. We would also like to surface the vehicle data we analyzed to highlight specific periods of bad driving.
Beyond that, we think Drovo could be a great incentive for teenage drivers to be proud of good driving. By implementing a social leaderboard, users can see their friends' driving grades, which will in turn motivate them to increase their own driving skills.
|
## Inspiration
As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long-standing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as an account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API, which leverages the Interac E-transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib, which collects information from the Interac, IBM Watson, Firebase and SmartCar APIs.
## Challenges we ran into
IBM's facial recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough for the SmartCar authentication flow, as it required a redirection of the URL.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together (SmartCar API, Firebase, Watson, StdLib, Google Maps, and our own Standard Library layer)! Another tough feat we accomplished was the entire webcam-to-image-to-API flow, which wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we created a security API for requesting payment via visual recognition. We believe that this API can be used in many more scenarios than carpooling, and we hope to expand it into different use cases.
|
## Inspiration
The inspiration behind HumanFT comes from the desire to revolutionize the way people receive feedback and approach personal development. The project aims to harness the power of advanced technology to provide individuals, educational institutions, and organizations with a comprehensive feedback system that can drive positive change and improvement in various aspects of life.
## What it does
HumanFT serves as a multifaceted platform that collects, analyzes, and delivers feedback to users across different domains. It offers a central hub for personal development, empowers educators and students to enhance the learning experience, and enables organizations to optimize workplace performance. By leveraging data-driven insights and gamification, HumanFT engages users in a meaningful journey of self-improvement.
## How we built it
HumanFT is built upon a foundation of cutting-edge technology, including machine learning and AI algorithms. It combines a user-friendly interface with robust data analysis to ensure efficient feedback delivery. Privacy and security are fundamental aspects of its construction, ensuring that user data remains confidential and protected.
## Challenges we ran into
Developing HumanFT presented several challenges, including the integration of gamification elements, the development of secure data handling processes, and the creation of a dynamic and engaging user experience. Overcoming these obstacles required a dedicated team effort and continuous innovation.
## Accomplishments that we're proud of
One of our proudest accomplishments with HumanFT is the creation of a thriving community of individuals who are passionate about personal development and feedback. We've also successfully integrated gamification elements to keep users engaged and motivated on their journey towards self-improvement.
## What we learned
Throughout the development of HumanFT, we've learned the significance of personalized feedback in driving positive change. We've also gained valuable insights into the power of data-driven recommendations and the importance of maintaining user privacy and security.
## What's next for HumanFT
The future of HumanFT holds exciting possibilities. We aim to expand its reach and impact, incorporating more domains, refining the user experience, and continuously improving the AI algorithms that drive feedback and recommendations. Additionally, we plan to further strengthen the HumanFT community, fostering connections and support among like-minded individuals on their journey of self-improvement.
|
winning
|
## Inspiration
The intricate nature of diagnosing and treating diseases, combined with the burdensome process of managing patient data, drove us to develop a solution that harnesses the power of AI. Our goal was to simplify and expedite healthcare decision-making while maintaining the highest standards of patient privacy.
## What it does
Percival automates data entry by seamlessly accepting inputs from various sources, including text, speech-to-text transcripts, and PDFs. It anonymizes patient information, organizes it into medical forms, and compares it against a secure vector database of similar cases. This allows us to provide doctors with potential diagnoses and tailored treatment recommendations for various diseases.
## How we use K-means clustering?
To enhance the effectiveness of our recommendation system, we implemented a K-means clustering model using Databricks Open Source within our vector database. This model analyzes the symptoms and medical histories of patients to identify clusters of similar cases. By grouping patients with similar profiles, we can quickly retrieve relevant data that reflects shared symptoms and outcomes.
When a new patient record is entered, our system evaluates their symptoms and matches them against existing clusters in the database. This process allows us to provide doctors with recommendations that are not only data-driven but also highly relevant to the patient's unique situation. By leveraging the power of K-means clustering, we ensure that our recommendations are grounded in real-world patient data, improving the accuracy of diagnoses and treatment plans.
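In scikit-learn terms (the writeup mentions Databricks open source; the feature encoding and cluster count here are assumptions), the cluster-and-match step reduces to something like:

```python
# Minimal cluster-and-match sketch over anonymized symptom vectors.
import numpy as np
from sklearn.cluster import KMeans

symptom_vectors = np.random.rand(200, 32)  # placeholder encoded records
kmeans = KMeans(n_clusters=8).fit(symptom_vectors)

def similar_cases(new_patient_vector):
    """Return indices of historical cases in the new patient's cluster."""
    cluster = kmeans.predict(new_patient_vector.reshape(1, -1))[0]
    return np.where(kmeans.labels_ == cluster)[0]

print(similar_cases(np.random.rand(32))[:5])
```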
## How we built it
We employed a combination of technologies to bring Percival to life: Flask for server endpoint management, Cloudflare D1 for secure backend storage of user data and authentication, OpenAI Whisper for converting speech to text, the OpenAI API for populating PDF forms, Next.js for crafting a dynamic frontend experience, and finally Databricks Open-source for the K-means clustering to identify similar patients.
## Challenges we ran into
While integrating speech-to-text capabilities, we faced numerous hurdles, particularly in ensuring the accurate conversion of doctors' verbal notes into structured data for medical forms. The task required overcoming technical challenges in merging Next.js with speech input and effectively parsing the output from the Whisper model.
## Accomplishments that we're proud of
We successfully integrated diverse technologies to create a cohesive and user-friendly platform. We take pride in Percival's ability to transform doctors' verbal notes into structured medical forms while ensuring complete data anonymization. Our achievement in combining Whisper’s speech-to-text capabilities with OpenAI's language models to automate diagnosis recommendations represents a significant advancement. Additionally, establishing a secure vector database for comparing anonymized patient data to provide treatment suggestions marks a crucial milestone in enhancing the efficiency and accuracy of healthcare tools.
## What we learned
The development journey taught us invaluable lessons about securely and efficiently handling sensitive healthcare data. We gained insights into the challenges of working with speech-to-text models in a medical context, especially when managing diverse and large inputs. Furthermore, we recognized the importance of balancing automation with human oversight, particularly in making critical healthcare diagnoses and treatment decisions.
## What's next for Percival
Looking ahead, we plan to broaden Percival's capabilities to diagnose a wider range of diseases beyond AIDS. Our focus will be on enhancing AI models to address more complex cases, incorporating multiple languages into our speech-to-text feature for global accessibility, and introducing real-time data processing from wearable devices and medical equipment. We also aim to refine our vector database to improve the speed and accuracy of patient-to-case comparisons, empowering doctors to make more informed and timely decisions.
|
## Inspiration
* Smart homes are taking over the industry
* Current solutions are WAY too expensive (almost $30) for one simple lightbulb
* Can fail from time to time
* Complicated to connect
## What it does
* It simplifies the whole idea of a smart home
* Three-part system
+ App (to control the hub device)
+ Hub (used to listen to the Firebase database and control all of the devices)
+ Individual Devices (used to do individual tasks such as turning on lights, locks, etc.)
* It allows as many devices as you want to be controlled through one app
* Can be controlled from anywhere in the world
* Cheap in cost
* Based on usage data, provides feedback on how to be more efficient with a trained algorithm
## How I built it
* App built with XCode and Swift
* Individual devices made with Arduinos and NodeMCUs
* Arduinos intercommunicate via RF24 radio modules
* Main Hub device connects to Firebase with wifi
## Challenges I ran into
* Using RF24 radios to talk between Arduinos
* Communicating between Firebase and the Hub device
* Getting live updates from Firebase(constant listening)
## Accomplishments that I'm proud of
* Achieving low latency - almost instant responses from anywhere in the world
* Two-way communication (input and output devices)
* Communicating multiple non-native devices with Firebase
## What I learned
* How RF24 radios work at the core
* How to connect Firebase to many devices
* How to keep listening for changes from Firebase
* How to inter-communicate between Arduinos and Wifi modules
## What's next for The Smarter Home
* Create more types of devices
* Decrease latency
* Create more appropriate and suitable covers
|
## Inspiration
I like web design, I like 90's web design, and I like 90's tech. So it all came together very naturally.
## What it does
nineties.tech is a love letter to the silly, chunky, and experimental technology of the 90s. There's a Brian Eno quote about how we end up cherishing the annoyances of "outdated" tech: *Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature.* I think this attitude persists today, and making a website in 90s web design style helped me put myself in the shoes of web designers from 30 years ago (albeit, with flexbox!)
## How we built it
Built with Sveltekit, pure CSS and HTML, deployed with Cloudflare, domain name from get.tech.
## Challenges we ran into
First time using Cloudflare. I repeatedly tried to deploy a non-working branch and was close to tears. Then I exited out to the Deployments page and realized that the fix I'd thrown into the config file actually worked.
## Accomplishments that we're proud of
Grinded out this website in the span of a few hours; came up with a cool domain name; first time deploying a website through Cloudflare; first time using Svelte.
## What we learned
My friend Ivan helped me through the process of starting off with Svelte and serving sites through Cloudflare. This will be used for further nefarious and well-intentioned purposes in the future.
## What's next for nineties.tech
User submissions? Longer, better-written out entries? Branch the site out into several different pages instead of putting everything into one page? Adding a classic 90's style navigation sidebar? Many ideas...
|
winning
|
## Inspiration
deez nuts
## What it does
deez nuts
## How we built it
deez nuts
## Challenges we ran into
deez nuts
## Accomplishments that we're proud of
deez nuts
## What we learned
deez nuts
## What's next for deez nuts
deez nuts
|
## Inspiration
## What it does
## How we built it
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
## What's next for Clout-Jar
|
## Where we got the spark?
**No one is born without talents**.
We have all faced this situation in our childhood: no one gets a chance to reveal their skills or be guided on their ideas. Some skills are buried without proper guidance; we often don't even have mates to talk with and develop our skills in the respective field. Even in college, beginners have trouble with implementation. So we started working on a solution to help others who find themselves in this same crisis.
## How it works?
**Connect with neuron of your same kind**
Starting from the problem we faced, we are bridging bloomers in their respective fields with experts and with people in the same field who need a teammate or a friend to develop the idea with. Through the guidance of experts and experienced professors, they can become aware of the resources needed to develop themselves in that field.
Users can also connect with people all over the globe using a language translator, which makes everyone feel native.
## How we built it
**1.Problem analysis:**
We ran through problems all over the globe in the field of education, came across several, and chose one whose solution addresses many of them at once.
**2.Idea Development:**
We examined the problems, the missing features, and the existing solutions for the topic we chose, resolved as many open questions as possible, and developed the idea as far as we could.
**3.Prototype development:**
We developed a working prototype and gained good experience building it.
## Challenges we ran into
Our plan is to get our application to every bloomer and expert, but what will make them join our community? It will be hard to convince them that our application will help them learn new things.
## Accomplishments that we're proud of
The jobs that are popular today may or may not be popular in 10 years. Our world always looks for a better version of its current self. We are satisfied that our idea will help hundreds of children like us who don't yet know about the new things in today's world. Our application may help them learn these things earlier than usual, which may help them follow a path in their interest. We are proud to be part of their development.
## What we learned
We learnt that many people suffer from a lack of help with their ideas and projects, and we felt helpless when we learnt this. So we planned to build a web application that helps them develop their project or idea alongside experts and people of their own kind. So, **guidance is important. No one is born a pro.**
We learnt how to help people understand new things based on their interests by guiding them along the path of their dream.
## What's next for EXPERTISE WITH
We're planning to advertise our web application through social media and help people from all over the world who are unable to get help developing and implementing their ideas and projects.
|
losing
|
# 🎓 **Inspiration**
Entering our **junior year**, we realized we were unprepared for **college applications**. Over the last couple of weeks, we scrambled to find professors to work with to possibly land a research internship. There was one big problem though: **we had no idea which professors we wanted to contact**. This naturally led us to our newest product, **"ScholarFlow"**. With our website, we assure you that finding professors and research papers that interest you will feel **effortless**, like **flowing down a stream**. 🌊
# 💡 **What it Does**
Similar to the popular dating app **Tinder**, we provide you with **hundreds of research articles** and papers, and you choose whether to approve or discard them by **swiping right or left**. Our **recommendation system** will then provide you with what we think might interest you. Additionally, you can talk to our chatbot, **"Scholar Chat"** 🤖. This chatbot allows you to ask specific questions like, "What are some **Machine Learning** papers?". Both the recommendation system and chatbot will provide you with **links, names, colleges, and descriptions**, giving you all the information you need to find your next internship and accelerate your career 🚀.
# 🛠️ **How We Built It**
While half of our team worked on **REST API endpoints** and **front-end development**, the rest worked on **scraping Google Scholar** for data on published papers. The website was built using **HTML/CSS/JS** with the **Bulma** CSS framework. We used **Flask** to create API endpoints for JSON-based communication between the server and the front end.
To process the data, we used **sentence-transformers from HuggingFace** to vectorize everything. Afterward, we performed **calculations on the vectors** to find the optimal vector for the highest accuracy in recommendations. **MongoDB Vector Search** was key to retrieving documents at lightning speed, which helped provide context to the **Cerebras Llama3 LLM** 🧠. The query is summarized, keywords are extracted, and top-k similar documents are retrieved from the vector database. We then combined context with some **prompt engineering** to create a seamless and **human-like interaction** with the LLM.
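To make the retrieval step concrete, here is a minimal sketch of the embed-and-rank idea. This is an illustration, not the team's code: the model name is an assumption, and a brute-force cosine search stands in for MongoDB Atlas's `$vectorSearch`.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

papers = [
    "Deep residual learning for image recognition",
    "Attention is all you need",
    "A survey of reinforcement learning for robotics",
]
paper_vecs = model.encode(papers, normalize_embeddings=True)

def top_k(query: str, k: int = 2):
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = paper_vecs @ q              # cosine similarity, since vectors are unit-norm
    best = np.argsort(scores)[::-1][:k]  # indices of the k most similar papers
    return [(papers[i], float(scores[i])) for i in best]

print(top_k("machine learning papers"))
```

In production, the same embeddings live in the database and the brute-force ranking is replaced by a vector index lookup, which is what keeps retrieval fast at scale.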
# 🚧 **Challenges We Ran Into**
The biggest challenge we faced was gathering data from **Google Scholar** due to their servers blocking requests from automated bots 🤖⛔. It took several hours of debugging and thinking to obtain a large enough dataset. Another challenge was collaboration – **LiveShare from Visual Studio Code** would frequently disconnect, making teamwork difficult. Many tasks were dependent on one another, so we often had to wait for one person to finish before another could begin. However, we overcame these obstacles and created something we're **truly proud of**! 💪
# 🏆 **Accomplishments That We're Proud Of**
We’re most proud of the **chatbot**, both in its front and backend implementations. What amazed us the most was how **accurately** the **Llama3** model understood the context and delivered relevant answers. We could even ask follow-up questions and receive **blazing-fast responses**, thanks to **Cerebras** 🏅.
# 📚 **What We Learned**
The most important lesson was learning how to **work together as a team**. Despite the challenges, we **pushed each other to the limit** to reach our goal and finish the project. On the technical side, we learned how to use **Bulma** and **Vector Search** from MongoDB. But the most valuable lesson was using **Cerebras** – the speed and accuracy were simply incredible! **Cerebras is the future of LLMs**, and we can't wait to use it in future projects. 🚀
# 🔮 **What's Next for ScholarFlow**
Currently, our data is **limited**. In the future, we’re excited to **expand our dataset by collaborating with Google Scholar** to gain even more information for our platform. Additionally, we have plans to develop an **iOS app** 📱 so people can discover new professors on the go!
|
# mindr
mindr is a cross-platform emotion monitoring app, essential for every smart home. The app
is tailored to provide an unintrusive and private solution to keep track of a child's well-being during all those times we can't be with them.
## How it works
The emotion monitoring happens locally, on the device itself: a laptop, or any IoT device that supports a camera. Each frame from the camera is processed for emotional content and signs of severe emotional distress. The emotion processing is done with OpenCV and TensorFlow, using a convolutional neural network trained in TFLearn. All of the image data is stored locally to protect a parent's privacy.
The emotional data is then sent to a Django backend server, where it is parsed into a PostgreSQL database. The Django server then serves analytical data to a React/Redux single-page web app, which gives parents a clean interface for tracking their child's emotional behavior over time. Distressing emotional events are expedited to the parent through SMS or email notifications.
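A simplified sketch of what the local per-frame pass could look like. The label set, input size, and model file are assumptions (the project trained its network in TFLearn; a Keras model stands in here):

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

LABELS = ["angry", "fear", "happy", "neutral", "sad"]  # assumed label set
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_cnn.h5")  # hypothetical trained model file

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        # Only the predicted label leaves the device; frames stay local.
        print(LABELS[int(np.argmax(probs))])
cap.release()
```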
## Who we are
We are a team of upper-division computer science students passionate about creating
useful and well-made applications. We love coffee and muffins.
|
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.

## 💫 Inspiration
>
> "Art is the signature of civilizations." - Beverly Sills
>
>
>
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in art, interact with the real world digitally, and share your creations with your friends!
>
> "Art is the reflection of life, and life is the reflection of art." - Unknow
>
>
>
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.



Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.

>
> "Art is not what you see, but what you make others see." - Claude Monet
>
>
>
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.


I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial





# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma and after a few iterations, we had an initial design to begin working with. The frontend was made with React and Typescript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails we simply use a filesystem database.
## Computer Vision AI
We use MediaPipe to grab the coordinates of the hand joints and to capture images. With these coordinates, we plot onto the canvas with CanvasRenderingContext2D, using vector calculations to determine the gesture. For image generation, we use the DeepAI open-source library.
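The project itself runs in the browser, but the vector logic translates directly. Here is a hedged Python sketch of classifying a "pen" pose from MediaPipe's 21 hand landmarks; the joint indices follow MediaPipe's hand model, while the angle thresholds are assumptions:

```python
import math

def angle(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c."""
    v1 = (a.x - b.x, a.y - b.y)
    v2 = (c.x - b.x, c.y - b.y)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norms = max(math.hypot(*v1) * math.hypot(*v2), 1e-9)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def is_pen_gesture(lm):
    """lm: the 21 MediaPipe hand landmarks (each with .x and .y)."""
    index_extended = angle(lm[5], lm[6], lm[8]) > 160   # MCP-PIP-tip nearly straight
    middle_folded = angle(lm[9], lm[10], lm[12]) < 120  # middle finger curled
    return index_extended and middle_folded
```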
# Experimentation
We experimented with using generative AI to generate images, but ran out of time to finish the feature.


# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He
|
partial
|
# Why Universitium?
Looking back to the times when we applied to colleges, we spent a lot of time searching for the universities and keeping track of the due date and essay submissions for each university that may be on different platforms. It was time-consuming to look over many websites for college information; it was especially tiring to apply for colleges at various platforms, not to mention keeping track of what essays needed to be submitted and the application due dates. Our team wanted to simplify this application process and have everything covered for future generations.
Universitium aims to reduce the time spent in the college search and make tracking the application process easier. It is different from other college application platforms as it:
* provides college information that students consider when applying for colleges
* creates a to-do list of the requirements for each university the user is applying to
* recommends colleges to users based on the colleges they have already applied to.
# How We Built it
First, we designed the full stack architecture of Universitium. It is divided into three main components: the frontend, the backend, and the machine learning model. The frontend is connected to the backend via a REST API, and the backend will fetch data from the database and the machine learning model. We also designed the website UI in Figma before implementing the code.
## Frontend
Our UI design was done in Figma, and we implemented it in React.js, rendering HTML and CSS written exclusively by our team. We used the axios library to fetch data from our REST API and display it on the website.
## Backend
We used Flask and Python for our web backend. We scraped thousands of rows of university data from US News and stored them in MySQL, alongside user profile information. Flask retrieves the desired information from the database and sends it to the frontend to be rendered on the website.
## Machine Learning Model
Our machine learning model uses a collaborative filtering model that uses college information from US News as training data to recommend universities to users, taking into account over 20 different factors from academic strength to student experience.
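The exact model isn't shown, but the matching idea can be sketched with a small content-based example: standardize the feature columns, average the feature vectors of the colleges a user applied to, and rank the rest by cosine similarity. The feature names and numbers below are made up for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics.pairwise import cosine_similarity

colleges = ["A", "B", "C", "D"]
features = np.array([   # e.g. [academics, outcomes, experience]
    [9.1, 8.7, 7.2],
    [8.9, 8.5, 7.0],
    [5.0, 6.1, 9.0],
    [5.2, 5.9, 8.8],
])
X = StandardScaler().fit_transform(features)

applied = [0]                      # the user applied to college "A"
profile = X[applied].mean(axis=0)  # average profile of applied colleges
scores = cosine_similarity([profile], X)[0]
scores[applied] = -np.inf          # don't recommend colleges already chosen
print(colleges[int(np.argmax(scores))])  # -> "B"
```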
# Challenges We Faced
We faced a lot of issues connecting the backend to the frontend. With limited experience using axios, it took us a long time to fetch data from the endpoints in our Flask application.
We also ran into challenges getting usable output from the machine learning model.
# More to Universitium
As of now, Universitium is focused on facilitating an individual in the process of searching and applying for colleges. This means there are currently no methods for users to communicate on our website. In the future, we will build the user base by introducing user interaction. We have plans on adding the role of coaches, where users that are done with college applications can connect with students and provide helpful advice to those currently in their application progress.
Another important feature we will add is information about financial aid and scholarships. We understand that tuition is an important factor that students consider while applying to college, so we wanted our users to have access to this important information as well.
In terms of the structure of Universitium, another feature to improve our website will be to incorporate OAuth in our login authentication process. We will also use the Google API to allow users to edit their essays directly on our website. Other improvements include hosting the database and the website on a server.
# What We Learned
We had a lot of fun building Universitium from scratch. We had ambitious goals for our website but had to cut down many features due to time constraints. It was important for us to have a functioning website before we incorporate our fascinating ideas such as using machine learning to recommend colleges. Nonetheless, after the struggles, we have more confidence working on the full stack implementation and the connection between frontend and backend.
|
## What it does
VibeCheck analyzes and interprets a Spotify user's top 10 tracks of last month to provide a general mood of their listening history and tailor specific words of wisdom based on their music vibe.
## How we built it
Using a React frontend and a FastAPI backend, deployed as decoupled microservices with Docker. We pulled data from the Spotify API using Python and used numpy to determine the mood of a user's listening history based on danceability, energy, and valence.
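As an illustration of the mood step, here is a hedged numpy sketch: average the audio features across the top tracks, then map the averages to a vibe. The thresholds and category names are assumptions, not the team's exact algorithm:

```python
import numpy as np

# Each row: [danceability, energy, valence] for one of the top tracks.
tracks = np.array([
    [0.82, 0.71, 0.65],
    [0.55, 0.40, 0.30],
    [0.90, 0.85, 0.80],
])
dance, energy, valence = tracks.mean(axis=0)

if valence > 0.6 and energy > 0.6:
    vibe = "euphoric"
elif valence < 0.4 and energy < 0.4:
    vibe = "melancholic"
else:
    vibe = "chill"
print(vibe)
```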
## Challenges we ran into
Setting up the Docker Compose, coming up with the algorithms to figure out how to categorize the songs into different music vibes and provide messages based on each category.
## Accomplishments that we're proud of
We successfully set up a streamlined development environment using Docker Compose, allowing individual management of services. The segregation of authentication files in the .creds folder enhances security, ensuring that sensitive information remains protected.
## What we learned
How to navigate the Spotify Web API, gaining experience in React, and the practical differences between vector similarity metrics such as cosine similarity and Euclidean distance. Applying linear algebra to practical data science was a cool lesson from this project. Most of our team was not familiar with Docker, so we got the chance to explore it and use its Compose functionality.
## What's next for VibeCheck
Expanding the range of analyzed data, providing more statistics, and running sentiment analysis on lyrics to produce more accurate message descriptions.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
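A minimal Flask sketch of the backend flow described above: accept an audio clip, transcribe it, extract order items, and return JSON. The route name and both helper functions are hypothetical placeholders, not the project's actual endpoints:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def transcribe(audio_bytes: bytes) -> str:
    # Placeholder: the real app calls a speech-to-text model here.
    return "one large burger with no onions"

def extract_items(text: str) -> list[str]:
    # Placeholder: the real app prompts an LLM; a keyword match stands in.
    menu = ["burger", "fries", "shake"]
    return [item for item in menu if item in text]

@app.post("/order")  # hypothetical route name
def order():
    audio = request.files["audio"].read()
    text = transcribe(audio)
    return jsonify({"transcript": text, "items": extract_items(text)})

if __name__ == "__main__":
    app.run(debug=True)
```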
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
losing
|
## Inspiration
We were inspired by our mothers, who are both educators for children. Many people want to know what their children do at school, since at younger ages when parents ask their kids "What did you do at school", the question is rarely met with anything beyond shrugs and incoherent ideas. This responsibility to communicate then falls to teachers. We looked at the other products on the market and thought of a way we could use AI and Machine Learning to automate the process, helping teachers share student's foundational education experiences with their guardians.
## What it does
Our app and camera system monitors your kids throughout the school day and notifies you of noteworthy events with a collection of photos of your student, personally curated by our learning system.
## How we built it
We built this technology with Android Studio for the mobile app and Python for the data processing/machine learning backend. The backend was made with Google Cloud Vision, scikit-learn, Gensim, and Facebook's bAbI dataset, and communicates with the mobile application via Firebase's Realtime Database.
## Challenges we ran into
We had to make many parts work fluidly together in a short amount of time. We also ran into some technical challenges that took a couple of creative innovations to get through. Lastly, my computer restarted unexpectedly at least five times, probably because I was trying to do so much on it.
## Accomplishments that we're proud of
We are proud that we were able to make a system that will help with parental communication in elementary school classrooms and hopefully in the future offset some of the major work done by elementary and pre-k teachers (who are much in need).
## What we learned
A lot. We can't wait to tell you, but here are some hints (NLP, App Dev, and Family).
## What's next for xylophone
We hope to finish the remaining fixes, beta test it as a pilot project at our local elementary schools, and learn about the user experience of the app in the real world.
|
## Inspiration
We got the inspiration while solving some math questions. We were getting some of the questions wrong, but couldn't tell at which step we had made a mistake. Online, it was even worse: there were only videos, and you had to figure out all the rest by yourself. The only way to see exactly where you made a mistake was to have a teacher with you. How crazy! Then we realized technology could help solve this, and could even let us build a platform that intelligently gives each person the most efficient route of learning, so no time would be wasted solving the same things again and again!
## What it does
The app provides you with questions (currently math) and a drawing area to solve them. While you are solving, the app compares your handwritten solution steps with the correct ones and tells you whether each step is right or wrong. Even more, since it has educational content built in, it can track and show you more of the kinds of questions you got wrong, and even questions involving the steps you got wrong while solving other questions.
## How we built it
We built the recognition part using the MyScript math handwriting recognition API, and all the tracking, statistics and other stuff using Swift, UIKit and AVFoundation.
## Challenges we ran into
We ran into lots of challenges while building all the data models, since each one is interconnected with the others, and all the steps, questions, tags, etc. make up quite a large variety of data. With that variety of data also came a torrent of user interface bugs, and it took *some* perseverance to solve them all as quickly as possible. Also, probably one of the biggest challenges was dealing with the IDE itself crashing :)
## Accomplishments that we're proud of
We are proud of the data collection and recommendation system that we built from the ground up (entirely in Swift!), and of the UI we built: even though the app doesn't yet contain a large quantity of educational content, we designed it to expand easily as content gets added.
## What we learned
The biggest thing we learned was how to build a data set large enough to give personalized recommendations, and how to divide and conquer it before it gets too complex. We also learned to go beyond what the documentation on the internet offers while debugging, and to work from examples when there is no documentation on how to implement something.
## What's next for Tat
We think that Tat has quite a potential to redefine education for years to come if we can build more upon it, with more content, more data and even the possibility of integrating crowd-trained AI.
|
## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was capturing and processing live audio and delivering a real-time transcription of it to all students enrolled in the class. We solved this with a Python script that bridges the gap between opening an audio stream and operating on it, while still serving students a live version of the rest of the site.
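For reference, a hedged sketch of one way such a bridge can stream audio to the Google Cloud Speech-to-Text API in Python; `mic_chunks` is a stand-in for the real audio source, and the file name is hypothetical:

```python
from google.cloud import speech

def mic_chunks():
    # Stand-in for a live microphone stream (e.g. via pyaudio): here we
    # just replay raw 16-bit PCM audio from a hypothetical file.
    with open("lecture_audio.raw", "rb") as f:
        while chunk := f.read(4096):
            yield chunk

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)

requests = (speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in mic_chunks())
for response in client.streaming_recognize(streaming_config, requests):
    for result in response.results:
        # Interim results let the page update while the professor is still speaking.
        print(result.alternatives[0].transcript)
```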
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the live transcription pipeline we built.
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially with the many repeating elements in our Material Design layout. We also learned that first creating a mockup of what we want makes coding easier, since everyone is on the same page and everything that needs to be done is evident. We used APIs such as the Google Speech-to-Text API and a summarization API, and we were able to work within their constraints to create a working product. We also learned more about other technologies that we used: Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens.
|
losing
|
Live Demo Link: <https://www.youtube.com/live/I5dP9mbnx4M?si=ESRjp7SjMIVj9ACF&t=5959>
## Inspiration
We all fall victim to impulse buying and online shopping sprees... especially in the first few weeks of university. A simple budgeting tool, or promising ourselves to spend less, just doesn't work anymore. Sometimes we need someone to physically stop us from clicking the BUY NOW button and talk us through our purchase based on our budget and previous spending. By drawing on the courtroom drama of legal battles, we infuse an element of fun and accountability into doing just that.
## What it does
Dime Defender is a Chrome extension built to help you control your online spending. Whenever the extension detects that you are on a Shopify or Amazon checkout page, it locks the BUY NOW button and takes you to court! You'll be interrupted by two lawyers: a defence attorney explaining why you should steer away from the purchase 😒 and a prosecutor explaining why there are still some benefits 😏. By giving you a detailed analysis of whether you should actually buy, based on your budget and previous spending this month, Dime Defender allows you to make informed decisions by making you consider both sides before a purchase.
The lawyers are powered by VoiceFlow using their dialog manager API as well as Chat-GPT. They have live information regarding the descriptions and prices of the items in your cart, as well as your monthly budget, which can be easily set in the extension. Instead of just saying no, we believe the detailed discussion will allow users to reflect and make genuine changes to their spending patterns while reducing impulse buys.
## How we built it
We created the Dime Defender Chrome extension and frontend using Svelte, Plasma, and Node.js for an interactive and attractive user interface. The extension makes calls through AWS API Gateway to AWS Lambda serverless functions, which process queries, assemble outputs, and make secure, protected API calls to both VoiceFlow (to source the conversational data) and ElevenLabs (to get our custom text-to-speech voice recordings). Thanks to this low-latency pipeline, with AWS RDS/EC2 for storage, all our data is quickly returned to the frontend and displayed through a polished interface whenever the user attempts to check out on any Shopify or Amazon page.
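The key-protection pattern can be sketched as a small Lambda handler: the extension calls API Gateway, and the secret key is attached only server-side. The endpoint shape follows VoiceFlow's publicly documented Dialog Manager API, but treat the request format here as an assumption:

```python
import json
import os
import urllib.request

VF_KEY = os.environ["VOICEFLOW_API_KEY"]  # stored in Lambda config, never shipped to the browser

def handler(event, context):
    body = json.loads(event["body"])
    # Hypothetical request shape: {"userId": "...", "cart": "item summary"}
    url = f"https://general-runtime.voiceflow.com/state/user/{body['userId']}/interact"
    req = urllib.request.Request(
        url,
        data=json.dumps({"request": {"type": "text", "payload": body["cart"]}}).encode(),
        headers={"Authorization": VF_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        traces = json.load(resp)  # dialog turns; voiced downstream via ElevenLabs
    return {"statusCode": 200, "body": json.dumps(traces)}
```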
## Challenges we ran into
Chrome extensions pose the challenge of calling serverless functions effectively and making secure API calls without exposing secret API keys. We had to design a system of Lambda functions, API Gateway routes, and code built into VoiceFlow to give the extension a smooth, low-latency way to make the correct API calls without compromising our keys. Additionally, making our VoiceFlow AIs argue with each other in the proper tone was very difficult; through extensive prompt engineering we finally reached an effective and enjoyable user experience. We also faced plenty of issues debugging animation sprites and text-to-speech voiceovers, with overlapping audio and high-latency API calls, but we were able to fix all these problems and present a well-polished final product.
## Accomplishments that we're proud of
Something that we are very proud of is our natural conversation flow within the extension as well as the different lawyers having unique personalities which are quite evident after using our extension. Having your cart cross-examined by 2 AI lawyers is something we believe to be extremely unique, and we hope that users will appreciate it.
## What we learned
We had to create an architecture for our distributed system and learned about connection of various technologies to reap the benefits of each one while using them to cover weaknesses caused by other technologies.
Also.....
Don't eat the 6.8 million Scoville hot sauce if you want to code.
## What's next for Dime Defender
The next thing we want to add to Dime Defender is the ability to work on even more e-commerce and retail sites and go beyond just Shopify and Amazon. We believe that Dime Defender can make a genuine impact helping people curb excessive online shopping tendencies and help people budget better overall.
|
## Inspiration
We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool.
## What it does
AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures.
The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch.
## How we built it
In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between body parts to identify a dance move. Although this worked for a few gestures, more complex gestures like the "shoot" were not a good fit for the method. We ended up training a convolutional neural network in TensorFlow with 1,000 samples of each gesture, which worked better: the model reaches 98% accuracy on the test data set.
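A minimal Keras sketch of a CNN along these lines; the input size and layer widths are assumptions, and only the 10-way softmax output follows from the 10 gestures:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # grayscale gesture frames
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),   # one class per dance move
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
```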
We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which uses dlib and OpenCV to detect facial features and map a static image over them.
## Challenges we ran into
We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time.
It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in.
## Accomplishments that we're proud of
It was one of our first experiences training an ML model for image recognition and it's a lot more accurate than we had even expected.
## What we learned
All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new!
## What's next for AirTunes
The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
|
## Inspiration
Curdle is loosely inspired by [5D Chess with Multiverse Time Travel](https://store.steampowered.com/app/1349230/5D_Chess_With_Multiverse_Time_Travel). More broadly, it's inspired by asking the question: **"What if I took something simple, and made it the opposite?"**
## What it does
This project is a fully-functional Wordle clone in a web-based 3D scene. Or rather, it's six fully-functional Wordle clones, each of which covers one face of a cube.
The gameplay is identical to the original game, but each face of the cube has a unique answer to guess.
## How we built it
I built Curdle from scratch using [three.js](https://github.com/mrdoob/three.js) and Javascript. The list of valid guesses and answers is pulled directly from [Wordle](https://www.nytimes.com/games/wordle/index.html), and sound effects were created in [Scale Workshop](https://sevish.com/scaleworkshop).
## Challenges we ran into
The largest roadblock I ran into was drawing an arbitrary image onto each of the cube's faces in `three.js`, a challenge which I overcame by using six individual HTML5 canvases and converting their contents to WebGL textures. Once implemented this way, it gave me an enormous amount of flexibility as to what could be rendered onto the cube.
## Accomplishments that we're proud of
This was my first hackathon, so being able to mock up, write, and publish a project like this (even if simple in nature) is something that I'm proud of. Additionally, tweaking a `three.js` scene to make it visually appealing is always a tricky task, so I'm happy to say I'm satisfied by the visuals I've been able to create.
## What we learned
Creating this project for TreeHacks taught me how important it is to put myself and my programming abilities out there. Although this is not my first toy project of this nature (both with and without `three.js`/WebGL), it is the first one I'm submitting to a hackathon or contest of any sort, and I'm eager to try more events like this in the future.
## What's next for Curdle
Let's find out!
|
winning
|
## Inspiration
As university students, we have been noticing issues with very large class sizes. With lectures often being taught to over 400 students, it becomes very difficult and anxiety-provoking to speak up when you don't understand the content. As well, with classes of this size, professors do not have time to answer every student who raises their hand. This raises the problem of professors not being able to tell if students are following the lecture, and not answering questions efficiently. Our hack addresses these issues by providing a real-time communication environment between the class and the professor. KeepUp has the potential to increase classroom efficiency and improve student experiences worldwide.
## What it does
KeepUp allows the professor to gauge the understanding of the material in real-time while providing students a platform to pose questions. It allows students to upvote questions asked by their peers that they would like to hear answered, making it easy for a professor to know which questions to prioritize.
## How We built it
KeepUp was built using JavaScript and Firebase, which provided hosting for our web app and the backend database.
## Challenges We ran into
As it was, for all of us, our first time working with a Firebase database, we encountered some difficulties pulling data out of Firebase. It took a lot of work to finally get this part of the hack working, which unfortunately took time away from implementing some other features (see the What's Next section). But it was very rewarding to have a working backend in Firebase, and we are glad we worked to overcome the challenge.
## Accomplishments that We are proud of
We are proud of creating a useful app that helps solve a problem that affects all of us. We recognized that there is a gap in between students and teachers when it comes to communication and question answering and we were able to implement a solution. We are proud of our product and its future potential and scalability.
## What We learned
We all learned a lot throughout the implementation of KeepUp. First and foremost, we got the chance to learn how to use Firebase for hosting a website and interacting with the backend database. This will prove useful to all of us in future projects. We also further developed our skills in web design.
## What's next for KeepUp
There are several features we would like to add to KeepUp to make it more efficient in classrooms:

* A timeout feature so that questions disappear after 10 minutes of inactivity (10 minutes without being upvoted)
* A widget so that the basic information from the website can be seen in the corner of your screen at all times
* Login for users to enable more specific individual functions. For example, a teacher could remove answered questions, or the original poster could mark their question as answered.
* Censoring of questions as they are posted, so nothing inappropriate gets through.
|
## Inspiration
We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This was mainly for two reasons, one being the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and the other being cost reduction. We quickly realized that this idea of coordinating rides could also be used not just for ride sharing to the airport, but simply transportation to anywhere!
## What it does
Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site will create these pairings using a matching algorithm and automatically connect students with their matches through email and a live chatroom in the site.
## How we built it
We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated SendGrid API for automatic email notifications for matches.
## Challenges we ran into
## Accomplishments that we're proud of
Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time.
## What we learned
## What's next for Runway
|
## What it does
Trendcast combines sentiment analysis of tech articles from major tech news sources with Google Trends data to generate a rating over time based on popularity and sentiment. Machine learning is then used to predict future trends by training Long Short-Term Memory networks on the historical data.
## How we built it
The front end was built using React and is hosted on Firebase. The back end used Python to scrape news articles from archives with Beautiful Soup and analyzed their sentiment with NLTK, a natural language processing toolkit. The time-series predictions were done with Keras and TensorFlow models using recurrent neural networks.
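As an illustration of the sentiment step, here is a short sketch using NLTK's VADER analyzer (whether the team used VADER specifically is an assumption; the headlines are made up):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

headlines = [
    "Chipmaker posts record earnings on AI demand",
    "Regulators open probe into data practices",
]
for h in headlines:
    score = sia.polarity_scores(h)["compound"]  # -1 (negative) .. +1 (positive)
    print(f"{score:+.2f}  {h}")
```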
## Our challenges
It was difficult and frustrating to scrape so much data over really slow WiFi. It was also our first time trying out machine learning libraries and we were all unfamiliar with data science. Aside from some major technical challenges, the volatility of cryptocurrency prices combined with the newness of the technology made it difficult to predict future trends with a high degree of confidence.
## Accomplishments that we're proud of
We are very proud of actually implementing the idea, putting the front end together with an elegant UI, hooking everything up to Firebase, scraping and processing a ton of data and getting the prediction model to work.
## What we learned
Through this project, we learned a lot about data management and analysis, machine learning libraries such as Tensorflow, new trending technologies and full stack web development.
## What's next?
We want to add more trending technology categories and a live news feed for current topics. This could be done with a real-time news aggregating model that can collect and analyze data from major news sources and APIs automatically.
|
partial
|
## Inspiration 💡
*An address is a person's identity.*
In California, there are over 1.2 million vacant homes, yet more than 150,000 people (homeless population in California, 2019) don't have access to a stable address. Without an address, people lose access to government benefits (welfare, food stamps), healthcare, banks, jobs, and more. As the housing crisis continues to escalate and worsen throughout COVID-19, a lack of an address significantly reduces the support available to escape homelessness.
## This is Paper Homes: Connecting you with spaces so you can go places. 📃🏠
Paper Homes is a web application designed for individuals experiencing homelessness to get matched with an address donated by a property owner.
**Part 1: Donating an address**
Housing associations, real estate companies, and private donors will be our main sources of address donations. As a donor, you can sign up to donate addresses either manually or via CSV, and later view the addresses you donated and the individuals matched with them in a dashboard.
**Part 2: Receiving an address**
To mitigate security concerns and provide more accessible resources, Paper Homes will be partnering with California homeless shelters under the “Paper Homes” program. We will communicate with shelter staff to help facilitate the matching process and ensure operations run smoothly.
When signing up, a homeless individual can provide ID; if they don't have any form of ID, we facilitate the entire process of obtaining one, with pre-filled application forms. Afterwards, they are immediately matched with a donated address! They can then access a dashboard with any needed documents (applying for a birth certificate, SSN, or California ID card, and registering the address with the government, all of which are free in California). During onboarding they can also set up mail forwarding ($1/year, funded by NPO grants and donations) to the homeless shelter they are associated with.
Note: We are solely providing addresses for people, not a place to live. Addresses expire in 6 months to ensure our database stays up to date with in-use addresses and mail forwarding; however, people can choose to renew their addresses every 6 months as needed.
## How we built it 🧰
**Backend**
We built the backend in Node.js, with routes written in the Express.js framework and connected to our Firestore database. We used Selenium and PDF-editing packages to let users download filled-out PDF forms, and Selenium also applies for documents on behalf of users.
**Frontend**
We built a Node.js webpage to demo our Paper Homes platform using React.js, HTML, and CSS. The platform is made up of two main parts: the donor's side and the recipient's side. The front end includes a login/signup flow that populates and updates our Firestore database. Each side has its own dashboard: the donor side lets the user add properties to donate and manage them (e.g., mark a property as no longer vacant, or see whether the address is in use), while the recipient's side shows the address provided to the user, steps to obtain any missing IDs, and so on.
## Challenges we ran into 😤
There were a lot of non-technical challenges we ran into. Getting all the correct information into the website was challenging as the information we needed was spread out across the internet. In addition, it was the group’s first time using firebase, so we had some struggles getting that all set up and running. Also, some of our group members were relatively new to React so it was a learning curve to understand the workflow, routing and front end design.
## Accomplishments & what we learned 🏆
In just one weekend, we got a functional prototype of what the platform would look like. We have functional user flows for both donors and recipients that are fleshed out with good UI. The team learned a great deal about building web applications along with using firebase and React!
## What's next for Paper Homes 💭
Since our prototype is geared towards residents of California, the next step is to expand to other states! As each state has their own laws with how they deal with handing out ID and government benefits, there is still a lot of work ahead for Paper Homes!
## Ethics ⚖
In California alone, there are over 150,000 people experiencing homelessness. Without proper identification, these people find it significantly harder to find employment, receive government benefits, or even vote. The biggest hurdle is that many of these services are linked to an address, and since they do not have a permanent address where they can receive mail, they are locked out of these essential services. We believe it is ethically wrong for us as a society not to act on this problem: the holes in US government systems make it almost impossible to escape homelessness. And this is not a small problem. An address is no longer just a location - it's now a de facto means of identification. If a person becomes homeless they are cut off from the basic services they need to recover.
People experiencing homelessness encounter other difficulties as well. Getting your first piece of ID is notoriously hard because most IDs require an existing form of ID. In California, there are new laws to help with this problem, but they are recent and not widely known. While these laws reduce the barriers to getting an ID, without knowing the processes, having the right forms, and getting the right signatures from the right people, it can take over 2 years to get an ID.
Paper Homes attempts to solve these problems by providing a method for people to obtain essential pieces of ID, along with allowing people to receive a proxy address to use.
As of the 2018 census, there are 1.2 million vacant houses in California. Our platform allows donors with vacant properties to let people experiencing homelessness use their address to receive government benefits and other necessities that we take for granted. With the donated address, we set up mail forwarding with USPS to forward their mail from this donated address to a homeless shelter near them.
With proper identification and a permanent address, people experiencing homelessness can now vote, apply for government benefits, and apply for jobs, greatly increasing their chances of finding stability and recovering from this period of instability.
Paper Homes unlocks access to the services needed to recover from homelessness: opening a bank account, receiving mail, seeing a doctor, using libraries, getting benefits, and applying for jobs.
However, we recognize the need to protect a person's data and acknowledge that the use of an online platform makes this difficult. Additionally, while over 80% of people experiencing homelessness have access to a smartphone, access to this platform is still somewhat limited. Nevertheless, we believe that a free and highly effective platform could bring a large amount of benefit. So long as we prioritize the needs of the person experiencing homelessness first, we will be able to help them greatly rather than harm them.
There are some ethical considerations that still need to be explored:
We must ensure that each user's information security and confidentiality are of the highest importance. Given that we will be storing sensitive and confidential information about the user's identity, this is top of mind. Without it, the benefit that our platform provides is offset by the damage to their security. Therefore, we will keep user data 100% confidential when receiving and storing it, using hashing, encryption, and similar techniques.
Secondly, as mentioned previously, while this will unlock access to services needed to recover from homelessness, some segments of the population will not be able to access these services due to limited internet access. We have currently focused the product on California, where internet access is relatively high (80% of people facing homelessness have access to a smartphone, and free Wi-Fi is common), but other states and countries are more limited.
In addition to the ideas mentioned above, some next steps would be to design a proper user and donor consent form and agreement that both supports users’ rights and removes any concern about the confidentiality of the data. Our goal is to provide means for people facing homelessness to receive the resources they need to recover and thus should be as transparent as possible.
## Sources
[1](https://www.cnet.com/news/homeless-not-phoneless-askizzy-app-saving-societys-forgotten-smartphone-tech-users/#:%7E:text=%22Ninety%2Dfive%20percent%20of%20people,have%20smartphones%2C%22%20said%20Spriggs)
[2](https://calmatters.org/explainers/californias-homelessness-crisis-explained/)
[3](https://calmatters.org/housing/2020/03/vacancy-fines-california-housing-crisis-homeless/)
|
## Inspiration
There are many scary things in the world, from poisonous spiders to horrifying ghosts, but none of them scare people more than the act of public speaking. Over 75% of people suffer from a fear of public speaking. What if there were a way to tackle this problem? That's why we created Strive.
## What it does
Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback on the vocal delivery of a presentation. Once you have recorded your speech, Strive calculates various performance variables: voice clarity, filler-word usage, voice speed, and voice volume. It then renders these performance variables in an easy-to-read statistics dashboard and provides a customized feedback page with tips to improve your presentation skills. In the settings page, users can add custom filler words they would like to avoid saying during a presentation, and can personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive also sends the feedback results via text message, allowing users to easily share or forward an analysis.
## How we built it
Using the collaboration tool Figma, we designed wireframes of our mobile app, and we used services such as Photoshop and GIMP to customize every page for an intuitive user experience. We created the front end in the Unity game engine, sculpting each app page and connecting components to backend C# functions and services. We leveraged IBM Watson's speech toolkit to calculate the performance variables and used stdlib's cloud functions for text messaging.
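Two of the performance variables are easy to illustrate once a transcript is available. This is a hedged Python sketch, not the team's Watson-based C# code; the filler list mirrors the app's customizable filler words:

```python
FILLERS = {"um", "uh", "like", "basically", "actually"}  # user-customizable

def analyze(transcript: str, duration_seconds: float):
    words = transcript.lower().split()
    filler_count = sum(w.strip(".,") in FILLERS for w in words)
    wpm = len(words) / (duration_seconds / 60)  # speaking speed
    return {"filler_rate": filler_count / max(len(words), 1),
            "words_per_minute": round(wpm, 1)}

print(analyze("So um basically the idea is like really simple", 4.0))
```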
## Challenges we ran into
Given our skillsets from technical backgrounds, one challenge we ran into was developing out a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.
## Accomplishments that we're proud of
Creating a fully functional mobile app while leveraging an unfamiliar technology stack: a simple application that people can use to start receiving actionable feedback on their public speaking skills. With it, anyone can work on improving their public speaking and conquering their fear of it.
## What we learned
Over the course of the weekend one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.
## What's next for Strive - Your Personal AI Speech Trainer
* Model voices of famous public speakers for a more realistic experience in giving personal feedback (using the Lyrebird API).
* Ability to calculate more performance variables for an even better analysis and more detailed feedback
|
# *Promptu*
## What is Promptu?
**Promptu** is a social media platform based entirely on spontaneous voice messages that answer daily engaging and fun prompts.
We created **Promptu** to eliminate the pressure of traditional social media and enable more natural online presence. We seek to create a place for users to have fun, genuine, and impromptu interactions that imitate in-person banter.
## 💡 Inspiration
Humans spend between 32% and 75% of their waking time in social interactions, and on average 2 hours 27 minutes using social media. Clearly, a significant portion of social interaction now happens online, hence the abundance of unique social media platforms. One of the great things social media lets us do is bridge the distance to people we otherwise couldn't see face to face. However, the way these platforms are designed puts a lot of pressure on building your digital profile, and they all rely on image and text as the medium of communication. These mediums are effective but also limit how genuine online interactions are. Think about the last time you posted on social media: how much time did you spend polishing your post and considering how people would react to every photo or sentence? That is not how in-person communication happens; you don't have time to think your answers through, so you're forced to be genuine, and that's what makes interactions real and fun. We would like to see more of that in the digital space.
This is why our way of "taking inspiration from the past and adding a twist to it" is to completely rethink the dynamics of social media interaction from the ground up: to make digital interactions more human-like, less serious, and more fun. Our solution is a social media platform based entirely around short, spontaneous voice posts following unique prompts that change daily, allowing users to interact with their friends much as they do in person, by cracking jokes, making funny comments about what's around them, and sharing laughs!
## 👩🏻💻 How We Built It
Our stack includes the following:
* Frontend: React.js
* Backend: Firebase (Google Cloud Functions, Cloud Storage, Firestore), JavaScript
* UX Design: Figma
## 🐛 Challenges We Ran Into
Originally, we wanted to develop our frontend using React Native but we were unable to set it up on our machines. We decided to use React after a few hours of debugging React Native.
## 👏 Accomplishments That We're Proud Of
We are proud of our team for working so well together and enjoying the process of creating an app within 24 hours. We were able to make an application wireframe, develop the entire frontend in React, and set up the backend and user authentication in Firebase.
## 🔖 What We Learned
This was the first hackathon for 2 of our team members, so it was very exciting to ideate a software product and then immediately oversee the entire development for the first time. Some technical learning we did included how to authenticate users using Firebase, how to record and listen to audio files using React, and how to use the Firebase and Firestore SDKs.
## 🌎 What's Next for Promptu
We want to add a feature for adding and searching friends. We also want to sort the feed in the future by most popular (quantified by highest number of smiles - our version of Reddit upvotes).
We also want to incorporate NLP to detect inappropriate audio messages that are uploaded by users. This will make sure that our app remains a fun, safe, and welcoming space for all.
## 🤝 Parting Words
Thank you so much to HackHarvard for this great experience. Our whole team became very close, and had an engaging and educational time! We are excited to continue iterating on Promptu and hopefully launch to the iOS and Android app stores.
|
winning
|
## What it does
XEN SPACE is an interactive web-based game that incorporates emotion recognition technology and the Leap motion controller to create an immersive emotional experience that will pave the way for the future gaming industry.
## How we built it
We built it using three.js, the Leap Motion Controller for controls, and the Indico Facial Emotion API. We also used Blender, Cinema 4D, Adobe Photoshop, and Sketch for all graphical assets.
|
## Inspiration
Our mission is to foster a **culture of understanding**. A culture where people of diverse backgrounds get to truly *connect* with each other. But, how can we reduce the barriers that exists today and make the world more inclusive?
Our solution is to bridge the communication gap of **people with different races and cultures** and **people of different physical abilities**.
## What we built
In 36 hours, we created a mixed reality app that allows everyone in the conversation to communicate using their most comfortable method:
You want to communicate using your mother tongue?
Your friend wants to communicate using sign language?
Your aunt is hard of hearing and she wants to communicate without that back-and-forth frustration?
Our app enables everyone to do that.
## How we built it
VRbind takes in speech and converts it into text using the Bing Speech API. Internally, that text is translated into your mother tongue using the Google Translate API and played back as speech through the Oculus Rift's built-in speakers. We also provide a platform where a user can communicate using sign language: signs are detected with the Leap Motion controller and interpreted as English text, which is likewise translated into the listener's mother tongue and spoken aloud through the Oculus Rift.
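The project calls these services from C# inside Unity, but the middle hop is easy to illustrate in Python with the Google Cloud Translate client (credentials setup is assumed, and the target language is only an example):

```python
from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes Google Cloud credentials are configured

def to_user_language(text: str, target: str = "es") -> str:
    # Recognized speech text goes in, translated text comes out; the result
    # is then handed to text-to-speech on the headset.
    result = client.translate(text, target_language=target)
    return result["translatedText"]

print(to_user_language("Nice to meet you"))
```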
## Challenges we ran into
We are running our program in Unity, so the challenge was getting all of our APIs working from C#.
## Accomplishments that we are proud of
We are proud that we completed all the essential features we intended to implement and successfully troubleshot the problems we encountered throughout the competition.
## What we learned
We learned how to code in C#, as well as how to select, implement, and integrate different APIs on the Unity platform.
## What's next for VRbind
Facial, voice, and body language emotional analysis of the person that you are speaking with.
|
## Inspiration
We hope this app encourages users to invest in stocks and grow their personal investments.
## What it does
When the user sees a logo in everyday life and wants to learn more about the financial performance of said company in the stock exchange, the user can simply perform the airtap gesture and the HoloLens will take a snapshot of the current view. Then, the image is sent to Google’s Cloud Vision API and analyzed to see if there are any logos in the picture. If a logo is detected, the company of the logo is found and the NASDAQ API is used to determine the recent performance of the company’s stock. Finally, this financial data is visualized on the HoloLens through Unity.
## How we built it
We had the HoloLens take a picture after receiving an airtap gesture. The picture is then run through a chain of APIs, specifically Google Vision (to detect logos), a company name .csv (to get ticker names from company names), and NASDAQ API (to check stock prices).
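For illustration, here's a minimal Python sketch of the logo-to-ticker step (the HoloLens side runs in Unity/C#, so this is a sketch of the idea rather than our production code; the ticker CSV layout is an assumption):

```python
# Sketch: detect a logo with Google Cloud Vision, then look up its ticker.
import csv
from google.cloud import vision

def logo_to_ticker(image_path: str, ticker_csv: str):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.logo_detection(image=image)
    if not response.logo_annotations:
        return None
    company = response.logo_annotations[0].description  # best-scoring logo
    # Look the company up in a (hypothetical) name -> ticker CSV.
    with open(ticker_csv, newline="") as f:
        for name, ticker in csv.reader(f):
            if name.lower() == company.lower():
                return ticker
    return None
```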
## Challenges we ran into
We had challenges in implementing a gesture to capture photos, capturing photos, sending a JSON request through Unity to the Cloud Vision API, parsing XML from the NASDAQ API, and scripting in Unity. Because the HoloLens hasn’t been developed on much yet, there was little documentation and examples to learn from.
## Accomplishments that we're proud of
Becoming familiarized with developing for an AR environment, using Unity and the Windows Holographic Platform. Figuring out the Cloud Vision image recognition API to detect images from the snapshot the user takes on the HoloLens.
## What we learned
AR/Mixed Reality is an emerging field, and devices like the HoloLens have incredible potential. It was a great learning experience to work with the HoloLens and figure out how to use Unity.
## What's next for Logo Lens
1) By incorporating the Capital One API, we could simulate buying and selling stocks on the go.
2) Selecting which pieces of data are most helpful to the user and presenting them with clean, easy-to-understand visualizations.
|
winning
|
## Inspiration
The desire to learn libGDX!
## What it does
Plays like a pretty new and simple version of the original Mario
## How I built it
libGDX!
## Challenges I ran into
Well, I didn't know much about the software, so it took some reading of documentation and a lot of tutorials.
## Accomplishments that I'm proud of
It's crash-free and a path to the creation of new software.
## What I learned
libGDX!
## What's next for libgdx-totally-original-plumber-game
I'll probably write out more features than even the reference version has, and then move on to a totally original project!
|
# Valence — an Intelligent Journal
## Inspiration
My teammate and I both actively keep journals, because we love reflecting on our past experiences to relive the emotions of days gone by. At the same time, we are cognizant of how important it is for people — especially students! — to make sure that they avoid depressive slumps. To this end, we decided to build Valence to try and help users keep track of how their moods progress over time.
## What is Valence?
Valence is a journaling program that integrates with Google Cloud’s Natural Language API to perform complex sentiment analysis on a user’s journal entries. It then displays the results in a visually appealing format, so that the user can reflect on what led to their days being better or worse. Valence is a desktop application with an accompanying data explorer web app, built to allow users to track their journal entries on the go.
## Implementation
We coded the desktop app in Python, using the tkinter library for the GUI and the Google Cloud Natural Language API for the NLP calculations. The web app and API were built with MongoDB, Express, React, and Node.js.
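For example, the sentiment call looks roughly like this (a minimal sketch assuming the standard google-cloud-language Python client; our integration has more plumbing around it):

```python
# Sketch: score one journal entry's sentiment with the Natural Language API.
from google.cloud import language_v1

def score_entry(text: str) -> float:
    """Return a sentiment score in [-1.0, 1.0] for one journal entry."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment.score  # negative = worse day, positive = better day
```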
## Challenges
We have had difficulty connecting our desktop and web apps, and this will remain an important point to focus on as we continue to push for seamless integration.
## For the future
We will continue to work on Valence to improve its cross-platform functionality and implement additional features as we discover more use cases.
## Built with
Python, MongoDB, Express, React, Node.js, Google Cloud
|
## Inspiration
hiding
## What it does
## How I built it
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for Spine
|
losing
|
# InstaQuote
InstaQuote is an SMS based service that allows users to get a new car insurance quote without the hassle of calling their insurance provider and waiting in a long queue.
# What Inspired You
We wanted a more convenient way to get a quote on auto insurance in the event of a change within your driver profile (e.g., a demerit point change, a license class increase, a new car make, etc.)
Since insurance rates are not something that change often we found it appropriate to create an SMS based service, thus saving the hassle of installing an app that would rarely be used as well as the time of calling your insurance provider to get a simple quote. As a company, this service would be useful for clients because it gives them peace of mind that there is an overarching service which can be texted anytime for an instant quote.
# What We Learned
We learned how to connect API's using Standard Library and we also learned JavaScript. Additionally, we learned how to use backend databases to store information and manipulate that data within the database.
# Challenges We Faced
We had some trouble understanding and getting used to JavaScript syntax.
|
## Inspiration:
Our inspiration for this app comes from the critical need to improve road safety and assess driver competence, especially under various road conditions. The alarming statistics on road accidents and fatalities, including those caused by distracted driving and poor road conditions, highlight the urgency of addressing this issue. We were inspired to create a solution that leverages technology to enhance driver competence and reduce accidents.
## What it does
Our app's frontend connects to a GPS signal that tracks a given car's acceleration and speed. The React frontend also includes a map and a record feature which, through a Cohere LLM, can detect violent or hateful speech and alert police, given the road conditions.
On the backend, we have numerous algorithms and computer vision models fine-tuned on YOLOv5 and YOLOv8. These models take in an image from a camera feed and detect surrounding cars, the color of nearby traffic lights, and the size of the license plates in front of the driver.
By detecting license plates, we are able to infer the acceleration of a car (based on the change in size of the plates) and assess the driver's habits. By checking for red lights, correlated with the GPS data, we are able to determine a driver's reaction time and give a rating of the driver's capabilities.
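A rough sketch of the plate-size inference, assuming a pinhole camera model with an assumed focal length and standard plate width:

```python
# Under a pinhole-camera assumption, distance scales inversely with the
# plate's pixel width. FOCAL_PX and PLATE_WIDTH_M are assumed constants.
FOCAL_PX = 1000.0        # camera focal length in pixels (assumption)
PLATE_WIDTH_M = 0.52     # real license-plate width in metres (assumption)

def distance_m(plate_pixel_width: float) -> float:
    return FOCAL_PX * PLATE_WIDTH_M / plate_pixel_width

def relative_accel(widths_px: list[float], dt: float) -> float:
    """Approximate relative acceleration from three consecutive detections."""
    d = [distance_m(w) for w in widths_px[-3:]]
    v1 = (d[1] - d[0]) / dt   # relative speed between frames 0 and 1
    v2 = (d[2] - d[1]) / dt   # relative speed between frames 1 and 2
    return (v2 - v1) / dt
```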
Finally, an eye-tracking model is able to determine a driver's concentration, and focus on the road.
All this, paired with its interactive mobile app, makes our app the ultimate replacement for any classic dashcam, protecting the driver from the road's hazards.
|
## Inspiration
We love playing the game and were disappointed that there wasn't a nice web implementation we could play with each other remotely. So we fixed that.
## What it does
Allows between 5 and 10 players to play Avalon over the web app.
## How we built it
We made extensive use of Meteor and forked a popular game called [Spyfall](https://github.com/evanbrumley/spyfall) to build it out. This game had a very basic subset of rules that were applicable to Avalon. Because of this we added a lot of the functionality we needed on top of Spyfall to make the Avalon game mechanics work.
## Challenges we ran into
Building realtime systems is hard. Moreover, using a framework like Meteor that makes a lot of things easy by black-boxing them is also difficult by the same token. So we often struggled to make things work that simply couldn't work within the context of the framework we were using. We also ended up restarting the project multiple times after realizing we were going down a path where the application was impossible to build.
## Accomplishments that we're proud of
It works. It's crisp. It's clean. It's responsive. It's synchronized across clients.
## What we learned
Meteor is magic. We learned how to use a lot of the more magical client synchronization features to deal with race conditions and the difficulties of making a realtime application.
## What's next for Avalon
Fill out the different roles, add a chat client, integrate with a video chat feature.
|
winning
|
## Inspiration
We set out to build a product that solves two core pain points in our daily lives: 1) figuring out what to do for every meal 😋 and 2) maintaining personal relationships 👥.
As college students, we find ourselves on a daily basis asking the question, “What should I do for lunch today?” 🍔 — many times with a little less than an hour left before it’s time to eat. The decision process usually involves determining if one has the willpower to cook at home, and if not, figuring out where to eat out and if there is anyone to eat out with. For us, this usually just ends up being our roommates, and we find ourselves quite challenged by maintaining depth of relationships with people we want to because the context windows are too large to juggle.
Enter, BiteBuddy.
## What it does
We divide the problem we’re solving into two main scenarios.
1. **Spontaneous (Eat Now!)**: It’s 12PM and Jason realizes that he doesn’t have lunch plans. BiteBuddy will help him make some! 🍱
2. **Futuristic (Schedule Ahead!)**: It’s Friday night and Parth decides that he wants to plan out his entire next week (Forkable, anyone?). 🕒
**Eat Now** allows you to find friends that are near you and automatically suggests nearby restaurants that would be amenable to both of you based on dietary and financial considerations. Read more below to learn some of the cool API interactions and ML behind this :’). 🗺️
**Schedule Ahead** allows you to plan your week ahead and actually think about personal relationships. It analyzes closeness between friends, how long it’s been since you last hung out, looks at calendars, and similar to above automatically suggests time and restaurants. Read more below for how! 🧠
We also offer a variety of other features to support the core experience:
1. **Feed**. View a streaming feed of the places your friends have been going. Enhance the social aspect of the network.
2. **Friends** (no, we don’t offer friends). Manage your relationships in a centralized way and view LLM-generated insights regarding relationships and when might be the right time/how to rekindle them.
## How we built it
The entire stack we used for this project was Python, with the full stack web development being enabled by the **Reflex** Python package, and database being Firebase.
**Eat Now** is a feature that bases itself around geolocation, dietary preferences, financial preferences, calendar availability, and LLM recommendation systems. We take your location, go through your friends list and find the friends who are near you and don’t have immediate conflicts on their calendar, compute an intersection of possible restaurants via the Yelp API that would be within a certain radius of both of you, filter this intersection with dietary + financial preferences (vegetarian? vegan? cheap?), then pass all our user context into a LLAMA-13B-Chat 💬 to generate a final recommendation. This recommendation surfaces itself as a potential invite (in figures above) that the user can choose whether or not to send to another person. If they accept, a calendar invite is automatically generated.
**Schedule Ahead** is a feature that bases itself around graph machine learning, calendar availability, personal relationship status (how close are y’all? When is the last time you saw each other?), dietary/financial preferences, and more. By looking ahead into the future, we take the time to look through our social network graph with associated metadata and infer relationships via Spectral Clustering 📊. Based on how long it’s been since you last hung out and the strength of your relationship, it will surface who to meet with as a priority queue and look at both calendars to determine mutually available times and locations with the same LLM.
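As a small illustration, the clustering step looks roughly like this with scikit-learn (the affinity weights below are stand-in values, not our real relationship metadata):

```python
# Sketch: infer social circles from a precomputed friend-affinity matrix.
import numpy as np
from sklearn.cluster import SpectralClustering

# Symmetric affinity matrix: entry [i][j] = interaction strength between
# friends i and j (hypothetical values).
affinity = np.array([
    [0.0, 0.9, 0.8, 0.1],
    [0.9, 0.0, 0.7, 0.0],
    [0.8, 0.7, 0.0, 0.2],
    [0.1, 0.0, 0.2, 0.0],
])

labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)  # e.g. [0 0 0 1] -> friends 0-2 form one social circle
```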
We use retrieval augmented generation (RAG) 📝 throughout our app to power personalized friend insights (to learn more about which friends you should catch up with, learn that Jason is a foodie, and what cuisines you and Parth like). This method is also a part of our recommendation algorithm.
## Challenges we ran into
1. **Dealing with APIs.** We utilized a number of APIs to provide a level of granularity and practicality to this project, rather than something that’s solely a mockup. Dealing with APIs though comes with its own issues. The Yelp API, for example, continuously rate limited us even though we cycled through keys from all of our developer accounts :’). The Google Calendar API required a lot of exploration with refresh tokens, necessary scopes, managing state with google auth, etc.
2. **New Technologies.** We challenged ourselves by exploring some new technologies as a part of our stack to complete this project. Graph ML for example was a technology we hadn’t worked with much before, and we quickly ran into the cold start problem with meaningless graphs and unintuitive relationships. Reflex was another new technology that we used to complete our frontend and backend entirely in Python. None of us had ever even pip installed this package before, so learning how to work with it and then turn it into something complex and useful was a fun challenge. 💡
3. **Latency.** Because our app queries several APIs, we had to make our code as performant as possible, utilize concurrency where possible, and add caching for frequently-queried endpoints. 🖥️
## Accomplishments that we're proud of
The amount of complexity that we were able to introduce into this project made it mimic real-life as close as possible, which is something we’re very proud of. We’re also proud of all the new technologies and Machine Learning methods we were able to use to develop a product that would be most beneficial to end users.
## What we learned
This project was an incredible learning experience for our team as we took on multiple technically complex challenges to reach our ending solution -- something we all thought that we had a potential to use ourselves.
## What's next for BiteBuddy
The cool thing about this project was that there were a hundred more features we wanted to include but didn’t remotely have the time to implement. Here are some of our favorites 🙂:
1. **Groups.** Social circles often revolve around groups. Enabling the formation of groups on the app would give us more metadata information regarding the relationships between people, lending itself to improved GNN algorithms and recommendations, and improve the stickiness of the product by introducing network effects.
2. **New Intros: Extending to the Mutuals.** We’ve built a wonderful graph of relationships that includes metadata not super common to a social network. Why not leverage this to generate introductions and form new relationships between people?
3. **More Integrations.** Why use DonutBot when you can have BiteBuddy?
## Built with
Python, Reflex, Firebase, Together AI, ❤️, and boba 🧋
|
# We'd love if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time
## The Boring Stuff (Intro)
* Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit"
* Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30
* Matthew Yang - ML specialist - aka "What is an API"
## What it does
It's a cross-platform app that can promote mental health and healthier eating habits!
* Log when you eat healthy food.
* Feed your "munch buddies" and level them up!
* Learn about the different types of nutrients, what they do, and which foods contain them.
Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge!
Specifically,
>
> Many of us underestimate how important eating and mental health are to our overall wellness.
>
>
>
That's why we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences.
>
> We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.**
>
>
>
Our app has a two-pronged approach—it addresses mental wellness through both healthy eating and having fun to relieve stress! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activities part of your daily routine. I think this app would really do a great job of that!
Additionally, we focused really hard on accessibility and ease of use. Whether you're on Android, iPhone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;)
## How we built it
The front-end is react-native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses.
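For a flavor of the backend, here's a minimal sketch of what one of our FastAPI endpoints looks like (the route and fields are illustrative, not the exact production schema):

```python
# Sketch of a food-logging endpoint; persistence and auth are omitted.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class MealLog(BaseModel):
    user_id: str
    food: str
    nutrients: list[str]  # e.g. ["protein", "fiber"]

@app.post("/meals")
def log_meal(meal: MealLog) -> dict:
    # In the real app this would persist the log and award buddy XP.
    xp = 10 * len(meal.nutrients)
    return {"status": "ok", "xp_awarded": xp}
```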
## Challenges we ran into
Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning!
There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch.
But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps.
## Accomplishments that we're proud of
As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we attended more hackathons during college.
Above all, it was awesome that we got to create something meaningful (at least, to us).
## What we learned
We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :)
I also learned that the power of midnight doordash credits is akin to magic.
## What's next for Munch Buddies!
We have so many cool ideas that we just didn't have the technical chops to implement in time:
* customizing your munch buddies!
* advanced data analysis on your food history (data science is my specialty)
* exporting your munch buddies and stats!
However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness.
## Final Remark
One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
|
## Inspiration
We, as college students, face the daily problem of having to text lots of people to ask them to go to the dining hall together, all while worrying that we might annoy them. In the times of COVID-19, local restaurants are suffering economically as more and more people opt to cook their own meals. However, Lezeat is the solution! With Lezeat you can quickly see who is able to go for lunch and arrange a location and time in just one click. Not only will you gain valuable friendships, but you will also help the local restaurant community grow. Furthermore, we plan on integrating a "meat someone" feature, where you can grab a meal with people outside of your friend group but within your community (school/place of employment).
## What we learned
During our research we found out that the restaurant industry has grown dramatically over the last few decades, going from total sales of $379 billion in 2000 to $798.7 billion in 2017! Additionally, Americans have been budgeting more and more of their money towards eating out, which is a positive for Lezeat. Fast-food restaurants such as Chipotle and McDonald's posted 10% and 5.7% growth in sales in Q2 of 2019. We also further developed our teamwork and programming skills.
## How did we build it?
After brainstorming various ideas, we got to work and divided ourselves into groups. Not only did we finish a whole prototype in Figma and write a Software Requirements Specification in Overleaf, but we also coded a big chunk of the application in React Native and Node.js. Furthermore, we built the logo and explored the market and potential competitors.
## Challenges we faced
Filtering out the ideas
|
partial
|
## Inspiration
My friend and I needed to find an apartment in New York City during the summer. We found it very difficult to look through multiple listing pages at once, so we thought a bot that suggests apartments would be helpful. However, we did not stop there. We realized that we could also use machine learning so the bot would learn what we like and suggest better apartments. That is why we decided to build RealtyAI.
## What it does
It is a Facebook Messenger bot that allows people to search through Airbnb listings while learning what each user wants. By giving feedback to the bot, you teach it your **general style**, and thus we are able to recommend the apartments that you are going to like, under your budget, in any city of the world :) We can also book the apartment for you.
## How I built it
Our app uses a Flask backend and Facebook Messenger to communicate with the user. The Facebook bot is powered by api.ai, and the ML is done on the backend with sklearn's Naive Bayes classifier.
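Roughly, the preference learner works like the following sketch (the feature layout and labels are assumptions for illustration):

```python
# Sketch: learn a like/dislike model over featurized listings.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

vec = DictVectorizer()
clf = MultinomialNB()

feedback = [
    ({"beds": 2, "doorman": 1, "walkup": 0}, 1),   # liked
    ({"beds": 1, "doorman": 0, "walkup": 1}, 0),   # disliked
]
X = vec.fit_transform([features for features, _ in feedback])
y = [label for _, label in feedback]
clf.fit(X, y)

candidate = vec.transform({"beds": 2, "doorman": 1, "walkup": 0})
print(clf.predict_proba(candidate)[0, 1])  # P(user likes this listing)
```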
## Challenges I ran into
Our biggest challenge was using Python's SQL ORM to store our data. In general, integrating the many libraries we used was quite challenging.
The next challenge we faced was time: our application was slow and timed out on multiple requests. So we implemented an in-memory cache of all the requests and, most importantly, redesigned the code to make it multi-threaded.
## Accomplishments that I'm proud of
Our workflow was very effective. Using Heroku, every commit to master immediately deployed on the server saving us a lot of time. In addition, we all managed the repo well and had few merge conflicts. We all used a shared database on AWS RDS which saved us a lot of database scheme migration nightmares.
## What I learned
We learned how to use Python in depth, integrating it with MySQL and sklearn. We also discovered how to spin up a database on AWS, and learned how to save classifiers to the database and reload them.
## What's next for Virtual Real Estate Agent
If we win, hopefully someone will invest! The bot could be used by companies to automatically arrange accommodations for people coming in for interviews, or by individuals who just want to find the best apartment for their own style!
|
## What it does
InternetVane is an IoT wind/weather vane. Set up and calibrate the InternetVane, and the arrow will point in the real-time direction of the wind at your location.
First, the arrow is calibrated to the direction of the magnetometer. Then, the GPRS Shield works with Twilio and the Google Maps Geolocation API to retrieve the GPS location of the device. Using the location, the EarthNetworks API delivers real-time wind information to InternetVane. Then, the correct, IRL direction of the wind is calculated. The arrow's orientation is updated and recalculated every 30 seconds to provide the user with the most accurate visualization possible.
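The core pointing math, sketched in Python for clarity (the device itself runs Arduino C; the steps-per-revolution constant is an assumption):

```python
# Given the magnetometer heading and the wind bearing from the weather API,
# compute how far to turn the stepper so the arrow points into the wind.
STEPS_PER_REV = 200  # typical stepper resolution (assumption)

def steps_to_wind(device_heading_deg: float, wind_bearing_deg: float) -> int:
    # Angle the arrow must rotate, normalized to (-180, 180] so the
    # motor always takes the shortest path.
    delta = (wind_bearing_deg - device_heading_deg + 180.0) % 360.0 - 180.0
    return round(delta / 360.0 * STEPS_PER_REV)

print(steps_to_wind(90.0, 45.0))   # -25 steps: turn 45 degrees the short way
```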
## How we built it
We used an Arduino UNO with a GPRS Shield, Twilio Functions, the Google Maps Geolocation API, a magnetometer, a stepper motor, and other various hardware.
The elegant, laser-cut enclosure is the cherry on top!
## Challenges we ran into
We were rather unfamiliar with laser cutting and 3D printing. There were many trials and errors to get the perfect, fitted enclosure.
Quite a bit of time was spent deciding how to calibrate the device. The arrow needs to align with the magnetometer before moving to the appropriate direction. Many hardware options were considered before deciding on the specific choice of switch.
|
## What inspired us:
The pandemic has changed the university norm to primarily online courses, increasing our usage of and dependency on textbooks and course notes. Since we are all computer science students, we have many math courses with several definitions and theorems to memorize. When listening to a professor's lecture, we often forget certain theorems that are being referred to. With discussAI, we can easily query the PostgreSQL database with a command and receive an image from the textbook explaining the definition/theorem. Thus, we decided to use our knowledge of machine learning libraries to filter out these pieces of information.
We believe that our program’s concept can be applied to other fields, outside of education. For instance, business meetings or training sessions can utilize these tools to effectively summarize long manuals and to search for keywords.
## What we learned:
We had a lot of fun building this application since we were new to Microsoft Azure. We learned how to integrate tools such as Azure OCR and sklearn to process our information, and we deepened our knowledge of frontend (Angular.js) and backend (Django and Postgres) development.
## How we built it:
We built our web application's frontend using Angular.js for the components and Agora.io for video conferencing. On the backend, we used Django and PostgreSQL to handle API requests from the frontend. We also used several Python libraries to convert the PDF file to PNG images, used Azure OCR to analyze these text images, applied sklearn to analyze the individual text, and finally cropped the images to return specific snippets of definitions/theorems.
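The PDF-to-image step, for instance, is nearly a one-liner with the pdf2image library (paths here are illustrative):

```python
# Sketch: rasterize each textbook page before sending it to Azure OCR.
from pdf2image import convert_from_path

pages = convert_from_path("textbook.pdf", dpi=200)  # one PIL image per page
for i, page in enumerate(pages):
    page.save(f"page_{i:03d}.png", "PNG")
# Each PNG is then uploaded to the Azure OCR endpoint for text analysis.
```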
## Challenges we faced:
The most challenging part was deciding on the ML algorithm for deriving specific image snippets from lengthy textbooks. Other challenges we faced ranged from importing images from Azure Storage to positioning CSS components. Nevertheless, the learning experience was amazing with the help of mentors, and we hope to participate again in the future!
|
partial
|
## Lejr
**Introduction**
A web application that allows you to track how much money your friends owe you; after your friend accepts your request to pay you back, the app will directly deposit the money into your bank account.
**How we did it**
We built the website using Interac's Public API and MongoDB hosted by MLab; the website is hosted on Heroku. Our Node.js/Express backend is also acting as a REST API for our Android application.
**Inspiration**
Our friends keep forgetting to pay us back, and we're uncomfortable with pestering them so we thought of the idea to make payment requests simple and quick by using the Interac API along with a Node.js backend.
|
## Inspiration
We were inspired by the daily struggle of social isolation.
## What it does
Shows the emotion of a text message on Facebook.
## How we built it
We built this using JavaScript, the IBM Watson NLP API, a Python HTTPS server, and jQuery.
## Challenges we ran into
Accessing the message string was a lot more challenging than initially anticipated. Finding the correct API for our needs and updating in real time also posed challenges.
## Accomplishments that we're proud of
The fact that we have a fully working final product.
## What we learned
How to interface JavaScript with a Python backend, and how to manually scrape a templated HTML doc for specific keywords in specific locations.
## What's next
Incorporate the ability to display alternative messages after a user types their initial response.
|
## Inspiration
The inspiration for this project was drawn from the daily experiences of our team members. As post-secondary students, we often make purchases for our peers for convenience, yet forget to follow up. This can lead to disagreements and accountability issues. Thus, we came up with the idea of CashDat to alleviate this commonly faced issue. People will no longer have to remind their friends to pay them back! With the available APIs, we realized that we could create an application to directly tackle this problem.
## What it does
CashDat is an application available on the iOS platform that allows users to keep track of who owes them money, as well as who they owe money to. Users are able to scan their receipts, divide the costs with other people, and send requests for e-transfer.
## How we built it
We used Xcode to program a multi-view app and implement all the screens/features necessary.
We used Python and Optical Character Recognition (OCR) from the Google Cloud Vision API to implement text extraction using AI on the cloud. This was used specifically to extract item names and prices from the scanned receipts.
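As an illustration, pulling item/price pairs out of the OCR output can be done with a simple pattern match (the line format below is an assumption about typical receipt text, not our exact parser):

```python
# Sketch: extract (item, price) pairs from OCR'd receipt lines.
import re

PRICE_LINE = re.compile(r"^(?P<item>.+?)\s+\$?(?P<price>\d+\.\d{2})$")

def parse_receipt(ocr_lines: list[str]) -> list[tuple[str, float]]:
    items = []
    for line in ocr_lines:
        m = PRICE_LINE.match(line.strip())
        if m:
            items.append((m["item"], float(m["price"])))
    return items

print(parse_receipt(["BURGER 8.99", "FRIES $3.49", "THANK YOU"]))
# [('BURGER', 8.99), ('FRIES', 3.49)]
```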
We used Google Firebase to store user login information, receipt images, as well as recorded transactions and transaction details.
Figma was utilized to design the front-end mobile interface that users interact with. The application itself was primarily developed with Swift with focus on iOS support.
## Challenges we ran into
We found that we had a lot of great ideas for utilizing sponsor APIs, but due to time constraints we were unable to fully implement them.
The main challenge was incorporating the Request Money option from the Interac API into our application and Swift code. Since the API was in beta, it was difficult to implement in an iOS app. We certainly hope to keep working on the Interac API implementation, as it is a crucial part of our product.
## Accomplishments that we're proud of
Overall, our team was able to develop a functioning application and were able to use new APIs provided by sponsors. We used modern design elements and integrated that with the software.
## What we learned
We learned about implementing different APIs and about iOS development overall. We also had very little experience with the Flask backend deployment process. This proved to be quite difficult at first, but we learned about setting up environment variables and off-site server setup.
## What's next for CashDat
We see a great opportunity for the further development of CashDat as it helps streamline the process of current payment methods. We plan on continuing to develop this application to further optimize user experience.
|
partial
|
## Inspiration
I wanted to try to do something on my own this time, so I went solo and tried to learn as much as I could.
## What it does
An Android app that you can talk to; it will examine the tone/emotions in your message and then recommend a movie for you to watch based on the emotions it detects.
## How I built it
Using Android, IBM Watson's Tone Analyzer API, and TheMovieDB's API, I integrated everything together, along with speech-to-text, to make the app.
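For illustration, the tone-analysis call looks roughly like this with IBM's Python SDK (the app itself is Android, so this is a sketch of the idea; credentials are placeholders):

```python
# Sketch: get the dominant tones for a spoken message.
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

tone_analyzer = ToneAnalyzerV3(
    version="2017-09-21",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
result = tone_analyzer.tone(
    tone_input={"text": "I had a rough day and just want to relax."},
    content_type="application/json",
).get_result()
for tone in result["document_tone"]["tones"]:
    print(tone["tone_id"], tone["score"])  # e.g. sadness 0.62
```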
## Challenges I ran into
The biggest challenge was communicating with the APIs, but eventually, with trial and error, I figured it out.
## Accomplishments that I'm proud of
I went solo on this project, and I was very proud of myself for learning new things on my own.
## What I learned
How to communicate with various APIs as well as the importance of time management.
## What's next for MovieEmotion
Perhaps doing something after recommending a movie, such as linking to it on Netflix or to theater showtimes, but no API was good enough for this right now.
|
## What it does
GroupEmote is a video-analyzing technology that takes a video, outputs information about the faces detected in the conversation, and analyzes the speakers' tone sentiment.
## How we built it
We used OpenCV to build face detection in a video, which shows a box and an ID for each face it recognizes. We also used IBM Watson tone analyzer and speech-to-text technology to detect what the people in the video are saying, output the transcript, and report the top 3 tones recognized for each person.
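The per-frame face detection looks roughly like this (a simplified sketch using OpenCV's bundled Haar cascade; the face IDs here are naive per-frame indices, not stable tracking):

```python
# Sketch: draw a box and an id for each detected face, frame by frame.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture("meeting.mp4")  # illustrative path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for i, (x, y, w, h) in enumerate(faces):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"id {i}", (x, y - 6),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("GroupEmote", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```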
## Challenges we ran into / What we learned
We originally wanted GroupEmote to analyze video in a live-stream format, and to do so we tried to integrate the video-sharing web app OpenTok (based on WebRTC), since the Google Hangouts API is no longer supported. However, OpenTok's API was not only hard to understand but also did not allow us to pull the audio data from the live video call; therefore we had to pivot to analyzing recorded video.
OpenCV also caused many build problems and didn't have a pre-trained emotion recognition model. As a result we had to find a large data set on our own and ran out of time to train it ourselves with supervised learning. At one point, we tried to look into the Microsoft Emotion API instead of OpenCV but realized that didn't allow direct file streams and had a very slow pull rate, so we had to abandon that idea as well.
While the analysis itself was accurate, the slow HackMIT WiFi and IBM Watson latency made matching the tone analysis to the video a little difficult.
Different members on our team also were using different platforms (OSX, Windows, Linux) which caused certain Python libraries to not work for certain members.
## What's next for GroupEmote
We want to find a way to integrate our existing technology into a live video-sharing web app. Our ultimate vision is a video-conferencing app that looks at each person, detects their emotion from their image as well as their tone from what they say, and then analyzes that data to aid activities like interviewing, to help managers better understand meetings with a diversity of team-member personalities, and to help socially impaired people visually understand social cues. It could also be used as a trigger system: a large change in facial emotion would trigger an audio recording and subsequent tone analysis, providing a more targeted analysis and minimizing the amount of data being processed.
We also want to make our user interface more accessible/aesthetic :)
|
### 🌟 Inspiration
We're inspired by the idea that emotions run deeper than a simple 'sad' or 'uplifting.' Our project was born from the realization that personalization is the key to managing emotional states effectively.
### 🤯🔍 What it does?
Our solution is an innovative platform that harnesses the power of AI and emotion recognition to create personalized Spotify playlists. It begins by analyzing a user's emotions, both from facial expressions and text input, to understand their current state of mind. We then use this emotional data, along with the user's music preferences, to curate a Spotify playlist that's tailored to their unique emotional needs.
What sets our solution apart is its ability to go beyond simplistic mood categorizations like 'happy' or 'sad.' We understand that emotions are nuanced, and our deep-thought algorithms ensure that the playlist doesn't worsen the user's emotional state but, rather, optimizes it. This means the music is not just a random collection; it's a therapeutic selection that can help users manage their emotions more effectively.
It's music therapy reimagined for the digital age, offering a new and more profound dimension in emotional support.
### 💡🛠💎 How we built it?
We crafted our project by combining advanced technologies and teamwork. We used Flask, Python, React, and TypeScript for the backend and frontend, alongside the Spotify and OpenAI APIs.
Our biggest challenge was integrating the Spotify API. When we faced issues with an existing wrapper, we created a custom solution to overcome the hurdle.
Throughout the process, our close collaboration allowed us to seamlessly blend emotion recognition, music curation, and user-friendly design, resulting in a platform that enhances emotional well-being through personalized music.
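For illustration, the emotion-to-playlist step looks roughly like the sketch below, shown here with the spotipy client (we wrote our own wrapper, and the emotion-to-audio-feature mapping is our own assumption, not part of the Spotify API):

```python
# Sketch: map a detected emotion to a Spotify recommendation request.
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

EMOTION_TARGETS = {           # hypothetical mapping
    "sad":   {"target_valence": 0.45, "target_energy": 0.35},
    "happy": {"target_valence": 0.85, "target_energy": 0.75},
}

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
tracks = sp.recommendations(
    seed_genres=["acoustic"], limit=10, **EMOTION_TARGETS["sad"]
)["tracks"]
for t in tracks:
    print(t["name"], "-", t["artists"][0]["name"])
```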
### 🧩🤔💡 Challenges we ran into
🔌 API Integration Complexities: We grappled with integrating and harmonizing multiple APIs.
🎭 Emotion Recognition Precision: Achieving high accuracy in emotion recognition was demanding.
📚 Algorithm Development: Crafting deep-thought algorithms required continuous refinement.
🌐 Cross-Platform Compatibility: Ensuring seamless functionality across devices was a technical challenge.
🔑 Custom Authorization Wrapper: Building a custom solution for Spotify API's authorization proved to be a major hurdle.
### 🏆🥇🎉 Accomplishments that we're proud of
#### Competition Win: 🥇
Our victory validates the effectiveness of our innovative project.
#### Functional Success: ✔️
The platform works seamlessly, delivering on its promise.
#### Overcoming Challenges: 🚀
Resilience in tackling API complexities and refining algorithms.
#### Cross-Platform Success: 🌐
Ensured a consistent experience across diverse devices.
#### Innovative Solutions: 🚧
Developed custom solutions, showcasing adaptability.
#### Positive User Impact: 🌟
Affirmed our platform's genuine enhancement of emotional well-being.
### 🧐📈🔎 What we learned
🛠 Tech Skills: We deepened our technical proficiency.
🤝 Teamwork: Collaboration and communication were key.
🚧 Problem Solving: Challenges pushed us to find innovative solutions.
🌟 User Focus: User feedback guided our development.
🚀 Innovation: We embraced creative thinking.
🌐 Global Impact: Technology can positively impact lives worldwide.
### 🌟👥🚀 What's next for Look 'n Listen
🚀 Scaling Up: Making our platform accessible to more users.
🔄 User Feedback: Continuous improvement based on user input.
🧠 Advanced AI: Integrating more advanced AI for better emotion understanding.
🎵 Enhanced Personalization: Tailoring the music therapy experience even more.
🤝 Partnerships: Collaborating with mental health professionals.
💻 Accessibility: Extending our platform to various devices and platforms.
|
losing
|
## Inspiration
The Materials Engineering Department at McMaster University proposed a challenge to DeltaHacks attendees: to analyze microscopic grain structures in metals. The program should be able to distinguish the grain boundaries and display information about the 3 types of grains in the image (light, dark, and lam).
## What it does
The user interacts with our program through the GUI. They drag and drop a file into the GUI and run the program. It outputs a mask for each of the following: grain boundaries, precipitates, light grains, dark grains, and lam grains. The user can scroll through and save the masks. Information about each type of grain is included in a table: average grain area, average grain length, and number of grains of that type.
## How we built it
We built the GUI using PyQt.
The majority of the image-processing algorithms were programmed in Python using OpenCV. We performed processing steps such as thresholding, condensing, flattening, and expanding borders. The curtaining effect was removed by applying a Fourier transform and then removing the curtaining frequency from the image. The light flares were reduced by sampling the image to obtain a background level and subtracting that from the original image.
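A simplified sketch of the curtaining removal (the notch location and width are assumptions; the real images needed tuning):

```python
# Transform the image into the frequency domain, zero out a narrow band
# where the curtaining energy lives, and transform back.
import numpy as np

def remove_curtaining(img: np.ndarray, band: int = 3) -> np.ndarray:
    f = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    rows, cols = img.shape
    cr, cc = rows // 2, cols // 2
    # Suppress a stripe through the spectrum while keeping the DC component.
    f[:cr - band, cc - band:cc + band + 1] = 0
    f[cr + band:, cc - band:cc + band + 1] = 0
    out = np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
    return np.clip(out, 0, 255).astype(np.uint8)
```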
## Challenges we ran into
The noise caused by the precipitates was the largest challenge we faced, as the noise they introduced at each processing step impacted our ability to extract information from the images. We had to determine how to remove the precipitates from the images early in the processing procedure.
## Accomplishments that we're proud of
We are proud that we accomplished something for each task.
## What we learned
Theoretical image processing techniques do not work well on real images due to noise and other artifacts.
We found an actual application for Fourier transforms.
## What's next for Materialistic
The next steps would be to fine-tune our algorithms so that they work a bit more efficiently. Additionally, we would like to further automate our procedure and improve the functionality by implementing a neural network. Given the data sets that were provided, the results of our image segmentation would have been much more sophisticated if we had the time to train the neural network to recognize the periodic frequency of the curtaining artifacts in the image. ML would be able to optimize the pattern recognition of the curtaining, resulting in a more accurate Fourier transform model to remove the specified frequency content from the image.
|
## Inspiration
During my last internship, I worked on an aging product with numerous security vulnerabilities, but identifying and fixing these issues was a major challenge. One of my key projects was to implement CodeQL scanning to better locate vulnerabilities. While setting up CodeQL wasn't overly complex, it became repetitive as I had to manually configure it for every repository, identifying languages and creating YAML files. Fixing the issues proved even more difficult, as many of the vulnerabilities were obscure, requiring extensive research and troubleshooting. With that experience in mind, I wanted to create a tool that could automate this process, making code security more accessible and ultimately improving internet safety.
## What it does
AutoLock automates the security of your GitHub repositories. First, you select a repository and hit install, which triggers a pull request with a GitHub Actions configuration to scan for vulnerabilities and perform AI-driven analysis. Next, you select which vulnerabilities to fix, and AutoLock opens another pull request with the necessary code modifications to address the issues.
## How I built it
I built AutoLock using Svelte for the frontend and Go for the backend. The backend leverages the Gin framework and Gorm ORM for smooth API interactions, while the frontend is powered by Svelte and styled using Flowbite.
## Challenges we ran into
One of the biggest challenges was navigating GitHub's app permissions. Understanding which permissions were needed and ensuring the app was correctly installed for both the user and their repositories took some time. Initially, I struggled to figure out why I couldn't access the repos even with the right permissions.
## Accomplishments that we're proud of
I'm incredibly proud of the scope of this project, especially since I developed it solo. The user interface is one of the best I've ever created—responsive, modern, and dynamic—all of which were challenges for me in the past. I'm also proud of the growth I experienced working with Go, as I had very little experience with it when I started.
## What we learned
While the unstable CalHacks WiFi made deployment tricky (basically impossible, terraform kept failing due to network issues 😅), I gained valuable knowledge about working with frontend component libraries, Go's Gin framework, and Gorm ORM. I also learned a lot about integrating with third-party services and navigating the complexities of their APIs.
## What's next for AutoLock
I see huge potential for AutoLock as a startup. There's a growing need for automated code security tools, and I believe AutoLock's ability to simplify the process could make it highly successful and beneficial for developers across the web.
|
## What it does
Take a picture, get a 3D print of it!
## Challenges we ran into
The 3D printers going poof on the prints.
## How we built it
* An AI model transforms the picture into depth data; post-processing then turns it into a printable 3D model (see the sketch after this list). And of course, real 3D printing.
* MASV to transfer the 3D model files seamlessly.
* RBC reward system to incentivize users to engage more.
* Cohere to edit image prompts to be culturally appropriate for Flux to generate images.
* Groq to automatically edit the 3D models via LLMs.
* VoiceFlow to create an AI agent that guides the user through the product.
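For illustration, the picture-to-depth step can be sketched with the MiDaS model from torch.hub (our exact model and post-processing differ in details):

```python
# Sketch: estimate a depth map from a photo, then normalize it into a
# heightmap that slicing software can turn into a relief print.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze().numpy()
heightmap = (depth - depth.min()) / (depth.max() - depth.min())
```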
|
partial
|
## Inspiration
Over the last five years, we've seen the rise and the slow decline of the crypto market. It has made some people richer, and many have suffered because of it. We realized that this problem can be solved with data and machine learning. What if we could accurately forecast crypto token prices so that decisions are always calculated? What if we also included a chatbot, so that crypto is a lot less overwhelming for users?
## What it does
*Blik* is an app and a machine learning model, made using MindsDB, that forecasts cryptocurrency data. Not only that, but it also comes with a chatbot that you can talk to, to make calculated decisions for your next trades.
The questions can be as simple as *"How's bitcoin been this year?"* to something as personal as *"I want to buy a Tesla worth $50,000 by the end of next year. My salary is $4000 per month. Which currency should I invest in?"*
We believe that this functionality can help the users make proper, calculated decisions into what they want to invest in. And in return, get high returns for their hard-earned money!
## How we built it
Our tech stack includes:
* **Flutter** for the mobile app
* **MindsDB** for the ML model + real time finetuning
* **Cohere** for AI model and NLP from user input
* **Python** backend to interact with MindsDB and CohereAI
* **FastAPI** to connect frontend and backend.
* **Kaggle** to source the datasets of historic crypto prices
## Challenges we ran into
We started off using the default model training in MindsDB; however, we realized that we needed many specific things, like forecasting at specific dates with a higher horizon. The mentors at the MindsDB counter helped us a lot. With their help, we were able to set up a working prototype and grew confident about our plan.
One more challenge we ran into was that the forecasts for a particular crypto would always end up spitting out the same numbers, making the predictions useless for users.
Then we switched to NeuralTS as our engine, which was perfect. Getting the forecasts to be as accurate as possible while keeping things performant was definitely a challenge for us. Solving every small issue would give rise to another one, but thanks to the mentors and the amazing documentation, we were able to figure out the MindsDB part.
Then we implemented the AI chat feature using Cohere. We had a great experience with the API, as it was easy to use and the chat completions were really good. We wanted the text generated by Cohere to produce an SQL query to run on MindsDB. Getting this right was challenging, as we always needed the same data types in a structured format in order to stitch together an SQL command. We figured this out using advanced prompting techniques and by changing the way we pass the data into the SQL. We also wrote some code to clean up the generated text and make sure it is always compatible.
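The cleanup step looks roughly like the following sketch: extract structured fields from the generated text, validate them, and only then format the SQL (the symbols, table, and column names here are illustrative):

```python
# Sketch: turn free-form LLM output into a safe, well-formed query string.
import re
from datetime import datetime

ALLOWED_SYMBOLS = {"BTC", "ETH", "DOGE"}  # hypothetical whitelist

def build_forecast_query(llm_text: str) -> str:
    symbol = re.search(r"\b(BTC|ETH|DOGE)\b", llm_text.upper())
    date = re.search(r"\d{4}-\d{2}-\d{2}", llm_text)
    if not symbol or symbol.group() not in ALLOWED_SYMBOLS or not date:
        raise ValueError("could not extract a valid symbol/date")
    datetime.strptime(date.group(), "%Y-%m-%d")  # reject malformed dates
    return (
        "SELECT close_forecast FROM crypto_model "
        f"WHERE symbol = '{symbol.group()}' AND date > '{date.group()}';"
    )
```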
## Accomplishments that we're proud of
Honestly, going from an early ideation phase to an entire product in just two days, for an indie team of two college freshmen, is really a moment of pride. We created a fully working product with an AI chatbot and more.
Even though we were both new to all of this, and integrating crypto with AI technologies is a challenging problem, MindsDB was thankfully very fun to work with. We are extremely happy about our MindsDB learnings, as we can now apply it in our other projects to enhance them with machine learning.
## What we learned
We learnt AI and machine learning, using MindsDB, interacting with AI and advanced prompting, understanding user's needs, designing beautiful apps and presenting data in a useful yet beautiful way in the app.
## What's next for Blik.
At Blik, long term, we plan on expanding this into a full-fledged crypto trading solution, where users can sign up and create automations that they can run to "get rich quick". Short term, we plan to increase the model's accuracy by aggregating news into it, along with cryptocurrency information like founder details and each currency's market ownership. All this data can help us develop the model further to be more accurate and helpful.
|
## Inspiration
We saw problems with different kinds of payment systems. There are usually long wait times, which is further increased by people searching for their wallets (not to forget the added processing fees). We wanted to create something that was easy to use and that would remove the annoyances and obtrusiveness of payment, and to do this, we decided to approach the problem using bleeding-edge yet proven technology, such as blockchain (not part of demo, but lightly implemented) and proximity payment.
## What it does
Bloqpay will eventually run on blockchain (to secure payment and avert processing fees). It tracks a consumer's location to add and confirm payment; for our current use case, this will add and confirm movie theatre purchases. It is very useful for events with different areas and many other similar scenarios: the potential for this technology is unbounded!
## How we built it
We used Android Studio, Microsoft Azure, a Raspberry Pi, and Sketch, along with technologies like Node.js, Socket.IO, and others listed above.
## Challenges we ran into
One of the Raspberry Pis failed, and we lacked a keyboard and mouse to debug it. Later, we lacked an SD card adapter to reinstall the OS, so we travelled to the mall and lost a lot of time gathering equipment. However, we were still able to pull together a very strong prototype.
## Accomplishments that we're proud of
Getting the tech to work, and overcoming some very tough obstacles. We also overcame a major bug we were facing right on time.
## What we learned
We learned that this technology and payment system will eventually become part of our daily lives (decentralization is the future of payment and autonomous software). We also learned how to use the Raspberry Pi and different low-level Bluetooth software to gather proximity data.
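A sketch of the proximity idea, using the standard log-distance path-loss model to turn BLE signal strength into an approximate distance (shown with the bleak library for illustration; the constants are assumptions):

```python
# Sketch: scan for nearby BLE devices and estimate their distance from RSSI.
import asyncio
from bleak import BleakScanner

TX_POWER = -59     # expected RSSI at 1 m (assumption)
PATH_LOSS_N = 2.0  # free-space path-loss exponent (assumption)

def rssi_to_metres(rssi: int) -> float:
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

async def scan():
    for device in await BleakScanner.discover(timeout=5.0):
        if device.rssi is not None:
            print(device.address, f"~{rssi_to_metres(device.rssi):.1f} m")

asyncio.run(scan())
```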
## What's next for Bloqpay
We want to tackle other use cases and implement Blockchain. We are definitely going to pursue this idea further outside of the hackathon space.
|
## 💡 Inspiration 💡
Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire sheet music within the span of a few seconds! Sometimes, you just want someone to play the piano for you, so we aimed to make a robot that could be your little personal piano player!
This project allows us to bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors.
Ultimately, our project makes music more inclusive and brings people together through shared experiences.
## ❓What it does ❓
Ludwig is your music prodigy. Ludwig can read any sheet music that you upload to him, then convert it to a MIDI file, convert that to playable notes on the piano scale, then play each of those notes on the piano with its fingers! You can upload any kind of sheet music and see the music come to life!
## ⚙️ How we built it ⚙️
For this project, we leveraged OpenCV for computer vision to read the sheet music. The sheet reading goes through a process of image filtering, converting it to binary, classifying the characters, identifying the notes, then exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. In the Raspberry Pi, we developed functions, key mappers, and note mapping systems that allow Ludwig to play the piano effectively.
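For illustration, the MIDI-to-finger mapping can be sketched with the mido library (the note-to-servo map and the press/release stubs are illustrative, not our exact wiring):

```python
# Sketch: walk a MIDI file in real time and drive one servo per note.
import mido

SERVO_FOR_NOTE = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4}  # C4..G4 -> channel

def press_key(channel: int) -> None:
    print(f"servo {channel}: press")    # stand-in for the servo driver call

def release_key(channel: int) -> None:
    print(f"servo {channel}: release")

def play(midi_path: str) -> None:
    for msg in mido.MidiFile(midi_path).play():  # .play() sleeps in real time
        if msg.type == "note_on" and msg.velocity > 0:
            if (ch := SERVO_FOR_NOTE.get(msg.note)) is not None:
                press_key(ch)
        elif msg.type in ("note_off", "note_on"):  # note_on w/ velocity 0
            if (ch := SERVO_FOR_NOTE.get(msg.note)) is not None:
                release_key(ch)
```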
## Challenges we ran into ⚔️
We had a few road bumps along the way. Major ones included file transfer over SSH, as well as making fingers strong enough to withstand the torque of pressing piano keys. It was also fairly difficult to figure out the OpenCV pipeline for reading the sheet music. Our model was fairly slow at reading and converting the music notes, but we were able to learn from the mentors at Hack The North how to speed it up and make it more efficient.
## Accomplishments that we're proud of 🏆
* Got a working robot to read and play piano music!
* File transfer working via SSH
* Conversion from MIDI to key presses mapped to fingers
* Piano playing melody abilities!
## What we learned 📚
* Working with Raspberry Pi 3 and its libraries for servo motors and additional components
* Working with OpenCV and fine tuning models for reading sheet music
* SSH protocols and just general networking concepts for transferring files
* Parsing MIDI files into useful data through some really cool Python libraries
## What's next for Ludwig 🤔
* MORE OCTAVES! We might add some sort of DC motor with a gearbox, essentially a conveyor belt, enabling the motors to move up the keyboard to allow for more octaves.
* Improved photo recognition for reading accents and BPM
* Realistic fingers via 3D printing
|
partial
|
## Inspiration
**As Computer Science is a learning-intensive discipline, students tend to aspire to their professors**. We were inspired to hack this weekend by our beloved professor Daniel Zingaro (UTM). Answering questions in Dan's classes often ends up being a difficult part of lectures, as Dan is visually impaired. This means students are expected to yell to get his attention when they have a question, directly interrupting the lecture. Teacher's Pet could completely change the way Dan teaches and interacts with his students.
## What it does
Teacher's Pet (TP) empowers students and professors by making it easier to ask and answer questions in class. Our model helps streamline lectures by allowing professors to efficiently target and destroy difficult and confusing areas in the curriculum. Our module consists of an app, a server, and a camera. A professor, teacher, or presenter may download the TP app and receive a push notification, in the form of a discreet vibration, whenever a student raises their hand with a question. This spares students from feeling anxious about keeping their hands up, and professors from receiving bad ratings for inadvertently neglecting students while focusing on teaching.
## How we built it
We utilized an Azure Cognitive Services backend and had to manually train our AI model with over 300 images from around UofTHacks. Imagine four sleep-deprived kids running around a hackathon asking participants to "put your hands up". The AI is wrapped in a Python interface and takes input from a camera module. The camera module is hooked up to a Qualcomm DragonBoard 410c, which hosts our Python program. Upon registering, you may pair your smartphone to your TP device through our app and set TP up in your classroom within seconds. Upon detecting a raised hand, TP sends a simple vibration to the phone in your pocket, allowing you to quickly answer a student query.
## Challenges we ran into
We had some trouble accurately differentiating a student stretching from actually raising their hand, so we summed the model's confidence over 10 frames (250 ms). This improved our success rate dramatically.
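The smoothing trick itself is simple; roughly (the threshold value is an assumption):

```python
# Only flag a raised hand when the summed per-frame detector confidence
# over the last 10 frames clears a threshold.
from collections import deque

WINDOW, THRESHOLD = 10, 6.0
recent = deque(maxlen=WINDOW)

def hand_raised(frame_confidence: float) -> bool:
    """frame_confidence: per-frame 'hand raised' probability from the model."""
    recent.append(frame_confidence)
    return len(recent) == WINDOW and sum(recent) >= THRESHOLD

# A brief stretch (a few high-confidence frames) won't trip the alert,
# but a sustained raised hand will.
```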
Another challenge we faced was installing the proper OS and drivers on our DragonBoard. We had to "Learn2Google" all over again (for hours and hours). Luckily, we managed to get our board working, and our project was up and running!
## Accomplishments that we're proud of
Gosh darn we stayed up for a helluva long time - longer than any of us had previously. We also drank an absolutely disgusting amount of coffee and Red Bull. In all seriousness, we are all proud of each other's commitment to the team. Nobody went to sleep while someone else was working. Teammates went on snack and coffee runs in freezing weather at 3 AM. Smit actually said a curse word. Everyone assisted on every aspect to some degree, and in the end, that fact likely contributed to our completion of TP. The biggest accomplishment that came from this was knowledge of various new APIs, and the gratification of building something to help our fellow students and professors.
## What we learned
One of the biggest lessons we took away was that **patience is key**. Over the weekend, we struggled to work with datasets as well as our hardware. Initially, we tried to perfect as much as possible and stressed over what we had left to accomplish in the timeframe of 36 hours. We soon understood, based on words of wisdom from our mentors, that *the first prototype of anything is never perfect*. We made compromises, but made sure not to cut corners. We did what we had to do to build something we (and our peers) would love.
## What's next for Teachers Pet
We want to put this in our own classroom. This week, our team plans to sit with our faculty to discuss the benefits and feasibility of such a solution.
|
## Inspiration
We know that at Hack the North and other similar hackathon events, finding the right team is one of the most crucial factors for success. Groups with a wide range of skills tend to do better than teams that are solely tech- or business-based. That's how we came up with Hatch, here to find your perfect Hackathon Match.
## What it does
Hatch is a web-based application that helps Hackathon participants find their perfect team. Based on the skillset needed for your team, the system will search the database for applicants that are most suitable for you. Through a card system, the team and applicant may send requests to each other to connect further. Both parties may then accept or reject the merger of the teams. In a matter of a few swipes, you are ready for your next Hackathon!
## How we built it
The design of the website was prototyped in Figma. From there, the front end was built using React. We used CockroachDB to store the data and Express.js to connect the frontend to the backend.
## Challenges we ran into
Many of the challenges we encountered were due to most team members' lack of experience with the frameworks we decided to use, like CockroachDB, Express.js, and React. This meant large amounts of time were spent debugging the project instead of implementing the necessary features. We also took more time than we would have liked to come up with our idea. Originally, we were planning on doing a hack with the AdHawk glasses, but we found out that other hackers had already reserved all the glasses, so we had to think of another idea.
## Accomplishments that we're proud of
We are satisfied with how user-friendly our application is when it comes to navigating around the site. The card swiping system creates an efficient and effective condition for users to find the team they’re looking for while maintaining an enjoyable experience. In addition, being relatively new hackers, we are proud that we were able to work so cohesively as a team to brainstorm ideas, support each other and persevere through challenges.
## What we learned
We learned the importance of time management during stressful and time-constrained situations like hackathons. The time spent debugging and perfecting unnecessary details could be better spent elsewhere to improve the overall quality of the project. We also learned the importance of choosing not the theoretically best frameworks, but those that we are most familiar with. From planning ahead for the project to even deciding what time to sleep is best, time management plays a crucial role in determining a project’s success.
## What's next for Hatch
A more complex search algorithm for matching up hackers, taking into account common and wanted skills, number of teammates, and other improvements that will enhance the usability of the system.
In addition, we want to allow users to edit their profile and give them more options to customize their profile (links, resumes, etc.)
|
## Inspiration
As university students, we have been noticing issues with very large class sizes. With lectures often being taught to over 400 students, it becomes very difficult and anxiety-provoking to speak up when you don't understand the content. As well, with classes of this size, professors do not have time to answer every student who raises their hand. This raises the problem of professors not being able to tell if students are following the lecture, and not answering questions efficiently. Our hack addresses these issues by providing a real-time communication environment between the class and the professor. KeepUp has the potential to increase classroom efficiency and improve student experiences worldwide.
## What it does
KeepUp allows the professor to gauge the understanding of the material in real-time while providing students a platform to pose questions. It allows students to upvote questions asked by their peers that they would like to hear answered, making it easy for a professor to know which questions to prioritize.
## How We built it
KeepUp was built using JavaScript and Firebase, which provided hosting for our web app and the backend database.
## Challenges We ran into
As this was our first time working with a Firebase database, we encountered some difficulties pulling data back out of it. It took a lot of work to get this part of the hack working, which unfortunately took time away from implementing some other features (see the What's Next section). But it was very rewarding to have a working backend in Firebase, and we are glad we worked to overcome the challenge.
## Accomplishments that We are proud of
We are proud of creating a useful app that helps solve a problem that affects all of us. We recognized that there is a gap in between students and teachers when it comes to communication and question answering and we were able to implement a solution. We are proud of our product and its future potential and scalability.
## What We learned
We all learned a lot throughout the implementation of KeepUp. First and foremost, we got the chance to learn how to use Firebase for hosting a website and interacting with the backend database. This will prove useful to all of us in future projects. We also further developed our skills in web design.
## What's next for KeepUp
There are several features we would like to add to make KeepUp more efficient in classrooms:
* A timeout feature, so that questions disappear after 10 minutes of inactivity (10 minutes without being upvoted)
* A widget, so that the basic information from the website can be seen in the corner of your screen at all times
* User login for more specific individual functions; for example, a teacher could remove answered questions, or the original poster could mark their question as answered
* Censoring of questions as they are posted, so nothing inappropriate gets through
|
losing
|
## Inspiration
The attendance problem at UC Berkeley has not received any attention. Firstly, students can easily fake their attendance by sending Google Form sign-in links and Top Hat codes to friends who are skipping class. Although TopHat and Google Forms are feasible options in very small classrooms, where professors can verify with a quick head count, larger classes make it impossible to verify student attendance.
College instructors of larger classes often wonder if students are attending class regularly and paying attention. Today, iClickers are used as a classroom response system in larger classes. The hardware consists of a remote keypad device called a clicker, a receiving device connected to a computer, and the Reef software. iClicker models limit the maximum number of clickers that can be supported in one classroom, making iClickers unscalable beyond a certain point. For example, computer science classes at Berkeley, with around 1800 students, would not be able to use clickers for attendance.
Further, using TopHat and Google Forms requires students to manually type in a URL and access a webpage, where they enter all of their student information and a secret attendance code. This code can be sent to a friend who is not present in the classroom within a matter of seconds, and the attendance is therefore inaccurate.
Another factor to consider is cost. A TopHat subscription costs each student around $25 per semester. iClickers are deployed to students at $40 each, and the Reef software that supports them costs the university and students an additional subscription fee! Students are responsible for carrying the iClicker remote to every class, or else they receive no attendance credit, and the iClicker can run out of charge. On top of the $40 for the iClicker, students must pay $4 for batteries and are responsible for replacing a broken unit, which comes with a very limited warranty.
Our team is looking for a more cost-effective solution without the issue of fake attendance whereby friends try to send attendance URLs/codes or sneak in a friend’s iClicker. Hence, we thought of TickIn!
## What it does
TickIn simply checks (“ticks”) you in. Using Bluetooth and Arduino beacon technology, TickIn checks students in with three simple steps.
1. Once the Arduino 101 beacon is turned on, it emits Bluetooth Low Energy signals.
2. Then, any Bluetooth device within close range of the Arduino beacon detects the signal. With the TickIn app installed, the smartphone recognizes the beacon specific to a course and asks the student to log in using their college credentials.
3. Finally, the student can press the send button if they are within the acceptable range of the Arduino beacon, and the information is sent to a web server. The server collects, displays, and analyzes the data for the professor on a website using visual charts and graphics.
TickIn’s technology is scalable to a class of virtually any size. For example, classes as large as 2000 students can use TickIn’s technology to ensure that students are present in class, and TickIn’s built-in web analytics can analyze the data for the professor on the TickIn website.
## How We Built TickIn
For the TickIn App: We used XML to define the app's layouts and Java to build its back-end logic, all in Android Studio. The APIs involved in the application itself are the Android API and the Java API.
For the Arduino (Beacon): We used C, which works close to the hardware.
For retrieving data / the website: We used HTML/CSS for the frontend of the website, JavaScript and Ruby as the bridge between the frontend and the backend, and Rails with a SQL database for the backend.
## Challenges We Ran Into
We expected to have beacons at the venue, but there were none available. We instead treated the Arduino 101 as a beacon, because it features wireless Bluetooth connectivity. Further, since none of our team members had programmed Bluetooth devices or worked with MAC addresses before, a substantial amount of research was required.
## Accomplishments that We're Proud Of
Some student response systems currently in use are iClickers, Top Hat, and Google Forms. These systems come at an unnecessary additional cost to students.
We strongly believe this product is a cost-effective and easy-to-use solution to the attendance problem at Berkeley, benefiting both students and the university.
We can extend the usage of TickIn to other universities and schools, and even outside classroom settings. For instance, large events like hackathons can use TickIn to avoid long queues for participant/attendee check-in.
Overall, TickIn is portable, as it is a smartphone application, requiring only the professor to carry the beacon to their classrooms. TickIn is cost-effective, as the application can be deployed for a very low cost to the large student population and an estimated $25 for the professor. TickIn is scalable to any number of students and generates visually appealing reports on the data collected from the mobile app. Also, the same beacon can be shared between different classes by reconfiguring its ID for different times and locations. Most importantly, students can no longer fake their attendance credit, as a student has to be physically present to send their information to the server through the TickIn application.
## What we learned
* Connecting the server and the app
* How to create GIFs and use various APIs
* How to use Arduinos as beacons (and 3D-model a casing)
* How to integrate our team's varied skill set to create an effective product
* Brainstorming and collaboration are key to success
* The process of creating a product through ideation, design, development, testing, and deployment
* Object relationships are extremely important for data structure management
## What's next for TickIn
We first want to make the TickIn application compatible with iOS as well, so that both Android and iOS users can have access to it. We are also planning to upgrade the app so that it not only checks attendance but also facilitates quizzes and conducts polls.
iClickers allow only professors to ask questions of the students, which leads to one-way interaction between professors and students. Therefore, we are planning to design TickIn so that students can also ask questions via the application. While iClickers do not allow students to type anything, TickIn will provide text boxes so that lengthier responses and questions can be conveyed to the professor.
|
## Inspiration
We all know that moment when you're with your friends, have the time, but don't know what to do! Well, SAJE will remedy that.
## What it does
We are an easy-to-use website that takes your current location and interests and generates a custom itinerary to fill the time you have to kill. Based on the time interval you indicate, we find events and other things for you to do in the local area, factoring in travel time.
## How we built it
This webapp was built using a `MEN` stack: MongoDB, Express, and Node.js. Outside of the basic infrastructure, multiple APIs were used to generate content (specifically events) for users: Amadeus, Yelp, and Google Directions.
## Challenges we ran into
Some challenges we ran into revolved around using APIs, reading documentation, and getting acquainted with someone else's code. Merging the frontend and backend also proved to be tough, as members had to find ways of integrating their individual components while ensuring all functionality was maintained.
## Accomplishments that we're proud of
We are proud of a final product that we legitimately think we could use!
## What we learned
We learned how to write recursive asynchronous fetch calls (trust me, after 16 straight hours of code, it's really exciting)! Outside of that we learned to use APIs effectively.
## What's next for SAJE Planning
In the future we can expand to include more customizable parameters, better form styling, or querying more APIs to be a true event aggregator.
|
# Babble: PennAppsXVIII
PennApps Project Fall 2018
## Babble: Offline, Self-Propagating Messaging for Low-Connectivity Areas
Babble is the world's first and only chat platform that can be installed, set up, and used 100% offline. The platform has a wide variety of use cases, such as communities with limited internet access in places like North Korea, Cuba, and Somalia. It can also maintain communications in disaster situations where internet infrastructure is damaged or sabotaged (e.g., war zones and natural disasters).
### Demo Video
See our project in action here: <http://bit.ly/BabbleDemo>
[](http://www.youtube.com/watch?v=M5dz9_pf2pU)
## Offline Install & Setup
Babble (a zipped APK) can be sent from one user to another via Android Beam and installed from there. This allows any user to install the app just by tapping their phone to that of another user, 100% offline.
## Offline Send
All Babble users connect to all nearby devices via a localized mesh network created using the Android Nearby Connections API. Messages can be sent directly from device to device (m-to-n peer-to-peer) or daisy-chained from peer to peer to ... to peer.
Each Babble user's device keeps a localized ledger of all messages that it has sent and received, as well as an amalgamation of all of the ledgers of every device that this instance of Babble has been connected directly to via Android Nearby.
The combination of the Android Nearby Connections API with this decentralized, distributed ledger allows for messages to propagate across mesh networks and move between isolated networks as users leave one mesh network and join another.
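Conceptually, the ledger merge is just a union keyed on message id, which is what lets copies propagate across meshes without duplication. Below is a language-agnostic sketch (shown in Python, though Babble itself is an Android app):

```python
def merge_ledgers(mine, theirs):
    """Both ledgers map message_id -> message; union, keeping our copy on collisions."""
    merged = dict(theirs)
    merged.update(mine)
    return merged

a = {"m1": {"from": "alice", "text": "hi"}}
b = {"m2": {"from": "bob", "text": "hello"}}
assert set(merge_ledgers(a, b)) == {"m1", "m2"}
```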
## Cloud Sync when Online
Whenever an instance of Babble gains internet access, it uploads a copy of its ledger to a MongoDB Atlas cluster running on Google Cloud. There, the local ledger is amalgamated with the global ledger, which contains all messages sent worldwide. The local copy of the ledger is then updated from the global copy to contain messages for nearby users.
## Use Cases
### Internet Infrastructure Failure: Natural Disaster
Imagine a natural disaster situation in which large-scale internet infrastructure is destroyed or otherwise not working correctly. A small number of app users could distribute the app to everyone affected by the outage and allow them to communicate with loved ones and emergency services. Additionally, this would provide a platform through which emergency services could issue public alerts to the entire mesh network.
### Untraceable and Unrestrictable Communication in North Korea
One of the future directions we would like to take is an Ethereum-esque blockchain-based ledger. This would allow for 100% secure, private, and untraceable messaging. Additionally, the Android Nearby Connections API can communicate between devices via cellular network, Wi-Fi, Bluetooth, NFC, and ultrasound, which makes our messages relatively immune to jamming. With the mesh network, it would be difficult to block messaging on a large scale.
As a result of this feature set, Babble would be a perfect app for open, unobstructed, uncensored, and otherwise unrestricted communication inside a country with heavily restricted internet access like North Korea.
### Allowing Cubans to Communicate with Family and Friends in the US
Take the use case of a Cuba-wide rollout. There would be a limited number of users in large cities like Havana or Santiago de Cuba with internet access, as well as a number of users distributed across the country with occasional internet access. Through both offline send and cloud sync, 100% offline users in Cuba would be able to communicate with family stateside.
## Future Goals and Directions
Our future goals are to build better stability and more features, such as image and file sharing, emergency messaging, integration with emergency services and the 911 decision tree, end-to-end encryption, better ledger management, and conversion of the ledger to an Ethereum-esque anonymized blockchain to allow for 100% secure, private, and untraceable messaging.
Ultimately, the most insane use of our platform would be as a method for rolling out low bandwidth internet to the offline world.
Name creds go to Chris Choi
|
partial
|
## Inspiration
**Introducing Ghostwriter: Your silent partner in progress.** Ever been in a class where resources are so hard to come by, you find yourself practically living at office hours? As teaching assistants on **increasingly short-handed course staffs**, it can be **difficult to keep up with student demands while making long-lasting improvements** to your favorite courses.
Imagine effortlessly improving your course materials as you interact with students during office hours. **Ghostwriter listens intelligently to these conversations**, capturing valuable insights and automatically updating your notes and class documentation. No more tedious post-session revisions or forgotten improvement ideas. Instead, you can really **focus on helping your students in the moment**.
Ghostwriter is your silent partner in educational excellence, turning every interaction into an opportunity for long-term improvement. It's the invisible presence that delivers visible results, making continuous refinement effortless and impactful. With Ghostwriter, you're not just tutoring or bug-bashing - **you're evolving your content with every conversation**.
## What it does
Ghostwriter hosts your class resources, and supports searching across them in many ways (by metadata, semantically by content). It allows adding, deleting, and rendering markdown notes. However, Ghostwriter's core feature is in its recording capabilities.
The record button starts a writing session. As you speak, Ghostwriter will transcribe and digest your speech, decide whether it's worth adding to your notes, and if so, navigate to the appropriate document and insert them at a line-by-line granularity in your notes, integrating seamlessly with your current formatting.
## How we built it
We used Reflex to build the app full-stack in Python and to support the various note-management features, including adding, deleting, selecting, and rendering notes. As notes are added to the application database, they are summarized and then embedded by Gemini 1.5 Flash-8B before being added to ChromaDB with a shared key. Our semantic search is likewise powered by Gemini embeddings and ChromaDB.
The recording feature is powered by Deepgram's threaded live-audio transcription API. The text is processed live by Gemini, and chunks are sent to ChromaDB for queries. Distance metrics are used as thresholds to decide whether to create no note, add to an existing note, or create a new note (a minimal sketch of this routing follows). In the latter two cases, llama3-70b-8192 is run through Groq to write on our existing documents. It does this through RAG over our docs, plus some prompt engineering. To make insertion granular, we add unique tokens identifying candidate insertion points throughout the original text. We then structurally generate the desired markdown, as well as the desired point of insertion, and render the changes live to the user.
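The routing step looks roughly like the sketch below. The two distance cutoffs, and the reading that very far chunks are filler while the middle band becomes a new note, are illustrative assumptions rather than our tuned values.

```python
import chromadb

IGNORE_ABOVE = 0.8   # assumed: too far from everything, likely filler speech
APPEND_BELOW = 0.35  # assumed: close enough to merge into the nearest note

client = chromadb.Client()
notes = client.get_or_create_collection("notes")

def route_chunk(chunk_text, chunk_embedding):
    """Decide whether a transcribed chunk is dropped, appended, or becomes a new note."""
    if notes.count() > 0:
        hit = notes.query(query_embeddings=[chunk_embedding], n_results=1)
        distance = hit["distances"][0][0]
        if distance < APPEND_BELOW:
            return ("append", hit["ids"][0][0])   # hand off to the Groq writer
        if distance > IGNORE_ABOVE:
            return ("ignore", None)               # not noteworthy, skip
    new_id = f"note-{notes.count()}"
    notes.add(ids=[new_id], documents=[chunk_text], embeddings=[chunk_embedding])
    return ("create", new_id)
```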
## Challenges we ran into
Using Deepgram and live generation required a lot of tasks to run concurrently without blocking UI interactivity. We had some trouble reconciling the requirements posed by Deepgram and Reflex on how these were handled, which required us to redesign the backend a few times.
Generation was also rather difficult, as text would come out with irrelevant vestiges and explanations. It took a lot of trial and error through prompting and other tweaks to the generation calls and structure to get our required outputs.
## Accomplishments that we're proud of
* Our whole live note-generation pipeline!
* From the audio transcription process to granular retrieval-augmented structured generation.
* Spinning up a full-stack application using Reflex (especially the frontend, as two backend engineers)
* We were also able to set up a few tools to push dummy data into various points of our process, which made debugging much, much easier.
## What's next for GhostWriter
Ghostwriter can work on the student-side as well, allowing a voice-interface to improving your own class notes, perhaps as a companion during lecture. We find Ghostwriter's note identification and improvement process very useful ourselves.
On the teaching end, we hope Ghostwriter will continue to grow into a well-rounded platform for educators of all kinds. We envision office-hour questions and engagement flowing through our platform being aggregated to improve course planning and better fit students' needs.
Ghostwriter's potential doesn't stop at education. In the software world, where companies like AWS and Databricks struggle with complex documentation and enormous solutions teams, Ghostwriter shines. It transforms customer support calls into documentation gold, organizing and structuring information seamlessly. This means fewer repetitive calls and more self-sufficient users!
|
## Inspiration
Imagine you're sitting in your favorite coffee shop when a unicorn startup idea pops into your head. You open your laptop and choose from a myriad of productivity tools to jot your idea down. It's so fresh in your brain, you don't want to waste any time, so you type fervently, thinking of your new idea and its tangential components. After a rush of pure ideation, you take a breath to admire your work, but disappointment sets in. Now the hard work begins: you go back through your work, excavating key ideas and organizing them.
***Eddy is a brainstorming tool that brings autopilot to ideation. Sit down. Speak. And watch Eddy organize your ideas for you.***
## Learnings
Melding speech recognition and natural language processing tools required us to learn how to transcribe live audio, determine sentences from a corpus of text, and calculate the similarity of each sentence. Using complex and novel technology, each team member took a holistic approach and learned new implementation skills on all sides of the stack.
## Features
1. **Live mindmap**—Automatically organize your stream of consciousness by simply talking. Using semantic search, Eddy organizes your ideas into coherent groups to help you find the signal through the noise.
2. **Summary Generation**—Helpful for live note taking, our summary feature converts the graph into a Markdown-like format.
3. **One-click UI**—Simply hit the record button and let your ideas do the talking.
4. **Team Meetings**—No more notetakers: facilitate team discussions through visualizations and generated notes in the background.

## Challenges
1. **Live Speech Chunking** - To extract coherent ideas from a user’s speech while processing the audio live, we had to design a paradigm that parses overlapping intervals of speech, reduces them to a disjoint union of sentences, and then sends the distinct sentences to our NLP model for similarity scoring (a minimal sketch follows this list).
2. **API Rate Limits**—OpenAI rate limits required a more efficient audio-processing mechanism and fewer round-trip requests for keyword extraction and embeddings.
3. **Filler Sentences**—Not every sentence contains a concrete and distinct idea. Some sentences go nowhere and these can clog up the graph visually.
4. **Visualization**—Force graph is a premium feature of React Flow. To mimic this intuitive design as much as possible, we added some randomness to placement; however, building a better node-placement system could help declutter and prettify the graph.
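Below is a minimal sketch of the chunk-merging and grouping paradigm from challenge 1, in Python to match our stack. The `embed` callable and the similarity cutoff are stand-ins for whichever embedding model and threshold are used; both are assumptions, not our exact values.

```python
import numpy as np

SIM_THRESHOLD = 0.75  # assumed cutoff for joining an existing idea group

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def new_sentences(prev_sentences, window_sentences):
    """Disjoint union: drop sentences already seen in the overlapping interval."""
    seen = set(prev_sentences)
    return [s for s in window_sentences if s not in seen]

def assign(sentence, groups, embed):
    """Attach a sentence to the most similar idea group, or start a new one."""
    vec = np.asarray(embed(sentence))
    best, best_sim = None, SIM_THRESHOLD
    for group in groups:
        sim = cosine(vec, group["centroid"])
        if sim > best_sim:
            best, best_sim = group, sim
    if best is None:
        groups.append({"centroid": vec, "sentences": [sentence]})
    else:
        best["sentences"].append(sentence)
        n = len(best["sentences"])
        best["centroid"] = (best["centroid"] * (n - 1) + vec) / n  # running mean
```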
## Future Directions
**AI Inspiration Enhancement**—Using generative AI, it would be straightforward to add enhancement capabilities such as generating images for coherent ideas, or business plans.
**Live Notes**—Eddy can be a helpful tool for transcribing and organizing meeting and lecture notes. With improvements to our summary feature, Eddy will be able to create detailed notes from a live recording of a meeting.
## Built with
**UI:** React, Chakra UI, React Flow, Figma
**AI:** HuggingFace, OpenAI Whisper, OpenAI GPT-3, OpenAI Embeddings, NLTK
**API:** FastAPI
# Supplementary Material
## Mindmap Algorithm

|
## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front end, we used React, which allows us to create interactive user interfaces. On the back end, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication between the resource-intensive back-end tasks, we used RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E-2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication, which allows us to keep the connection open; once the work queue is done processing data, it sends a notification to the React client. A sketch of the worker side of this pipeline follows.
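The worker side reduces to something like the sketch below: a Celery task consuming from RabbitMQ that checks the MongoDB cache before doing any expensive generation. `generate_page` stands in for the GPT-3 and DALL-E-2 calls, and the broker/database URLs are illustrative assumptions.

```python
from celery import Celery
from pymongo import MongoClient

app = Celery("dream", broker="amqp://guest@localhost//")   # RabbitMQ broker
pages = MongoClient("mongodb://localhost:27017")["dream"]["pages"]

def generate_page(prompt):
    """Placeholder for the OpenAI text + image calls."""
    raise NotImplementedError

@app.task
def build_page(prompt):
    cached = pages.find_one({"prompt": prompt})
    if cached:                                   # cache hit: skip regeneration
        return cached["page"]
    page = generate_page(prompt)
    pages.insert_one({"prompt": prompt, "page": page})
    # in the full pipeline, a socket.io event now tells the React client it's ready
    return page
```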
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users.
|
winning
|
## Inspiration
If we've learned anything from the 2016 US Presidential Election, it's that we often don't fully understand the other side's opinions. Here's an easy way to better inform yourself!
## What it does
The Other Side analyzes the content of the page you're currently browsing and scours the Internet for news articles about the same topic but written from the opposing political perspective. Snippets of these articles are presented in a pop-up.
## How we built it
The back-end server is Python, and the machine learning is handled mainly by Microsoft Azure ML Studio and sklearn. The corpus of data we used was text data from manually labelled well-known websites.
## Challenges we ran into
Firstly, it was extremely time-consuming and labor-intensive to gather the necessary labelled data for predicting political ideology. We scraped numerous news websites and learned how to use common web-mining libraries such as Beautiful Soup. Furthermore, it is incredibly difficult to accurately predict political ideology from a small snippet of text, so acquiring an effective model required extensive cross-validation and experimentation. We used both Python's sklearn and Microsoft Azure ML Studio, both of which were new tools to us (a representative sklearn pipeline is sketched below).
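For illustration, the sklearn side of such a pipeline can be as small as the sketch below; the toy snippets and labels are placeholders for the scraped, manually labelled corpus, and the feature/model choices here are representative rather than our exact configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = ["snippet from outlet A ...", "snippet from outlet B ..."]  # scraped articles
labels = [0, 1]  # 0 = liberal-leaning source, 1 = conservative-leaning source

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
# cross-validation (needs a real-sized corpus, not this two-row toy)
print(cross_val_score(model, texts, labels, cv=5).mean())

model.fit(texts, labels)
print(model.predict(["some new article snippet"]))
```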
## Accomplishments that we're proud of
We're really proud and excited that we managed to make our ML models work! It was a really interesting and important challenge, and we hope to be able to apply these skills further in the future.
## What we learned
## What's next for The Other Side
|
## Inspiration
An informed electorate is as vital as the ballot itself in facilitating a true democracy. In this day and age, it is not a lack of information but rather an excess that threatens to take power away from the people. Finding the time to research all 19 Democratic nominee hopefuls to make a truly informed decision is a challenge for most, and out of convenience, many voters tend to rely on just a handful of major media outlets as the source of truth. This monopoly on information gives mass media considerable ability to project its biases onto the public opinion. The solution to this problem presents an opportunity to utilize technology for social good.
## What it does
InforME returns power to the people by leveraging Google Cloud’s Natural Language API to detect systematic biases across a large volume of articles pertinent to the 2020 Presidential Election from 8 major media sources, including ABC, CNN, Fox, Washington Post, and Associated Press. We accomplish this by scraping relevant and recent articles from a variety of online sources and using the Google Cloud NLP API to perform sentiment analysis on them. We then aggregate individual entity sentiments and statistical measures of linguistic salience in order to synthesize our data in a meaningful and convenient format for understanding and comparing the individual biases major media outlets hold towards or against each candidate.
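As a concrete illustration of the aggregation step, the sketch below weights each entity's sentiment by its salience and pools the results per candidate. The candidate list and the naive substring match are illustrative assumptions; the client calls follow the Cloud Natural Language API.

```python
from collections import defaultdict
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
CANDIDATES = {"Biden", "Sanders", "Warren"}  # illustrative subset of the field

def candidate_scores(article_text):
    """Salience-weighted sentiment per candidate for one article."""
    doc = language_v1.Document(
        content=article_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    resp = client.analyze_entity_sentiment(request={"document": doc})
    acc = defaultdict(lambda: [0.0, 0.0])          # name -> [weighted sum, weight]
    for entity in resp.entities:
        for name in CANDIDATES:
            if name in entity.name:                # naive match (assumption)
                acc[name][0] += entity.sentiment.score * entity.salience
                acc[name][1] += entity.salience
    return {c: s / w for c, (s, w) in acc.items() if w > 0}
```

Per-outlet bias then falls out of averaging these scores across each outlet's articles.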
## How we built it and Challenges we ran into
One of the many challenges we faced was learning the new technology. We dedicated ourselves to learning multiple GCP technologies throughout HackMIT, from calling GCP APIs to serverless deployment. We employed the Google NLP API to make sense of our huge dataset scraped from major news outlets, the Firebase Realtime Database to log data, and finally GCP App Engine to deploy our web apps. Coming into the hackathon with little GCP experience, we found the learning curve steep yet rewarding. This immersion in GCP technology gave us a deeper understanding of how different components of GCP work together, and of how much potential GCP has for contributing to social good.
Another challenge we faced was how to represent the data in a visually meaningful way. Though we were able to generate a lot of insightful technical data, we chose to present it in a straightforward, easy-to-understand way without losing information or precision. It's undoubtedly challenging to find the perfect balance between technicality and aesthetics, and our front-end design tackles this task of using technology for social good in an accessible way without compromising the complexity of current politics. Just as there's no simple solution to current social problems, there's no perfect way to contribute to social good. Despite this, InforME is an attempt to return power to the people, providing a more just distribution of information and a better-informed electorate: a gateway to a society where information is open and accessible.
## What's next for InforME
Despite our progress, there is room for improvement. First, we could allow users to filter results by date to better represent data in a specific time range. We could also identify pressing issues or hot topics associated with each candidate via entity sentiment analysis. Moreover, with enough data, we could build a graph of relationships between candidates to better serve our audience.
|
## Inspiration
You see a **TON** of digital billboards at NYC Time Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**.
## What it does
I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~
The billboard is equipped with a **camera**, which periodically samples the audience in front of it. It then passes the image to a series of **computer vision** algorithms (thank you, *Microsoft Cognitive Services*), which extract several characteristics of the viewer.
In this prototype, the billboard analyzes the viewer's:
* **Dominant emotion** (from facial expression)
* **Age**
* **Gender**
* **Eye-sight (detects glasses)**
* **Facial hair** (just so that it can remind you that you need a shave)
* **Number of people**
And it considers all of these factors to present targeted ads.
**As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)**
## How I built it
Here is what happens step-by-step (a condensed sketch of steps 1-3 follows the list):
1. Using **OpenCV**, billboard takes an image of the viewer (**Python** program)
2. Billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the result
3. Billboard analyzes the result and decides on which ads to serve (**Python** program)
4. Finalized ads are sent to the Billboard front-end via **Websocket**
5. Front-end contents are served from a local web server (a **Node.js** server built with the **Express.js** framework and **Pug** as the front-end template engine)
6. Repeat
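For reference, steps 1-3 condense to something like the sketch below. I've folded the Face and Emotion calls into a single Face API `detect` request for brevity (the prototype called the two services separately), and the endpoint region and key are placeholders; treat the exact request shape as an assumption based on the Face API v1.0 docs of the time.

```python
import cv2
import requests

FACE_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
KEY = "<subscription-key>"  # placeholder

def sample_audience():
    cap = cv2.VideoCapture(0)          # step 1: grab a frame of the viewers
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return []
    _, jpg = cv2.imencode(".jpg", frame)
    resp = requests.post(              # step 2: ask Microsoft for attributes
        FACE_URL,
        params={"returnFaceAttributes": "age,gender,glasses,facialHair,emotion"},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=jpg.tobytes(),
    )
    faces = resp.json()                # one entry per detected face
    if not faces:
        return []                      # nobody watching: dim the screen (go green!)
    return [f["faceAttributes"] for f in faces]  # step 3 picks ads from these
```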
## Challenges I ran into
* Time constraint: I actually had a huge project due Saturday at midnight (my fault), so I only **had about 9 hours to build** this. Also, I built this by myself, without teammates.
* Putting many pieces of technology together, and ensuring consistency and robustness.
## Accomplishments that I'm proud of
* I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates.
## What's next for Interactive Time Square
* This prototype was built with off-the-shelf computer vision service from Microsoft, which limits the number of features for me to track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the Billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~
|
losing
|
## Inspiration
Minecraft has an interesting map mechanic where your character holds a map which "draws itself" while exploring the world. I am also very interested in building a plotter, which is a printer that uses a pen and (XY) gantry to produce images. These ideas seemed to fit together quite well.
## What it does
Press a button, copy GPS coordinates, and run the custom "gcode" compiler to generate machine/motor driving code for the Arduino. Wait around 15 minutes for a 48 x 48 output.
## How we built it
Mechanical assembly - Tore apart 3 DVD drives and extracted a multitude of components, including sled motors (linear rails). Unfortunately, they used limit switches + DC motors rather than steppers, so I had to saw apart the enclosure and **glue** in my own steppers with a gear which (you guessed it) was also glued to the motor shaft.
Electronics - I designed a simple algorithm to walk through an image matrix and translate it into motor code that looks a lot like video game controls (a sketch follows the key legend below). Indeed, the stepperboi/autostepperboi main source code has utilities to manually control all three axes like a tiny claw machine :)
U - Pen Up
D - Pen Down
L - Pen Left
R - Pen Right
Y/T - Pen Forward (top)
B - Pen Backwards (bottom)
Z - zero the calibration
O - Return to previously zeroed position
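A stripped-down version of the matrix walk might look like this; the serpentine sweep and per-pixel stepping are illustrative simplifications of the real compiler, and serial output to the Arduino is omitted.

```python
def image_to_commands(pixels):
    """pixels: 2D list of 0/1, where 1 means draw. Yields single-letter motor commands."""
    pen_down = False
    for y, row in enumerate(pixels):
        forward = (y % 2 == 0)                    # serpentine: alternate sweep direction
        cols = range(len(row)) if forward else range(len(row) - 1, -1, -1)
        step = "R" if forward else "L"
        for x in cols:
            want_down = bool(row[x])
            if want_down != pen_down:             # toggle pen only on state changes
                yield "D" if want_down else "U"
                pen_down = want_down
            yield step                            # draw (or skip) one pixel width
        if pen_down:
            yield "U"                             # lift before advancing rows
            pen_down = False
        yield "B"                                 # advance one row toward the bottom

print("".join(image_to_commands([[0, 1, 1], [1, 1, 0]])))  # prints RDRRUBLDLLUB
```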
## Challenges we ran into
* I have no idea about basic mechanics / manufacturing so it's pretty slipshod, the fractional resolution I managed to extract is impressive in its own right
* Designing my own 'gcode' simplification was a little complicated, and produces strange, pointillist results. I like it though.
## Accomplishments that we're proud of
* 24 hours and a pretty small cost in parts to make a functioning plotter!
* Connected to the Mapbox API and did the image processing quite successfully, including machine code generation/interpretation
## What we learned
* You don't need to take MIE243 to do low precision work, all you need is superglue, a glue gun and a dream
* GPS modules are finicky and need to be somewhat near a window with their built-in antenna
* Vectorizing an image is quite a complex problem
* Mechanical engineering is difficult
* Steppers are *extremely* precise, and I am quite surprised at the output quality given that it's barely held together.
* Iteration for mechanical structure is possible, but difficult
* How to use rotary tool and not amputate fingers
* How to remove superglue from skin (lol)
## What's next for Cartoboy
* Compacting the design so it can fit in a smaller profile and work more like a polaroid camera, as intended. (Maybe I will learn SolidWorks one of these days)
* Improving the gcode algorithm / tapping into existing gcode standard
|
## Inspiration
I've always been fascinated by the complexities of UX design, and this project was an opportunity to explore an interesting mode of interaction. I drew inspiration from the futuristic UIs that movies have to offer, such as Minority Report's gesture-based OS or Iron Man's heads-up display, Jarvis.
## What it does
Each window in your desktop is rendered on a separate piece of paper, creating a tangible version of your everyday computer. It is a fully featured desktop, with specific shortcuts for window management.
## How I built it
The hardware is a combination of a projector and a webcam. The camera tracks the position of the sheets of paper, onto which the projector renders the corresponding windows. An OpenCV backend does the heavy lifting, calculating the appropriate translation and warping to apply (sketched below).
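The core warp reduces to a perspective transform per sheet: given the four tracked corners of a piece of paper, map the window bitmap onto it. This is a minimal sketch; corner tracking and projector-camera calibration are the hard parts and are omitted here, and the corner coordinates are illustrative.

```python
import cv2
import numpy as np

def warp_window(window_img, paper_corners, out_size):
    """Warp a window bitmap onto the tracked paper quadrilateral (projector space)."""
    h, w = window_img.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])     # window corners
    dst = np.float32(paper_corners)                        # tracked TL, TR, BR, BL
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(window_img, H, out_size)

window = np.full((300, 400, 3), 255, np.uint8)             # stand-in window bitmap
corners = [[120, 80], [520, 100], [500, 400], [100, 380]]  # illustrative positions
frame = warp_window(window, corners, (1280, 720))          # frame goes to the projector
```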
## Challenges I ran into
The projector was initially difficult to set up, since it has a fairly long focusing distance. Also, the engine that tracks the pieces of paper was incredibly unreliable under certain lighting conditions, which made it difficult to calibrate the device.
## Accomplishments that I'm proud of
I'm glad to have produced a functional product that could possibly be developed into a commercial one. Furthermore, I believe I've managed to put an innovative spin on one of the oldest concepts in the history of computers: the desktop.
## What I learned
I learned lots about computer vision, and especially on how to do on-the-fly image manipulation.
|
## What it does
It can tackle ground of any surface and any shape using its specially designed wheels!
## How we built it
I built this using an Arduino and servos.
## Challenges we ran into
The main challenges I ran into were as follows:
1. Getting the right angle on the bot with the servo
2. Powering the whole system!
## Accomplishments that we're proud of
I am proud that I made the whole project entirely during the hackathon; though not everything was completed, I still achieved at least what I wanted!
|
winning
|