## Inspiration
Our inspiration has always been to empower communities with tech. In this project, we attempted to build a product that can be used by anyone and everyone to have some fun with their old photos and experience them differently, mainly through the sense of hearing. Imagine you had an old photo of your backyard from when you were 5 but don't remember what it sounded like back then. That is the feeling we are trying to bring back by injecting some life into your photos.
## What it does
ReAlive adds realistic-sounding audio to any photo you want, trying to recreate what it would have been like at the time the photo was taken. We take an image as input and return a video: essentially the same image with an audio track overlaid. This audio is synthesized by extracting information from your image and creating a mapping onto our sound dataset.
## How we built it
Our project is a web app built using Flask, FastAPI, and basic CSS. It provides a simple input for an image and displays the video after processing it on our backend. Our backend tech stack is TensorFlow, PyTorch, and Pydub for mixing audio, with the audio data sitting in Google Cloud Storage and the deep learning models deployed on Google Cloud in containers. Our first task was to extract information from the image and then create a key-map with our sound dataset. Past that, we mixed the various audio files into one to make it sound realistic and paint a complete picture of the scene.
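As an illustration of that mixing step, here is a minimal Pydub sketch; the clip names, gain values, and duration are stand-ins, not our actual image-to-sound mapping:

```python
from pydub import AudioSegment

def mix_scene(clips, duration_ms=15000):
    """Overlay several ambience clips into one soundscape."""
    base = AudioSegment.silent(duration=duration_ms)
    for path, gain_db in clips:
        clip = AudioSegment.from_file(path)[:duration_ms]
        # Lower gain stands in for sounds mapped to distant objects.
        base = base.overlay(clip.apply_gain(gain_db))
    return base

mixed = mix_scene([("birds.wav", -3), ("wind.wav", -8), ("traffic.wav", -12)])
mixed.export("scene_audio.wav", format="wav")
```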
## Challenges we ran into
First, figuring out depth and distance estimation from single images using a CNN and OpenCV was a challenging task. Next, applying this to sound-intensity mapping ran us into a few challenges. Finally, deployment and API latency were factors we had to deal with and optimize.
## Accomplishments that we're proud of
Building and finishing the project in 36 hours!!!
## What we learned
We learned that building a proof of concept in two days is a really uphill task. A lot of things went our way and some didn't, but we made it in the end. Our key learnings: creating a refreshing experience for a user takes a lot of research, and having insufficient data is a real handicap.
## What's next for ReAlive
Image animation and fluidity with augmented sound and augmented reality.
## Inspiration
What if I want to take an audio tour of a national park or a University campus on my own time? What if I want to take an audio tour of a place that doesn't even offer audio tours?
With Toor, we are able to harness people's passions for the places they love to serve the curiosity of our users.
## What it does
We enable users to submit their own audio tours of the places they love, and we allow them to listen to other user submissions as well. Users can also elect to receive a text alert if a new audio tour has been updated for a specific location.
## How we built it
We built the front-end using React, and the back-end with multiple REST API endpoints using Flask. Flask uses SQLAlchemy, an ORM, to write records to and query data from the SQLite3 database. The audio files are stored in Google Firebase, and the front end is hosted on Firebase as well.
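A minimal sketch of that Flask/SQLAlchemy pattern (the model fields and routes are illustrative, not our exact schema):

```python
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///tours.db"
db = SQLAlchemy(app)

class Tour(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    location = db.Column(db.String(120), nullable=False)
    audio_url = db.Column(db.String(500), nullable=False)  # Firebase file URL

@app.route("/tours", methods=["POST"])
def create_tour():
    data = request.get_json()
    tour = Tour(location=data["location"], audio_url=data["audio_url"])
    db.session.add(tour)
    db.session.commit()
    return jsonify({"id": tour.id}), 201

@app.route("/tours/<location>")
def list_tours(location):
    tours = Tour.query.filter_by(location=location).all()
    return jsonify([{"id": t.id, "audio_url": t.audio_url} for t in tours])
```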
## Challenges we ran into
Enabling users to listen to audio without having to repeatedly download the files was our first major obstacle. With some research we found that either an AWS S3 bucket or a Google Firebase database would solve our problems. After issues with permission with the AWS S3 bucket, we decided that Google Firebase would be a more apt solution to our issue.
## Accomplishments that we're proud of
Enabling audio streaming was a big win for us. We are also proud of our team synergy and how quickly we got things done, and of the fact that we applied a lot of what we learned from our internships this summer.
## What we learned
* Audio streaming, audio file upload
* Building an audio upload flow and player in React
* Thinking in terms of a minimum viable product
* Flask
* Soft skills such as interpersonal communication with fellow hackers
## What's next for Toor
Adding the ability to comment on an audio tour, expanding the scope beyond college campuses, and using Google Cloud Platform's Speech-to-Text and NLP to filter out "bad" comments and words in audio files.
## Inspiration
We wanted to have some fun and find out what we could get out of Microsoft's Computer Vision API.
## What it does
This is a web application that allows the user to upload an image, generates a poem from it, and reads the poem out loud in a choice of voices.
## How we built it
We used the Python interface of Microsoft's Cognitive Services API and built a web application with Django. We used a public open-source tone generator to read the poem to users in different voices.
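A hedged sketch of pulling an image description from the Computer Vision API with the current Python SDK (endpoint, key, and image URL are placeholders, and the poem-generation step is omitted):

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<subscription-key>"),
)

# Ask the service to describe the image; each caption is a short sentence
# such as "a dog sitting on a beach" that can seed a poem line.
analysis = client.describe_image("https://example.com/photo.jpg", max_candidates=3)
for caption in analysis.captions:
    print(f"{caption.text} (confidence {caption.confidence:.2f})")
```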
## Challenges we ran into
We learned Django from scratch. It's not very easy to use, but we eventually made all the components connect together using Python.
## Accomplishments that we're proud of
It’s fun!
## What we learned
It's difficult to combine different components together.
## What's next for PIIC - Poetic and Intelligent Image Caption
We plan to rebuild this as an independent project with technology other than Cognitive Services and publish it to the world.
## Inspiration
Dealing with invasive species costs $30 billion in Canada and $120 billion in the United States. Globally, the estimated cost is $1.4 trillion (roughly 5% of the global economy). Very few of us think about this on a daily basis, but one demographic spends a great amount of time outside discovering the world: children! Why not have them take images and track invasive species under the guise of a game?
## What It Does
The app provides a fun and interactive game intended to incentivize children to go outside and explore nature, all the while collecting data. It allows users to take a photo of a wide range of species (plants, animals, fish, etc.), then uses a tailor-made ML model to classify it. The user receives points for a successful classification (bonus points if the species is invasive) and can see all of their past "captures" on an interactive map. Lastly, there is a web page that contains a map of all the invasive species captured by users.
## How We Built It
The front-end was developed in Swift, and communicates with the back-end via HTTP requests and the Google Firebase SDK. The back-end is a Flask server that also contains a custom fastai model, trained on over 100 invasive species found in Ontario. All user data is stored on Google Firebase. The web map was done with the Mapbox API.
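A minimal sketch of the classification endpoint, assuming a fastai v2 learner exported to a .pkl file (names are illustrative):

```python
from fastai.vision.all import PILImage, load_learner
from flask import Flask, jsonify, request

app = Flask(__name__)
learn = load_learner("species.pkl")  # trained on ~100 Ontario invasive species

@app.route("/classify", methods=["POST"])
def classify():
    # Read the uploaded photo bytes and run a single prediction.
    img = PILImage.create(request.files["photo"].read())
    species, _, probs = learn.predict(img)
    return jsonify({"species": str(species), "confidence": float(probs.max())})
```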
## Challenges We Ran Into
We had many, many issues trying to get the Flask server running on other platforms. We originally planned to host the back-end on Google Cloud's App Engine; this worked fine until we tried deploying the model (we tried different workarounds for 5 hours). We then tried DigitalOcean, but multiple email addresses we used had their accounts automatically locked for some strange reason. We finally got a DigitalOcean droplet running, but once again were able to deploy everything but the model (we figured it was some problem with the fastai library). Next was Heroku; setup went smoothly, but we were unable to deploy yet again. We finally settled on running the back-end locally on one of our computers, since our time could be used more productively elsewhere.
## Accomplishments That We're Proud Of
We were really proud of working from scratch: our classification model was trained from scratch, the front-end was done in Swift with few libraries and packages, and the back-end was developed with bare-bones Flask and Google Firebase imports. We were also proud of our app idea, which we think could have a real-world impact on the environment and possibly even education (provided we add more features to the app).
## What We learned
On the technical side, this was the first time our mobile developer had integrated a map or camera functionality into an iOS app. This was also the first time for us to try to host something on Google App Engine or Digital Ocean; while it didn't work in the end, we learned a bit about each platform and Docker. We also learned about invasive species, and the power of teamwork!
## What's next for Vitae
Our first change would be to host it as a proper instance on a third-party platform so it could actually be used; this would allow users to begin populating the database. The end goal is to gather enough geotagged data to give researchers a general sense of where invasive species are spreading, and at what rate. We'd also like to train the model on even more species and images.
## Inspiration
We were driven to create an educational spell-checking tool. After brainstorming and the early stages of development, we found a lot of inspiration in the functionality and visual appeal of Grammarly. This culminated in making our own version of the popular spell-checker that tries to improve on it in some areas.
## What it does
Grummerly takes a string of text as input from the user and identifies English words that are misspelled and are not common proper nouns; it recognizes over 370,000 words. It then generates a set of possible intended spellings for each misspelling. To avoid making ineffectual suggestions, users receive recommendations drawn from the 60,000 most commonly used words when there is a sufficient number of them; otherwise they receive suggestions from the complete set of 370,000+ words. In addition, if a word occurs too frequently in a sentence, the user is shown suggested synonyms for it.
## How we built it
The interface was made using HTML, which sends requests to the back-end via Flask. In the back-end, the input string is parsed, analyzed, and acted upon as described above using a Python script. Flask then returns the result to the user via the front-end.
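Setting our exact implementation aside, the suggestion step can be sketched with standard one-edit candidate generation (the word lists are assumed to be loaded elsewhere):

```python
import string

def one_edit_candidates(word):
    """All strings one insert/delete/replace/transpose away from `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word, common_words, all_words, minimum=3):
    candidates = one_edit_candidates(word.lower())
    common = sorted(candidates & common_words)
    # Fall back to the full 370,000+ word list if too few common hits.
    return common if len(common) >= minimum else sorted(candidates & all_words)
```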
## Challenges we ran into
One of the challenges we ran into was getting a Flask app set up so we could integrate the back and front ends. This was the first time any of us were working with Flask. Through diligent research and learning we managed to figure out enough, and develop enough intuition to get everything set up.
## Accomplishments that we're proud of
We're proud that we were able to turn an idea into a functional prototype with just 26 hours of development. This was the first time most of us had operated under such strict time constraints! We are also proud of how much we learned: specifically, that we had more of the skills required to put a hackathon project together than we originally thought, and that what we didn't know we were able to learn quite quickly.
## What's next for Grummerly
Making it work with more grammar issues and highlighting specific grammar issues to the user in-text. We'd also like to improve the user interface. We are hoping to make an extension that integrates into the workplace of the user.
## Inspiration 🌱
Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness.
## What it does 🌎
Inspired by BeReal, the most popular app in 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you'll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍.
## How we built it 🧑💻
We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo.
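A hedged sketch of the scoring step using the current OpenAI Python client (the model name and prompt are illustrative, not our exact hackathon code):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def green_points(caption: str) -> int:
    """Ask the model to rate a captioned act of sustainability from 1 to 10."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Rate the environmental impact of this act from 1 to 10, "
                       f"answering with only the number: {caption}",
        }],
    )
    return int(response.choices[0].message.content.strip())

print(green_points("a person riding a bicycle to work"))
```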
## Challenges we ran into 🥊
One of our biggest obstacles was settling on an idea as there were so many great challenges for us to be inspired from.
## Accomplishments that we're proud of 🏆
We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team and we’re all leaving uOttahack with new capabilities.
## What we learned 💚
Everyone worked with technologies they had never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI and Firebase, and with routers in React.
## What's next for BeGreen ✨
It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We’d also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
## Inspiration
We've always wanted to use Microsoft Surface Books, but we've been stuck with MacBooks at work. That's why we decided to make an affordable hardware hack that brings amazing touchscreen capabilities to the MacBook and integrates with your existing workflows.
## What it does
Turn any laptop into a touchscreen device (with only $5 worth of hardware).
## How we built it
Using object detection algorithms and homography, we are able to detect when a user taps the screen. We created the hardware component from household items that total less than $5.
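A minimal sketch of the homography step, mapping a detected fingertip from camera coordinates to screen coordinates (the corner correspondences would come from a calibration pass; the values here are illustrative):

```python
import cv2
import numpy as np

# Screen corners as seen by the camera (from calibration) -> real pixels.
camera_corners = np.float32([[102, 54], [598, 61], [612, 410], [95, 402]])
screen_corners = np.float32([[0, 0], [1440, 0], [1440, 900], [0, 900]])
H, _ = cv2.findHomography(camera_corners, screen_corners)

def to_screen(fingertip_xy):
    """Project a camera-space fingertip position onto the screen."""
    point = np.float32([[fingertip_xy]])      # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(point, H)
    return tuple(mapped[0, 0])

print(to_screen((350, 240)))
```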
## Challenges we ran into
We were having difficulties with calibrating our touch detection algorithms. But after tweaking hyperparameters and using Microsoft Azure for compute we were able to create a seamless experience.
## Accomplishments that we're proud of
Enabling touchscreen on a MacBook with less than $5 worth of hardware was an amazing accomplishment for us.
Figuring out how to detect finger clicks with just a mirror and OpenCV was a really interesting problem to solve.
## What we learned
We learned how to use Microsoft Azure Virtual Machines for additional compute. Really saved our hack :)
## What's next for MacinTouch
A sleeker hardware design that can be mass produced and is easily portable.
## Inspiration
As young adults who have grown up in a diverse environment, we have noticed plenty of people who have disabilities such as deafness or speech impairments. Many people close to us live with these disabilities, and we find it difficult to communicate with them because they do not understand verbal language and we do not understand sign language. So we decided to build a way for us to understand them: a sign language translator!
## What it does
Our app recognizes which letter, A-Z, is being shown in sign language. It runs live, and all you need is a camera or webcam of any sort. Run the program, try some hand signs, and the computer will translate them!
## How we built it
We built it using only Python and with the main use of OpenCV and TensorFlow. We started off with the help of a library from cvzone which had functions that could easily detect hand movement. We also used NumPy and time to help build this project. We took 300 pictures of each hand sign to train with and then put them into Teachable Machine from Google to train a model using TensorFlow and Keras and inputted it into our program.
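A minimal sketch of the detection loop with cvzone and the Teachable Machine export (file names and crop margins are illustrative):

```python
import cv2
from cvzone.ClassificationModule import Classifier
from cvzone.HandTrackingModule import HandDetector

cap = cv2.VideoCapture(0)
detector = HandDetector(maxHands=1)
classifier = Classifier("keras_model.h5", "labels.txt")  # Teachable Machine export
letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hands, frame = detector.findHands(frame)
    if hands:
        # Crop around the detected hand and classify the sign.
        x, y, w, h = hands[0]["bbox"]
        crop = frame[max(0, y - 20):y + h + 20, max(0, x - 20):x + w + 20]
        _, index = classifier.getPrediction(crop, draw=False)
        cv2.putText(frame, letters[index], (x, y - 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 0, 255), 3)
    cv2.imshow("CogniSign", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```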
## Challenges we ran into
We ran into a lot of issues initially which consisted of many compatibility issues between the libraries, languages, and APIs being used. We made the mistake of using more challenging platforms for object detection and machine learning which had us working around huge errors in the code. Eventually after a lot of research, we were able to find resources such as CVzone and Teachable Machine that helped to simplify the process and minimize areas for error. We also struggled with getting enough pictures to train our models for more accuracy.
## Accomplishments that we're proud of
We are proud of the multitude of signs our program can identify: the full A-Z, a total of 26 different signs. This part took us the longest and was incredibly time-consuming, so we are proud of it.
## What we learned
We learned how to use OpenCV as most of us were inexperienced with the library and the same with TensorFlow. We were able to learn to use object detection to detect objects such as hands and identify different hand signs effectively.
## What's next for CogniSign
The next step for CogniSign is to focus on making the program more accurate and creating more precise models. This can be done by increasing the number of models/images used to train per letter while also providing variety in the models such as showing multiple angles. After that, CogniSign aims to include common phrases used in ASL language as we are aware the ASL community doesn't communicate exclusively with signed letters. This would require much more data and machine learning.
## Inspiration
Ever wish you didn't need to purchase a stylus to handwrite your digital notes? Everyone, at some point, hasn't had a free hand to touch their keyboard. Whether you are a student learning to type or a parent juggling many tasks, sometimes a keyboard and stylus are not accessible. We believe the future of technology won't require touching anything at all in order to take notes. HoverTouch utilizes touchless drawings and converts your (finger)written notes to typed text! We also have a text-to-speech function built on Google's services.
## What it does
Using your index finger as a touchless stylus, you can write new words and undo previous strokes, similar to features on popular note-taking apps like Goodnotes and OneNote. As a result, users can eat a slice of pizza or hold another device in hand while achieving their goal. HoverTouch tackles efficiency, convenience, and retention all in one.
## How we built it
Our pre-trained model from MediaPipe works in tandem with an Arduino Nano, flex sensors, and resistors to track your index finger's drawings. Once complete, you tap your pinky to your thumb and HoverTouch captures a screenshot of your notes as a JPG. Afterward, the JPG undergoes a masking process where it is converted to a black-and-white picture: the blue ink from the user's pen strokes becomes black, and all other components of the screenshot, such as the background, become white. Using the Google Cloud Vision API, a custom ML model, and Vertex AI Vision, HoverTouch then reads the handwriting and converts it to text displayed in our web browser application.
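A simplified sketch of the MediaPipe fingertip tracking and dot-based drawing (the screenshot gesture, flex-sensor input, and Vision step are omitted):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)
dots = []  # positions traced by the fingertip (our dot-based strokes)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        h, w, _ = frame.shape
        tip = lm[8]  # landmark 8 is the index fingertip
        dots.append((int(tip.x * w), int(tip.y * h)))
    for x, y in dots:
        cv2.circle(frame, (x, y), 4, (255, 0, 0), -1)  # blue "ink"
    cv2.imshow("HoverTouch", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
```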
## Challenges we ran into
Given that this was our first hackathon, we had to make many decisions regarding feasibility of our ideas and researching ways to implement them. In addition, this entire event has been an ongoing learning process where we have felt so many emotions — confusion, frustration, and excitement. This truly tested our grit but we persevered by uplifting one another’s spirits, recognizing our strengths, and helping each other out wherever we could.
One challenge we faced was importing the Google Cloud Vision API. For example, we learned that we were misusing the terminal and our disorganized downloads made it difficult to integrate the software with our backend components. Secondly, while developing the hand tracking system, we struggled with producing functional Python lists. We wanted to make line strokes when the index finger traced thin air, but we eventually transitioned to using dots instead to achieve the same outcome.
## Accomplishments that we're proud of
Ultimately, we are proud to have a working prototype that combines high-level knowledge with a solution of real-world significance. Imagine how many students, parents, and friends, in settings like your home, classroom, and workplace, could benefit from HoverTouch's hands-free writing technology.
This was the first hackathon for ¾ of our team, so we are thrilled to have undergone a time-bounded competition and all the stages of software development (ideation, designing, prototyping, etc.) toward a final product. We worked with much cutting-edge software and hardware despite having zero experience before the hackathon.
In terms of technicals, we were able to develop varying thickness of the pen strokes based on the pressure of the index finger. This means you could write in a calligraphy style and it would be translated from image to text in the same manner.
## What we learned
This past weekend we learned that our **collaborative** efforts led to the best outcomes, as our teamwork motivated us to persevere even in the face of adversity. Our continued **curiosity** led to novel ideas and encouraged new ways of thinking given our vastly different skill sets.
## What's next for HoverTouch
In the short term, we would like to develop shape recognition. This is similar to Goodnotes feature where a hand-drawn square or circle automatically corrects to perfection.
In the long term, we want to integrate our software into web-conferencing applications like Zoom. We initially tried to do this using WebRTC, something we were unfamiliar with, but the Zoom SDK had many complexities that were beyond our scope of knowledge and exceeded the amount of time we could spend on this stage.
### [HoverTouch Website](https://hoverpoggers.tech)
**Elevate your foreign language fluency with tailored guidance and individualized verbal conversation practice on Ekko.**
## Inspiration
Multilingualism is in. In today’s interconnected world, the ability to communicate in multiple languages is not only valuable, but imperative to personal and professional success. Whether it be conversing with business professionals during international commerce exchanges, preparing for standardized school language exams, serving as a journalist or diplomat on the international stage, or simply wanting to converse more fluently with your loved ones, everybody can benefit from conversing in a foreign language. However, traditional curriculum-based language learning methods rely on repetitive exercises and lack personalization, overlooking verbal fluency markers such as verbal precision, language variation, and personal goals. A key indication of an advanced speaker is the ability to engage in debates and convey subtle shades of meaning effectively, a skill that cannot be developed solely through the use of apps centered around memorization.
As a 2nd-generation Mandarin and Cantonese speaker, one of our team members realized firsthand how difficult it was to maintain fluency in foreign languages at university. Additionally, while visiting her grandmother in the ICU at a reputable San Francisco hospital, another team member noticed that it was frustrating, and potentially life-threatening, for non-English-speaking patients to communicate their needs to care providers, since those providers received only rudimentary academic foreign-language training rather than conversational training. Although these bilingual care providers were technically licensed to care for non-English-speaking patients, most of those who learned the foreign language as a second language were unable to demonstrate spoken proficiency and cultural awareness outside a classroom context. These experiences led us to develop Ekko.
Whether you’re a busy professional looking to enhance your global marketability or a student aiming to broaden your cultural horizons during study abroad, Ekko enables you to access verbal language practice anytime, anywhere, offering you the personalization and flexibility to learn at your own pace and develop as a global citizen.
## What it does
Introducing **Ekko**: a personalized real time AI vocal chatbot that assesses your vocal language fluency.
With Ekko, you only talk about what you actually want to talk about. Once you enter your basic onboarding information, such as your learning goals and interests, the app takes you to a simple user interface where you can start your conversation. After each response, Ekko gives personalized feedback on your conversational performance, catching your errors and providing an ACTFL-based proficiency level. Conversations and feedback are personalized to your learning goals; for example, if you are using Ekko to prepare for career-oriented purposes, Ekko generates prompts you'd likely encounter in the workplace, and the feedback centers on making your diction more formal. Similarly, if you are simply using Ekko to converse with friends and family, conversation topics and corrections are more casual.
Ekko saves your speaking errors and uses those language-specific content errors to tailor feedback to your language learning goals. For example, if you were to say *"me llamo es Cole"* as opposed to the correct version, *"me llamo Cole"*, Ekko would save that error to check in the future. Using this unique feature, Ekko also makes connections between the language you are learning and the languages you currently speak (inputted during the onboarding process), drawing parallels between the two.
Similarly, if you were to consecutively respond with singular word responses, Ekko would suggest that you vary your sentence structure to maximize the effectiveness of the conversation.
Unlike pre-existing language learning applications such as Duolingo, Ekko is *not* based on a curriculum, meaning that you take full reign of the conversation and practice.
## How we built it
To make Ekko as capable as possible, we used a combination of many AI and machine learning technologies—most of which we had never used before.
Because Ekko’s main value proposition is its conversational aspect, it was important that conversing with the platform is as natural as possible. This included using a state-of-the-art text-to-speech model, powered by ElevenLabs, as well as speech-to-text, powered by Deepgram. The combination of these two technologies made natural conversation on Ekko a seamless experience.
Processing speed was also of utmost importance to us to make the conversations feel natural. Hence, the obvious choice for us was to power our backend using Bun. Specifically, we’re running an Elysia.js server to interface with our ML and large language models for incredibly fast performance. This strategic choice contributed to Ekko's impressive performance and responsiveness during interactions.
Regarding large language models, Ekko chose to go full open-source thanks to Together.AI. We’re using the "NousResearch/Nous-Hermes-2-Yi-34B" model to generate responses from the AI agent, as well as "togethercomputer/m2-bert-80M-32k-retrieval" for text embeddings. These models were blazingly fast and out-performed the multitude of other models we tested for these purposes.
To store the data we collected, we chose to use the Convex.dev platform. We’re leveraging their database and authentication services, as well as function calling and vector database. Using Convex enabled us to build a complex platform with many simultaneous and interconnected processes in such a limited time span.
In order to classify the user's proficiency, we built a text classification model using scikit-learn. To train this model, we generated a synthetic dataset of hypothetical conversations corresponding to specific ACTFL proficiency guidelines, using Together.AI's "NousResearch/Nous-Hermes-2-Yi-34B" model. This classifier, hosted on the GCP Vertex AI platform, enables us to specifically denote the user's progress as they approach fluency.
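A minimal sketch of such a classifier in scikit-learn (the pipeline shape and the tiny inline dataset are illustrative, not our exact configuration):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for the synthetic (response, ACTFL level) pairs from the LLM.
texts = [
    "Hola. Sí. Bien.",
    "No sé. Adiós.",
    "Ayer fui al mercado y compré fruta fresca para mi familia.",
    "Me gusta viajar porque conozco nuevas culturas y personas.",
    "Aunque entiendo su postura, sostengo que los beneficios superan los riesgos.",
    "De haberlo sabido antes, habríamos planteado la propuesta de otra manera.",
]
labels = ["Novice Low", "Novice Low", "Intermediate Mid",
          "Intermediate Mid", "Advanced High", "Advanced High"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["Me gustaría discutir las ventajas de ese enfoque."]))
```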
Altogether, Ekko's development is characterized by a comprehensive integration of state-of-the-art technologies. The emphasis on natural conversation, swift processing, open-source language models, efficient data handling through Convex.dev, and a proficiency-classifying text model collectively contribute to Ekko's prowess as an advanced conversational language learning platform powered by frontier tech.
## Business model
In regards to our business model, we initially looked into adopting a freemium model, but ultimately steered away from that inclination because we did not want to exacerbate accessibility issues in the edtech space. For now, we intend for all Ekko features to be free of charge, eventually relying on community partnerships and sponsorships with relatively small organizations, such as outpatient clinics, to cover costs. In the future, during the reiteration phase, we also plan on hosting a donation platform to raise money for our development team and to purchase technology for underprivileged schools so that students worldwide can use Ekko. We also want to look into partnerships with larger organizations that would benefit from improved language fluency services, such as hotels and universities.
## Challenges we ran into
One of the major challenges we encountered was finding an adequate fluency metric to score user responses. While percentages and other numerical metrics seemed like an obvious choice, this would also mean that the longer a user maintained a conversation (typically a good thing when practicing foreign languages), the higher the percent error they'd receive, thus deterring users from talking for longer periods of time. We eventually settled on qualitative feedback based on the well-established ACTFL speaking proficiency rankings, which contain specific comprehension and fluency requirements at each conversation difficulty level. The score is based on the average ACTFL level of the five most recent responses.
Our team also raised larger scale questions pertaining to stuttering and speech impediments, as many fluent speakers often naturally stutter while talking and the STT model could interpret that as lack of fluency. Moreover, usage of slang is also something that we need to look into a bit further, as the current system struggles to interpret colloquial vernacular.
## What we learned
Through developing Ekko, we learned that different languages present different challenges with TTS that we must reconsider when building past our MVP.
We also learned the major advantages that active conversation has over repetitive exercises when practicing a foreign language: active conversation gives learners the opportunity for contextual learning in practical, authentic situations with immediate correction, while repetitive exercises focus solely on reinforcing specific patterns and foundational drills and lack the spontaneity of real-life language.
## What's next for Ekko
During our next iteration of Ekko, we hope to also launch a real-time typing version of our personalized chatbot that would simulate your ideal interlocutor in both content and formality. We are also looking to implement a feature that encourages users to utilize figurative language in their speech. For example, if the user were using Ekko to improve their English proficiency and they told our chatbot "*It’s raining very hard outside,*” Ekko would highlight that sentence and perhaps suggest: “*It’s raining cats and dogs*” or the more casual “*It’s pouring.*" Another feature we are looking to implement post-MVP is a time suggestion feature, as it is a useful skill to show a comprehensive understanding of the other person’s input while also keeping responses pertinent and cutting off unnecessary fluff. This feature would especially come in handy to those preparing for professional interviews.
Now more on Ekko’s social component. Our team would like to develop a social component to Ekko integrating a gamified social element that allows students to build profiles, connect with other students, and compare streaks with friends. We would also consider gamifying the conversations with fun interaction challenge modes simulating Heads Up or Hot Seat. This would not only incentivize users to practice their verbal communication even more, but would also be especially helpful to those using Ekko to prepare for less formal conversational settings.
In regards to getting the word about Ekko out there, our team would launch a guerilla marketing campaign, pushing out content on all social media platforms and attending in-person hackathons and conventions to get initial feedback during beta testing.
In addition to partnerships with small clinics and healthcare providers, we would also gradually partner with more outside organizations such as university residential housing and career services, refugee councils helping young asylum seekers, and larger international corporations to implement Ekko into their daily regimens.
Lastly, we would also like to launch a donation platform to support the developing team and generate donations for tech for underprivileged schools so they can continue using Ekko.
## Ethical Discussion
First and foremost, the responsible use of large language models (LLMs) has engendered immense ethical debate, as they can inadvertently perpetuate representation biases from the data used to train them, thereby amplifying existing linguistic and cultural prejudice. Thus, it is imperative that developers—especially developers of language learning applications—stress the importance of cultural sensitivity, empathy, and respect for linguistic diversity in language learning communities. LLMs also pose security risks that must be mitigated through robust cybersecurity measures, and as with any web application, concerns regarding data privacy and security raise concerns about the safeguarding of personal user information within educational platforms. However, we see Ekko being a safer alternative to similar platforms such as TalkAbroad, as users are chatting verbally with our chatbot rather than on live video call with an individual they are unfamiliar with.
Additionally, another key factor in promoting inclusivity in educational technology is ensuring accessibility to technologies and devices. Since we intend on Ekko being used in underserved school communities where access to devices is not always guaranteed, in the future, we want to collect donations and sponsors to purchase devices for underprivileged schools so they can continue using Ekko.
Furthermore, current speech-to-text platforms overlook individuals impacted by speech impediments, creating potential barriers to participation. Through rounds of reiteration and beta testing, we hope to eventually develop a version of Ekko that accounts for individuals with speech and learning disabilities.
Additionally, the development of Ekko could financially impact those who rely on virtual conversation exchange provider services such as TalkAbroad for supplementary income.
Lastly, monetization strategies and pricing models such as the popular freemium model can exacerbate educational inequities in access. Though our team has collectively decided to offer all our services free of charge, that does leave open the question of how we will monetize. Although the potential positive impact of Ekko is vast, it is still crucial that we are diligent in navigating these complexities. It is imperative for us to address these issues conscientiously, ensuring that educational language fluency technologies remain accessible, equitable, and respectful of diverse linguistic and cultural backgrounds.
## Citations
<https://arxiv.org/pdf/2307.06435.pdf>
## Inspiration
Medical students don't get enough practice with patients to prepare them for modern medical practice. I know this from my own experience as a medical student, when I struggled to talk to enough patients with a wide enough variety of medical problems. Because of this, I even failed my final practical exam and was forced to pay ~$1,000 out of pocket for a finals re-sit preparation course, where actors were hired to pretend to be patients.
Fast forward a few years, and I am now on the faculty of a medical school and see this problem still persisting. Moreover, through my volunteering work, I have seen that in low-resource settings, although a lot of patients may be present, there are very limited opportunities for medical trainees to get supervision and feedback on their diagnostic approach.
As a team, we wanted to see if we could use AI and tech to solve this problem.
## What it does
Humaine is an online conversational AI platform that provides virtual patients for the deliberate practice of medical students. Students log in online or via a smartphone app to access virtual patients with varying medical conditions, and practice with these patients at will.
This offers students deliberate, individual practice that is completely safe, as the patients are virtual. It offers access to rare and difficult cases, while providing instant feedback, with defined and measurable outcomes. All of this leads to much improved learning and a greater diagnostic cognitive skill-set.
## How we built it
We built a prototype conversational agent using Google Dialogflow and IBM Watson. We used Flask for the back-end and React Native for the front end. We incorporated sentiment analysis into the framework to give feedback on the student's patient manner. We also used Google Cloud Platform for the Speech-to-Text and Text-to-Speech APIs.
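We trained our own sentiment model, but the patient-manner check can be illustrated with an equivalent Google Cloud Natural Language call (the threshold and messages are illustrative):

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def manner_feedback(transcript: str) -> str:
    """Give quick feedback on the tone of a transcribed student utterance."""
    document = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    if sentiment.score < -0.25:
        return "Your tone came across as negative; try more empathetic phrasing."
    return "Good, empathetic tone with the patient."

print(manner_feedback("I understand that must be painful. Can you show me where it hurts?"))
```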
## Challenges we ran into
Our first prototype on Dialogflow was restricted by a serial decision tree, which did not allow us to have a free-flowing conversation. One of our team members was more familiar with IBM Watson and was confident that we could overcome this problem on that platform, which is why we switched.
Due to time constraints, we limited the scope of the demonstration to questions relating to pain, but given more time we could incorporate a full medical history.
## Accomplishments that we're proud of
Our team has a very diverse background and set of skills. One of our team members is a high school student who learnt about API integration while doing this project, and managed to execute it well. One of our members had no technical skills but was able to work efficiently with the whole team. One of our team members had little knowledge of Python and APIs, but through the execution of the project, he became proficient at these. And finally, one of our team members traveled all the way from India!
## What we learned
We were able to get past our initial technical challenges and make a dynamic conversational agent. At the beginning, we had deep misgivings about whether this project was even technically achievable. However, we surprised even ourselves with our execution!
We learnt much about APIs, algorithmic thinking, systems thinking, and NLP logic and design. To expand on the NLP, we were also able to train and implement a model to perform sentiment analysis on audio files. This model is part of a set of NLP tools that we created in order to improve the content and tone of the doctor's assessment.
We learnt about hosting a website - even though we failed to execute this at the last minute :(
## What's next for Humaine
We are convinced that this idea has potential to revolutionize medical education, and to offer high-quality feedback to medical trainees in low-resource settings. We feel that this idea even has potential as a commercially successful startup.
## Inspiration
How can we make learning English accessible, engaging, and effective for everyone? We saw the struggles non-native speakers face, from pronunciation challenges to mastering grammar. This inspired us to harness the potential of artificial intelligence to create a personalized learning experience that adapts to each user's unique needs.
We are driven by the vision of breaking down language barriers and opening up new opportunities for people around the world. Our goal is to empower learners with the confidence and skills to communicate fluently in English, unlocking their full potential in personal, academic, and professional spheres.
## What it does
Lingo AI is a web application that helps users experience real-life conversations. By conversing with our AI, users receive instant feedback on their speech, improving their pronunciation, grammar, and fluency.
## How we built it
* Next.js: Server-rendered React framework
* React.js: UI library
* Flask: Python web framework
* Tailwind CSS: Utility-first CSS framework
* Hume API: Emotion analysis AI
* OpenAI API: Natural language processing
* Firebase: Backend services
* Uiverse: UI components library
* npm: JavaScript package manager
* Anaconda: Python distribution
* Google Auth: User authentication
## Challenges we ran into
* Learning documentation: Improved technical skills but caused initial frustration and delays.
* Setting up environment: Time-consuming and affected productivity.
* Using ChatGPT's API: Required extra effort to format data correctly, slowing progress.
* Lack of sleep: Reduced focus and efficiency.
* First-time team collaboration: Required adjustments in communication and workflow.
* Driving 7 hours before hackathon: Led to fatigue and a challenging start.
## Accomplishments that we're proud of
* Learning technologies on the fly: Demonstrated adaptability and quick learning.
* Effective task organization and communication: Improved teamwork and project management.
* Determination despite exhaustion: Showed commitment and resilience to complete the project.
## What we learned
* Time management: Essential for meeting deadlines and maintaining quality.
* Organization: Crucial for high-quality, efficient project completion.
* Team collaboration: Navigating code conflicts and working together effectively for the first time.
* Adaptability: Importance of quickly learning and integrating new technologies.
## What's next for Lingo AI
* Enhanced Features: Integrate more advanced AI capabilities to provide even more accurate feedback on pronunciation, grammar, and fluency.
* Expanded Language Support: Add support for additional languages to help a wider range of users improve their English.
* User Community: Create a community platform for users to practice with each other, share tips, and motivate one another.
* Personalized Learning Paths: Implement AI-driven personalized learning paths tailored to each user’s specific needs and progress.
* Feedback and Iteration: Collect user feedback and continuously iterate on the product to enhance user experience and effectiveness.
* Partnerships: Explore partnerships with educational institutions and language learning organizations to expand reach and impact.
## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance, or a nudge to remember that support is available. Our app assists engineers by monitoring their stress levels and employs machine learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we use a machine learning model that examines the developer's sleep patterns, sourced from the Terra API. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
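A hedged sketch of the estimation flow (the feature names, model file, and prompt are illustrative, and the call uses the current OpenAI client rather than our hackathon-era code):

```python
import joblib
from openai import OpenAI

stress_model = joblib.load("stress_model.pkl")  # scikit-learn model trained offline
client = OpenAI()

def estimate_hours(issue_title, issue_body, sleep_features):
    """Fold a sleep-derived stress score into an LLM time estimate."""
    stress = float(stress_model.predict([sleep_features])[0])  # 0 calm .. 1 stressed
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"GitHub issue: {issue_title}\n{issue_body}\n"
                f"The assigned engineer's stress level is {stress:.2f} on a 0-1 scale. "
                "Estimate completion time in hours; answer with a single number."
            ),
        }],
    )
    return float(response.choices[0].message.content.strip())
```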
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we wrote close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end scripted mostly in JavaScript, HTML, and CSS, a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team was unfamiliar to one another before the hackathon. Yet, our decision to trust each other paid off as everyone contributed valiantly. We honed our skills in task delegation among the four engineers and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap.
## Inspiration
Looking around in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies: what is it? What are all those ingredients that seem more like chemicals belonging in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. In summary, we wanted to give all the lazy couch potatoes out there (us included) an easy, efficient, and, most importantly, comprehensible way of knowing what exactly we're consuming by the metric ton on a daily basis.
## What it does
Our code takes input in the form of either text or an image and sends it to an API with specific prompts, from which we extract our final output. Our outputs include the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, (its recipe), and how healthy it is in comparison to other foods.
## How we built it
We built it using Flask, HTML, and CSS, with Python for the backend.
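The writeup leaves the API unnamed, so here is a hedged sketch assuming an OpenAI-style chat endpoint; the prompt simply mirrors the outputs listed above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def analyze_meal(description: str) -> str:
    """Ask the model for the nutrition breakdown NutriScan displays."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"For this meal: {description}\n"
                "Give the nutritional values, a plain-language nutrition summary, "
                "the exercise needed to burn off the calories, and how healthy it "
                "is compared to other foods. Keep it readable for all ages."
            ),
        }],
    )
    return response.choices[0].message.content

print(analyze_meal("a cheeseburger with fries and a can of cola"))
```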
## Challenges we ran into
We are all first-timers, so none of us had any idea how the whole thing worked. Individually, we all faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication.
## Accomplishments that we're proud of
Making it through the week and keeping our love of tech intact. Beyond that, we met some amazing people and got to know so many cool folks. As a collective, we are proud of our teamwork and our ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and different goals for the hackathon, but we managed to find a project we all liked and found it in ourselves to bring it to life.
## What we learned
How hackathons work and what they are. We also learned a lot more about building projects in a small team, and what to do when the scope of what to build is so wide.
## What's next for NutriScan
* Working ML
* Use of the camera as an input to the program
* Better UI
* Responsive design
* Release
## Inspiration:
Millions of active software developers use GitHub to manage their software development; however, our team believes the platform lacks incentives and an engagement factor. As GitHub users, we are aware of this problem and wanted to apply a twist that could open doors to a new, innovative experience. We also considered ways to make GitHub more accessible, such as addressing language barriers, but ultimately decided that wouldn't be especially useful or creative. Our final project was instead inspired by the idea of building something on GitHub aimed at youth going into CS. Combining these ideas, our team came up with DevDuels.
## What It Does:
Introducing DevDuels! A web-based game whose goal is to make GitHub more entertaining. One of our target audiences is the rising younger generation, who may struggle to learn to code or enter the coding world due to complications such as a lack of motivation or resources. Keeping in mind how many young people play video games, we created a game that introduces users to the world of coding (how to improve, open source, troubleshooting) with a competitive aspect that leaves users wanting to come back after their first visit. We applied this to our three major features: immediate AI feedback on code, Duo Battles, and a leaderboard.
In more depth, our AI feedback looks at the given code and analyses it. Shortly after, it provides a rating out of 10 and comments on what is good and bad about the code, such as its syntax and conventions.
## How We Built It:
The web app is built using Next.js as the framework. HTML, Tailwind CSS, and JS were the main languages used in the project’s production. We used MongoDB in order to store information such as the user account info, scores, and commits which were pulled from the Github API using octokit. Langchain API was utilised in order to help rate the commit code that users sent to the website while also providing its rationale for said rankings.
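We pull commits with octokit in JavaScript; as a language-neutral illustration, the same data can be fetched from the GitHub REST API like this (the repo and token are placeholders):

```python
import requests

def recent_commits(owner: str, repo: str, token: str, count: int = 5):
    """Fetch the latest commits for a repository via the GitHub REST API."""
    response = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        headers={"Authorization": f"Bearer {token}"},
        params={"per_page": count},
    )
    response.raise_for_status()
    return [{"sha": c["sha"][:7], "message": c["commit"]["message"]}
            for c in response.json()]

print(recent_commits("octocat", "Hello-World", "<personal-access-token>"))
```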
## Challenges We Ran Into
The first roadblock our team experienced occurred during ideation. Though we generated multiple problem-solution ideas, our main issue was that the ideas either were too common in hackathons, had little ‘wow’ factor that could captivate judges, or were simply too difficult to implement given the time allotted and team member skillset.
While working on the project itself, we struggled a lot with getting MongoDB to work alongside the other technologies we wished to utilise (Langchain, GitHub API). The frequent problems with getting the backend to work quickly diminished team morale as well. Despite these shortcomings, we consistently worked on our project down to the last minute to produce this final result.
## Accomplishments that We're Proud of:
Our proudest accomplishment is being able to produce a functional game following the ideas we’ve brainstormed. When we were building this project, the more we coded, the less optimistic we got in completing it. This was largely attributed to the sheer amount of error messages and lack of progression we were observing for an extended period of time. Our team was incredibly lucky to have members with such high perseverance which allowed us to continue working on, resolving issues and rewriting code until features worked as intended.
## What We Learned:
DevDuels was the first step into the world of hackathons for many of our team members. As such, there was so much learning throughout this project's production. During the design and ideation process, our members learned a lot about website UI design and Figma. Additionally, we learned HTML, CSS, and JS (Next.js) in building the web app itself. Some members learned APIs such as Langchain while others explored the GitHub API (octokit). All the new hackers navigated the GitHub website and Git itself (push, pull, merge, branch, etc.).
## What's Next for DevDuels:
DevDuels has a lot of potential to grow and become more engaging for users, through additional features such as weekly/daily goals that users can complete, the ability to create accounts, and advancing beyond just a single pair connecting with one another in Duo Battles.
Within these 36 hours, we worked hard on the main features we believed were the best. These can be improved with more time and thought, with changes small and large, from reconfiguring our backend to fixing our user interface.
## Inspiration
Our inspiration came from the struggles Asian people face due to the coronavirus. Hearing news about Asians being attacked just because the coronavirus originated in an Asian country is disheartening. Racial discrimination is what inspired us to make this website. These kinds of acts occur because people do not understand one another's lifestyle or perspective. Asians, like people of every other ethnicity, are human, so why can't we come to an understanding? We hope to teach people about all the other cultures out there so they can understand and relate to others as human beings.
## What it does
The program displays information about points of interest within the world we live in. Its purpose is to inspire others to learn more about different areas of the world and to better understand where people come from. In this day and age, racism and ignorance run rampant throughout our society. We hope that through the use of this program, others will want to learn about cultures beyond their own.
## How we built it
This web app was built using HTML and CSS with JavaScript. We used information from articles and images from Google to present our ideas. We also used an API provided by one of the sponsors, ArcGIS Online, to show a map of the countries we displayed.
## Challenges we ran into
Some of the challenges we ran into included implementing the map API from Esri into our program, learning JavaScript, and styling the program. We struggled with understanding the complexities of the Esri map interface since it was our first time utilizing an API for a project. Another problem we encountered was trying to connect the back-end and front-end of the project together. Since the majority of the team was more experienced with front-end development, we decided to make the project mainly front-end; this was also due to the limited amount of time we were allotted to work on this project. The last problem we faced was styling the entire project. Decisions involving settling on themes and other aesthetic choices were difficult to make during the first stages of production.
## Accomplishments that we're proud of
We accomplished many things throughout the production of this project, but the goals we were most proud of were getting the map API to work and the final appearance of the site. We were impressed with how much we were able to complete in the time we had. The map was our greatest accomplishment, in how the information is displayed and how it shows each location. Additionally, the site looks great, and the user interface is simple enough that users with no programming experience can use it easily.
## What we learned
While working on the project, we learned many things about programming and each other. All of us were unfamiliar with how to implement an API into a project, and now we have a good understanding of how to work with one for future projects. Another topic we learned more about was JavaScript. Although our team was familiar with Java, we did not know much about JavaScript; we can now use that knowledge to make new projects with more focus on the back-end. Additionally, we got the experience of working in a team, which the three of us did not have much of during school. By working as a team we learned more about teamwork, and due to the nature of the project, we also learned more about each other's cultures.
## What's next for WorldVIEW
As for the future of WorldVIEW, there are some areas of the website that could use improvement. One of the simple improvements that could be made in the near future would be to add more countries. The structure would be similar to the countries that we have in the project already, but with more information about the points of interest. Another improvement that could be made would be to optimize the project for mobile users. Presently, the website only works for computer users in fullscreen, but we would like to make adjustments for a better mobile experience.
|
## Inspiration
We were driven by a desire to prioritize mental health and saw an opportunity to leverage Hume's emotion analysis API to provide an innovative journaling platform. Music has a profound connection to mental health, and rather than developing a chatbot, we wanted to focus on a service that could offer emotional support. Personalized music recommendations based on journaled emotions could inspire users and help them feel understood and less alone.
## What it does
HeartStrings is a unique platform blending emotion-driven journaling with personalized music recommendations, offering users a deeper connection to their mental and emotional well-being. It uses Hume API to detect and analyze emotion in users' journal entries, tracking the emotional timeline through a calendar feature and providing Spotify playlist recommendations catered to the writer's current feelings.
## How we built it
We utilized Flask and SQLite for our backend, storing journal entries and associated moods. The Hume API was integrated to analyze text journal entries and detect prevalent emotions. For the frontend, we employed HTML and CSS to create a user-friendly interface displaying our features. The Spotify API, combined with keywords generated from detected emotions, allowed us to recommend corresponding playlists.
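As a rough illustration of the final step, here is a minimal sketch of how detected emotions can be turned into playlist recommendations. It assumes the Hume analysis has already produced a dict of emotion scores, and the keyword map is a made-up example rather than our production mapping; only the Spotify search endpoint is the real public API.

```python
import requests

# Hypothetical emotion -> search-keyword mapping (illustrative only).
EMOTION_KEYWORDS = {
    "joy": "feel good hits",
    "sadness": "comfort songs",
    "calmness": "lofi chill",
    "anger": "cathartic rock",
}

def recommend_playlists(emotion_scores: dict, spotify_token: str) -> list:
    """Pick the strongest detected emotion and search Spotify for playlists."""
    top_emotion = max(emotion_scores, key=emotion_scores.get)
    query = EMOTION_KEYWORDS.get(top_emotion, "mood mix")
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        headers={"Authorization": f"Bearer {spotify_token}"},
        params={"q": query, "type": "playlist", "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["playlists"]["items"]
    return [(p["name"], p["external_urls"]["spotify"]) for p in items]

# e.g. recommend_playlists({"joy": 0.7, "sadness": 0.1}, token)
```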
Our team divided the work as follows:
* Shreya and Akshitaa worked on the frontend and UI, ensuring a seamless user experience.
* Prath handled the integration of the Hume API, focusing on accurate emotion detection.
* Virali managed the Spotify API integration, ensuring relevant playlist recommendations.
## Challenges we ran into
* **Tech Stack Selection**: Deciding on a feasible yet innovative tech stack within a 24-hour period was challenging.
* **Project Feasibility**: Choosing a project topic that was both achievable and unique enough to innovate on existing technology required careful consideration.
* **UI Design**: Creating an appealing and intuitive UI presented design and implementation challenges.
* **Efficient GitHub Use**: Collaborating effectively using GitHub and debugging our code was essential and sometimes problematic.
* **API Integration**: Integrating the APIs smoothly posed significant technical challenges.
* **Target Audience and Purpose**: Defining a clear target audience and purpose for our project was crucial to its success.
## Accomplishments that we're proud of
We take pride in our ability to effectively divide tasks while maintaining collaborative problem-solving when needed to ensure the success of our program. Setting and meeting time constraints for individual tasks was another achievement, along with resolving many merge and Git conflicts through collective effort within our timeframe. Additionally, we are proud of developing this web app and successfully integrating an API that we were previously unfamiliar with into our project, and adding innovative features to expand on its functionality.
## What we learned
Throughout the project, we encountered numerous learning opportunities:
* **API Integration**: We discovered the complexities involved in integrating multiple APIs, particularly the Spotify and Hume APIs.
* **Project Management**: The importance of effective project management and work division became clear as we navigated tight deadlines.
* **Git Workflow**: We became proficient in using more advanced Git commands and workflows, essential for collaboration.
* **UI Design**: Designing an intuitive and appealing UI using CSS and Figma was a significant learning curve.
* **New Tech Stack**: Using Flask and SQLite for the first time introduced us to a new tech stack and broadened our development skills.
* **Adaptability**: We learned that project ideas could evolve rapidly, requiring decisiveness and flexibility.
## What's next for HeartStrings
Looking ahead, our immediate focus includes developing a mobile app to make journaling and music recommendations more accessible on-the-go. We're also excited to introduce audio and video journals, leveraging Hume API's voice emotion and expression analysis for deeper emotional insights. Improving Spotify music recommendations is another priority, refining our algorithm to deliver more specific playlists that resonate with users' current emotional states. We plan to enrich the platform with therapeutic content such as guided meditations, mindfulness exercises, and practical tips for well-being, enhancing its supportive role in users' mental health journeys. To deepen user engagement, we aim to provide advanced analytics for deeper mood insights and trends, allowing users to track their emotional patterns over time. Lastly, user personalization features will be expanded, allowing individuals to create profiles and customize their experience with tailored preferences for music, journaling prompts, and notifications.
|
As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool which provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible and anyone can learn information about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction.
We deployed our web app using Microsoft Azure, and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask.
We have taken many approaches to improving our platform’s scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batches. We used Azure Functions in order to achieve this.
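A minimal sketch of those ideas in Python with aiohttp (our production code runs inside Azure Functions): one shared session reuses connections, and a whole batch of calls runs concurrently. The URLs here are placeholders.

```python
import asyncio
import aiohttp

async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    # Each request reuses the session's pooled connections.
    async with session.get(url) as resp:
        resp.raise_for_status()
        return await resp.json()

async def fetch_batch(urls: list) -> list:
    # One session for the whole batch instead of reconnecting per call.
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))

# e.g. asyncio.run(fetch_batch(["https://example.com/fires", "https://example.com/smoke"]))
```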
Azure Notebook and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia.
Our web app was designed using React, Python, and d3.js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible, sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a colour-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include these features; however, we hope to implement them in further stages of Phoenix.
|
losing
|
## Inspiration
We all deal with nostalgia. Sometimes we miss our loved ones or places we visited and look back at our pictures. But what if we could revolutionize the way memories are shown? What if we said you can relive your memories and mean it literally?
## What it does
retro.act takes in a user prompt such as "I want uplifting 80s music" and will then use sentiment analysis and Cohere's chat feature to find potential songs out of which the user picks one. Then the user chooses from famous dance videos (such as by Michael Jackson). Finally, we will either let the user choose an image from their past or let our model match images based on the mood of the music and implant the dance moves and music into the image/s.
## How we built it
We used Cohere Classify for sentiment analysis and to filter out songs whose mood doesn't match the user's current state. Then we use Cohere's chat and RAG, grounded in the database of filtered songs, to identify songs based on the user prompt. We match images to music by first generating captions for the images using the Azure Computer Vision API, performing a semantic search with KNN and Cohere embeddings, and then using Cohere Rerank to smooth out the final choices. Finally, we make the image come to life by generating a skeleton of the dance moves using OpenCV and MediaPipe and then using a pretrained model to transfer the skeleton to the image.
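A minimal sketch of the skeleton-extraction step, assuming MediaPipe's standard Pose solution; the custom landmark pairing we feed to the pretrained animation model is omitted here.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_skeletons(video_path: str) -> list:
    """Return per-frame (x, y) pose landmarks from a dance video."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                frames.append([(lm.x, lm.y)
                               for lm in result.pose_landmarks.landmark])
    cap.release()
    return frames
```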
## Challenges we ran into
This was the most technical project any of us have ever done and we had to overcome huge learning curves. A lot of us were not familiar with some of Cohere's features such as Rerank, RAG and embeddings. In addition, generating the skeleton turned out to be very difficult. Apart from simply generating a skeleton using the standard MediaPipe landmarks, we realized we had to customize which landmarks we connect to make it a suitable input for the pretrained model. Lastly, understanding and being able to use the model was a huge challenge: we had to deal with issues such as dependency errors, lacking a GPU, fixing import statements, and deprecated packages.
## Accomplishments that we're proud of
We are incredibly proud of getting a very ambitious project done. While it was already difficult to get a skeleton of the dance moves, manipulating the coordinates to fit our pretrained model's specifications was very challenging. Lastly, we are proud of the amount of experimentation and determination it took to find a working model that could successfully take in a skeleton and output an "alive" image.
## What we learned
We learned about using MediaPipe and manipulating a graph of coordinates depending on the output we need. We also learned how to use pretrained weights and run models from open-source code. Lastly, we learned about various new Cohere features such as RAG and Rerank.
## What's next for retro.act
Expand our database of songs and dance videos to allow for more user options, and develop a more accurate indexing algorithm for iterating over and classifying the data in our database. We also hope to make the skeleton's motions smoother for more realistic images. Lastly, and this is very ambitious, we hope to build our own model to transfer skeletons onto images instead of using a pretrained one.
|
## Inspiration
We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool.
## What it does
AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures.
The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch.
## How we built it
In our first attempt, we used OpenCV to map the arms and face of the user and measure the angles between the body parts, mapping each configuration to a dance move. Although successful for a few gestures, this method was not ideal for more complex gestures like the "shoot". We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model achieves 98% accuracy on the test data set.
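A minimal sketch of a gesture classifier along these lines; the input size, layer widths, and training settings here are illustrative rather than our exact architecture.

```python
import tensorflow as tf

def build_gesture_model(num_classes: int = 10) -> tf.keras.Model:
    """Small CNN that maps a grayscale frame to one of 10 dance moves."""
    model = tf.keras.Sequential([
        # Assumed 64x64 single-channel input frames.
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# e.g. build_gesture_model().fit(x_train, y_train, epochs=10)
```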
We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done using dlib and OpenCV to detect facial features and map a static image over them.
## Challenges we ran into
We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time.
It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in.
## Accomplishments that we're proud of
It was one of our first experiences training an ML model for image recognition and it's a lot more accurate than we had even expected.
## What we learned
All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new!
## What's next for AirTunes
The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
|
## Inspiration
This year's theme of Nostalgia reminded us of our childhoods, reading stories and being so immersed in them. As a result, we created Memento as a way for us to collectively look back on the past by retelling it through thrilling and exciting stories.
## What it does
We created a web application that asks users to input an image, date, and brief description of the memory associated with the provided image. Doing so, users are then given a generated story full of emotions, allowing them to relive the past in a unique and comforting way. Users are also able to connect with others on the platform and even create groups with each other.
## How we built it
Thanks to Taipy and Cohere, we were able to bring this application to life. Taipy supplied both the necessary front-end and back-end components. Additionally, Cohere enabled story generation through natural language processing (NLP) via their POST chat endpoint (<https://api.cohere.ai/v1/chat>).
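A minimal sketch of the story-generation call against that endpoint; the prompt wording and the absence of extra request parameters are simplifications of what the app actually sends.

```python
import requests

def generate_story(description: str, date: str, api_key: str) -> str:
    """Ask Cohere's chat endpoint to retell a memory as a short story."""
    prompt = (f"Write a short, emotional story reliving this memory "
              f"from {date}: {description}")
    resp = requests.post(
        "https://api.cohere.ai/v1/chat",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"message": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]
```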
## Challenges we ran into
Mastering Taipy presented a significant challenge. Due to its novelty, we had difficulty styling freely, constrained by its syntax. Setting up virtual environments also posed challenges initially, but ultimately, we successfully learned the proper setup.
## Accomplishments that we're proud of
* We were able to build a functioning web application
* We were able to use Taipy and Cohere to power that application's core features
## What we learned
* We were able to learn a lot about the Taipy library, Cohere, and Figma
## What's next for Memento
* Adding login and sign-up
* Improving front-end design
* Adding image processing, able to identify entities within user given image and using that information, along with the brief description of the photo, to produce a more accurate story that resonates with the user
* Saving and storing data
|
winning
|
## Inspiration
On a night in January 2018, at least 7 students reported symptoms of being drugged after attending a fraternity party at Stanford [link](https://abc7news.com/2957171/). Although we are only halfway into this academic year, Stanford has already issued seven campus-wide reports about possible aggravated assault/drugging. This is not just a problem within Stanford: drug-facilitated sexual assault (DFSA) is a serious problem among teens and college students nationwide. Our project is deeply motivated by this saddening situation facing people around us at Stanford, and by the uneasiness caused by the possibility of experiencing such crimes. This project delivers SafeCup, a sensor-embedded smart cup that warns owners if their drink has been tampered with.
## What it does
SafeCup is embedded with a simple yet highly sensitive electrical conductivity (EC) sensor which detects the concentration of total dissolved solids (TDS). Using an auto-ranging resistance measurement system designed to measure the conductivity of various liquids, the cup takes several measurements within a certain timeframe and warns the owner by pushing a notification to their phone if it senses a drastic change in the concentration of TDS. Such a change signifies a change in the content of the drink, which can be caused by the addition of chemicals such as drugs.
## How we built it
We used high-surface-area electrodes and a set of resistors to build the EC sensor, and an Arduino microprocessor to collect the data. The microprocessor sends the data to a computer, which analyzes the measurements, performs the computation, and notifies the owner through "pushed", an API that sends push notifications to Android or iOS devices.
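A minimal sketch of the host-side monitoring loop, assuming the Arduino prints one conductivity reading per line over serial; the port name and change threshold are placeholders, and the alert function stands in for the call to "pushed".

```python
import serial  # pyserial

THRESHOLD = 0.15  # assumed: fractional change that counts as "drastic"

def send_push_alert() -> None:
    # Placeholder for the "pushed" push-notification call.
    print("ALERT: possible tampering detected")

def monitor(port: str = "/dev/ttyUSB0") -> None:
    baseline = None
    with serial.Serial(port, 9600, timeout=2) as conn:
        while True:
            line = conn.readline().decode().strip()
            if not line:
                continue
            reading = float(line)
            if baseline is None:
                baseline = reading  # calibrate to the drink just poured in
            elif abs(reading - baseline) / baseline > THRESHOLD:
                send_push_alert()
                baseline = reading  # re-baseline after alerting
```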
## Challenges we ran into
The main challenge was getting a stable and accurate EC reading from the home-made sensor. EC depends on the surface area of and distance between the electrodes, so we had to design an electrode assembly where the distance between the electrodes does not vary due to movement. Liquids can have a large range of conductivity, from 0.005 mS/cm to 5000 mS/cm. In order to measure conductivity at the lower end of the range, we increased the surface area of our electrodes significantly, to around 80 cm^2, while typical commercial TDS sensors are less than 0.5 cm^2. In order to measure such a large range of values, we had to design a dynamic auto-ranging system with a range of reference resistors.
Another challenge was that we were unable to make our cup look more beautiful, or normal/party-like... This was mainly because of the size of the Arduino Uno microprocessor, which is hard to disguise within a normal party solo cup. This is why, after several failed cup designs, we decided to make the cup simple and transparent, and focus on demonstrating the technology instead of the aesthetics.
## Accomplishments that we're proud of
We're most proud of the simplicity of the device. The device is made from commonly found items, which also means it can be very cheap to manufacture. A typical commercial TDS measuring pen can be found for as low as $5, and this device is even simpler than a typical TDS sensor. We are also proud of the auto-ranging resistance measurement. Our cup is able to automatically calibrate to the new drink being poured in, adjusting to its level of resistance (note that different drinks have different chemical compositions and therefore different resistances). This allows our cup to accommodate a wide range of different drinks. We are also proud of finding a simple solution to notify users - developing an app would have taken away too much time that we could otherwise put into furthering the cup's hardware design, given a small team of just two first-time hackers.
## What we learned
We learned a lot about Arduino development, circuits, and refreshed our knowledge of Ohm's law.
## What's next for SafeCup
The prototype we've delivered for this project is definitely not a finished product that is ready to be used. We have not performed any tests on whether liquids from the cup are actually consumable, since the liquids have been in contact with non-food-grade metal and may undergo electrochemical transformation due to the potential applied to the liquid. Our next step would be to ensure consumer safety. A TDS sensor alone also might not be sensitive enough for liquids with an already high amount of TDS. Adding other simple complementary sensors could greatly increase the sensitivity of the device. Such sensors might include a dielectric constant sensor, a turbidity sensor, a simple UV-Vis light absorption sensor, or even simple electrochemical measurements. Other sensors such as a water level sensor could even be used to keep track of the amount of drink you have had throughout the night. We would also use a microprocessor with a smaller footprint, which would greatly compact the device. In addition, we would like to incorporate wireless features that would eliminate the need to wire to a computer.
## Ethical Implications For "Most Ethically Engaged Hack"
We believe that our project could mean a lot to young people facing the risk of DFSA. Statistically, these are mostly college students and teenagers - people who surround us all the time and are especially vulnerable to this type of crime. We have come a long way to show that the idea of using a simple TDS sensor to detect illegal drugging works. With future improvements in its appearance and safety, we believe it could be a viable product that improves the safety of many people around us at colleges and parties.
|
## Inspiration
We came into the hackathon knowing we wanted to focus on a project concerning sustainability. While looking at problems, we found that lighting is a huge contributor to energy use, accounting for about 15% of global energy use (DOE, 2015). Our idea for this specific project came from the dark areas in the Pokémon video games, where the player only has limited visibility around them. While we didn't want to have as harsh of a limit on field of view, we wanted to be able to dim the lights in areas that weren't occupied, to save energy. We especially wanted to apply this to large buildings, such as Harvard's SEC, since oftentimes all of the lights are left on despite very few people being in the building. In the end, our solution is able to dynamically track humans, and adjust the lights to "spotlight" occupied areas, while also accounting for ambient light.
## Methodology
Our program takes video feed from an Intel RealSense D435 camera, which gives us both RGB and depth data. With OpenCV, we use Histogram of Oriented Gradients (HOG) feature extraction combined with a Support Vector Machine (SVM) classifier to detect the locations of people in a frame, which we then stitch with the camera's depth data to identify the locations of people relative to the camera’s location. Using user-provided knowledge of the room’s layout, we can then determine the position of the people relative to the room, and then use a custom algorithm to determine the power level for each light source in the room. We can visualize this in our custom simulator, developed to be able to see how multiple lights overlap.
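A minimal sketch of the detection step, using OpenCV's built-in HOG + SVM people detector; the depth lookup is simplified to sampling the depth frame at each box's centre.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def locate_people(rgb_frame, depth_frame):
    """Return (x, y, distance) for each detected person in the frame."""
    boxes, _weights = hog.detectMultiScale(rgb_frame, winStride=(8, 8))
    people = []
    for (x, y, w, h) in boxes:
        cx, cy = x + w // 2, y + h // 2
        # Assumes depth_frame is a numpy array aligned with the RGB frame.
        people.append((cx, cy, float(depth_frame[cy, cx])))
    return people
```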
## Ambient Light Sensing
We also implement a sensor system to sense ambient light, to ensure that when a room is brightly lit energy isn't wasted in lighting. Originally, we set up a photoresistor with a SparkFun RedBoard, but after having driver issues with Windows we decided to pivot and use camera feedback from a second camera to detect brightness. To accomplish this we use a 3-step process, within which we first convert the camera's input to grayscale, then apply a box filter to blur the image, and then finally sample random points within the image and average their intensity to get an estimate of brightness. The random sampling boosts our performance significantly, since we're able to run this algorithm far faster than if we sampled every single point's intensity.
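A minimal sketch of that three-step estimate; the blur kernel size and sample count are illustrative.

```python
import cv2
import numpy as np

def estimate_brightness(frame, samples: int = 200) -> float:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # step 1: grayscale
    blurred = cv2.boxFilter(gray, -1, (9, 9))        # step 2: box blur
    h, w = blurred.shape
    ys = np.random.randint(0, h, samples)            # step 3: random sampling
    xs = np.random.randint(0, w, samples)
    return float(blurred[ys, xs].mean())
```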
## Highlights & Takeaways
One of our group members focused on using the video input to determine people’s location within the room, and the other worked on the algorithm for determining how the lights should be powered, as well as creating a simulator for the output. Given that neither of us had worked on a large-scale software project in a while, and one of us had practically never touched Python before the start of the hackathon, we had our work cut out for us.
Our proudest moment was by far when we finally got our video code working and saw the bounding box appear around the person in front of the camera as position data started streaming across our terminal. However, between calibrating devices, debugging hardware issues, and a few dozen driver installations, we learned the hard way that working with external devices can be quite a challenge.
## Future steps
We've brainstormed a few ways to improve this project going forward: more optimized lighting algorithms could improve energy efficiency, and multiple cameras could be used to detect orientation and predict people's future states. A discrete ambient light sensor module could also be developed, for mounting anywhere the user desires. We also could develop a bulb socket adapter to retrofit existing lighting systems, instead of rebuilding from the ground up.
|
## Inspiration
I was inspired to make this device while sitting in physics class. I felt compelled to take something I learned inside the classroom and apply my education to something practical. Growing up, I remember playing with magnetic kits and loving the feeling of repulsion between magnets.
## What it does
There is a base layer of small magnets all taped together so the north pole is facing up. Hall-effect devices measure the variances in the magnetic field created by the user's magnet, which is attached to their finger. This allows the device to track the user's finger and determine how it is interacting with the upward-pointing magnetic field.
## How I built it
It is built using the Intel Edison. Each hall-effect device is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate. This determines where the user's finger is. From there, the analog data is sent via serial port to a program on the computer that demonstrates that it works, taking the data and mapping the motion of the object.
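A minimal sketch of the position logic, assuming the on/off readings arrive as a small 2D grid (the grid size here is illustrative): the finger position is taken as the centroid of the triggered sensors.

```python
def finger_position(grid):
    """Return the (x, y) centroid of all triggered sensors, or None."""
    active = [(x, y)
              for y, row in enumerate(grid)
              for x, val in enumerate(row) if val]
    if not active:
        return None
    n = len(active)
    return (sum(x for x, _ in active) / n,
            sum(y for _, y in active) / n)

# e.g. finger_position([[0, 0, 0, 0],
#                       [0, 1, 1, 0],
#                       [0, 1, 0, 0],
#                       [0, 0, 0, 0]])  ->  (1.33, 1.33)
```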
## Challenges I ran into
I faced many challenges. Two of them dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would allow the device to be more accurate. The other challenge dealt with having a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together, because they are in an unstable configuration, to create an almost uniform magnetic field on the base. Another problem I ran into was dealing with the Edison: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up dealing with serial ports; I initially tried mapping the data into a Unity game.
## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device as a proof of concept for creating a more interactive environment for the user with a sense of touch, unlike devices such as the Kinect and Leap Motion that track your motion in thin air without any real interaction. This concept can be useful in learning environments, or in helping people in physical therapy learn to do things again after a tragedy, since it is always better to learn with a sense of touch.
## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.
## What's next for Tactile Leap Motion
Creating a better prototype with better hardware (stronger magnets and more accurate sensors)
|
winning
|
## Inspiration
I've always been inspired by the notion that even as just **one person** you can make a difference. I really took this to heart at DeltaHacks in my attempt to individually create a product that could help individuals struggling with their mental health by providing **actionable and well-studied techniques** in a digestible little Android app. As a former neuroscientist, my educational background and research in addiction medicine have shown me the incredible need for more accessible tools for addressing mental health, as well as the power of simple but elegant solutions to make mental health more approachable. I chose to employ a technique used in Cognitive Behavioral Therapy (CBT), one of the most (if not the most) well-studied mental health interventions in psychological and medical research. This technique is called automatic negative thought (ANT) records.
Central to CBT is the principle that psychological problems are based, in part, on faulty/unhelpful thinking and behavior patterns. People suffering from psychological problems can learn better ways of coping with them, thereby relieving their symptoms and becoming more effective in their lives.
CBT treatment often involves efforts to change thinking patterns and challenge distorted thinking, thereby enhancing problem-solving and allowing individuals to feel empowered to improve their mental health. CBT automatic negative thought (ANT) records and CBT thought challenging records are widely used by mental health workers to provide a structured way for patients to keep track of their automatic negative thinking and challenge these thoughts to approach their life with greater objectivity and fairness to their well-being.
See more about the widely studied Cognitive Behavioral Therapy at this American Psycological Association link: [link](https://www.apa.org/ptsd-guideline/patients-and-families/cognitive-behavioral)
Given the app's focus on finding objectivity in a sea of negative thinking, I really wanted the UI to be simple and direct. This led me to take heavy inspiration from a familiar and nostalgic brand recognized for its bold simplicity, objectivity and elegance - "noname". [link](https://www.noname.ca/)
This is how I arrived at **noANTs** - i.e., no (more) automatic negative thoughts
## What it does
**noANTs** is a *simple and elegant* solution to tracking and challenging automatic negative thoughts (ANTs). It combines worksheets from research and clinical practice into a more modern Android application to encourage accessibility of automatic negative thought tracking.
See this McGill worksheet, one of many resources which informed some of the questions in the app: [link](https://www.mcgill.ca/counselling/files/counselling/thought_record_sheet_0.pdf)
## How I built it
I really wanted to build something that many people would be able to access and an Android application just made the most sense for something where you may need to track your thoughts on the bus, at school, at work or at home.
I challenged myself to utilize the newest technologies Android has to offer, building the app entirely in Jetpack Compose. I had some familiarity using the older Fragment-based navigation in the past but I really wanted to learn how to utilize the Compose Navigation and I can excitedly say I implemented it successfully.
I also used Room, a data persistence library which provided an abstraction layer for the SQLite database I needed to store the thought records which the user generates.
## Challenges I ran into
This is my first ever hackathon and I wanted to challenge myself to build a project alone to truly test my limits in a time crunch. I surely tested them! Designing this app with strict adherence to noname's branding meant that I needed to get creative, making many custom components from scratch to fit the UI style I was going for. This made even ostensibly simple tasks, like creating a slider, incredibly difficult, but rewarding in the end.
I also had far loftier goals for how much I wanted to accomplish, with aspirations of creating a detailed progress screen, an export functionality to share with a therapist/mental-health support worker, editing and deleting, and more. I am nevertheless incredibly proud to showcase a functional app that I truly believe could make a significant difference in people's lives, and I learned to prioritize creating an MVP, which I would love to continue building upon in the future.
## Accomplishments that I'm proud of
I am so proud of the hours of work I put into something I can truly say I am passionate about. There are few things I think should be valued more than an individual's mental health, and knowing that my contribution could make a difference to someone struggling with unhelpful/negative thinking patterns, which I myself often struggle with, makes the sleep deprivation and hours of banging my head against the keyboard eternally worthwhile.
## What I learned
Being under a significant time crunch for DeltaHacks challenged me to be as frugal as possible with my time and design strategies. I think what I found most valuable about the time crunch, my inexperience in software development, and working solo was that together they forced me to come up with the simplest solution possible to a real problem. I think this mentality should be embraced more often, especially in tech. There is no doubt a place for, and an incredible allure to, deeply complex solutions with tons of engineers and technologies, but being forced to innovate under constraints like mine reminded me of the work even one person can do to drive positive change.
## What's next for noANTs
I have countless ideas on how to improve the app to be more accessible and helpful to everyone. This would start with the lofty goals described in the challenges section, but I would also love to extend this app to iOS users as well. I'm itching to learn cross-platform tools like KMM and React Native, and I think this would be a welcome challenge to do so.
|
## Inspiration
Amidst our hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought gave birth to our inspiration for this web-based app, which provides activities customised to a person's mood that help them relax and rejuvenate.
## What it does
We planned to create a platform that detects a user's mood through facial recognition, recommends yoga poses to lift their mood and evaluates the correctness of their posture, and helps users jot down their thoughts in a self-care journal.
## How we built it
* Frontend: HTML5, CSS (framework: Tailwind CSS), JavaScript
* Backend: Python, JavaScript
* Server side: Node.js, Passport.js
* Database: MongoDB (for user login), MySQL (for mood-based music recommendations)
## Challenges we ran into
Incorporating OpenCV in our project was a challenge, but it was very rewarding once it all worked.
Also, since all of us are first-time hackers and due to time constraints, we couldn't deploy our website externally.
## Accomplishments that we're proud of
Mental health issues are among the least-addressed diseases even though medically they rank in the top 5 chronic health conditions.
We at Umang are proud to have taken notice of such an issue and to help people recognise their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective as well as push them towards a more sound mind and body.
We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that this website includes so many features to help improve the user's mental health: letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture just by sitting at home with an AI instructor.
Furthermore, completing this within 24 hours was an achievement in itself since it was our first hackathon which was very fun and challenging.
## What we learned
We learnt how to implement OpenCV in projects. Another skill set we gained was using Tailwind CSS. Besides that, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists.
## What's next for Umang
While the core functionality of our app is complete, it can of course be further improved.
1) We would like to add a chatbot which can be the user's guide/best friend and give advice when the user is in mental distress.
2) We would also like to add a mood log which can keep track of the user's daily mood; if a serious degradation of mental health is seen, it can directly connect the user to medical helpers or therapists for proper treatment.
This lays the groundwork for further expansion of our website. Our spirits are up and the sky is our limit.
|
## Inspiration
Inspired by mental health needs and the popular app BeReal, we thought it was important for users to have a space to look inwards and reflect on their feelings and support themselves.
## What it does
It prompts users to say how they're doing and complete one self care activity. Once that is completed, we have a large range of other activities available to browse.
## How we built it
We used the Android Firebase hackpack to get started, working in Android Studio with Java and XML files. We did everything from mental health research to full-stack development.
## Challenges we ran into
Setting up the necessary tools was a large barrier coming from different platforms. Android Studio was also a learning curve since we are both complete app-dev beginners and had never used any similar IDE.
## Accomplishments that we're proud of
Creating a finished product that's straightforward yet effective and has the potential to help people much like ourselves.
## What we learned
We learned about the full process of brainstorming ideas, conceptualizing a product, and implementing those ideas into a completed interface.
## What's next for HAY (How Are You?)
We'd love to do more research and include accessible citations for those sources, and make the UI more engaging and easy to use. We'd like to add more tools for users such as goal tracking and achievements for continued self care.
|
partial
|
## Inspiration
* We took inspiration from Rate My Prof to come up with the idea.
## What it does
* **Rate My Study Space** helps students discover and evaluate Study Spaces on Campus
## How we built it
* Front-End Website made using HTML and Bootstrap
* AWS Lambda and DynamoDB to process and store the reviews
## Challenges we ran into
* Some members having limited knowledge of web development
* Brainstorming ideas was a particularly difficult task
* The usual "bugs you can't find the solution to"
## Accomplishments that we're proud of
* We have a ~~barely~~ working website
## What we learned
* Learning basics of web development and introduction to Git and GitHub
* Using AWS Lambda and DynamoDB
## What's next for **Rate My Study Space**
* Feature to allow students to indicate when they are using a certain Study Space
* Be notified when a certain Study Space is freed up
* Tracking the busiest times of the day
* Improvements to the information in reviews
|
## Inspiration
What inspired us to create BusyMap was the frustrations that arose when planning for trips during these socially distant months. A factor that we must weigh heavier nowadays in contrast to pre-COVID19 days is how crowded a destination may be. There were countless times during the past few months where we had to turn back from our desired location due to the lack of ability to socially distance responsibly.
## What it does
BusyMap provides its users with a comprehensive and real-time platform to visually represent the busyness of areas in the form of heatmaps (using a combination of traffic data and hours of operations for places). That is, enabling users to visualize what areas are crowded during what time of day.
## How We built it
BusyMap’s front-end is built with vanilla Javascript/CSS/HTML supported by a Python + Flask backend that connects to GeoTab and Google Maps’ API. Both the website and the supporting backend is hosted on a D2s v3 Microsoft Azure Ubuntu instance.
Front-end/back-end communication is established exclusively through RESTful calls. Our program has the potential to become fully stateless and run entirely on serverless services.
We made use of GeoTab's UrbanInfrastructures/IdlingAreas dataset as the source of information for our traffic visualizations: we can specify the hour of day and be given the congestion levels of a desired area.
As mentioned above, we are using Microsoft Azure for hosting the entirety of our project on an Ubuntu instance with NGINX as our reverse proxy. We see great potential in migrating our app to Azure Functions serverless compute to improve scalability and lessen the time needed for maintenance.
## Challenges we ran into
Given that we are retrieving large quantities of data from Geotab, processing and displaying data quickly to create a seamless user experience was important to us.
## What's next for BusyMap
* Add new datasets including COVID-19 metrics and weather
|
## Inspiration
Food waste is a huge issue globally. Overall, we throw out about 1/3 of all of the food we produce ([FAO](https://www.fao.org/3/mb060e/mb060e00.pdf)), and that number is even higher at up to 40% in the U.S. ([Gunders](https://www.nrdc.org/sites/default/files/wasted-food-IP.pdf)). Young adults throw away an even higher proportion of their food than other age groups ([University of Illinois](https://www.sciencedaily.com/releases/2018/08/180822122832.htm)).
All of us on the team have had problems with buying food and then forgetting about it. It's been especially bad in the last couple of years because the pandemic has pushed us to buy more food less often. The potatoes will be hiding behind some other things, and by the time we remember them, they're almost potato plants.
## What it does
Foodpad is an application to help users track what food they have at home and when it needs to be used by. Users simply add their groceries and select how they're planning to store the item (fridge, pantry, freezer), and the app suggests an expiry date. The app even suggests the best storage method for the type of grocery. The items are sorted so that the soonest expiry date is at the top. As the items are used, the user removes them from the list. At any time, the user can access recipes for the ingredients.
## How we built it
We prototyped the application in Figma and built a proof-of-concept version with React JS. We make API calls to the open-source TheMealDB, which has recipes for given ingredients.
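A minimal sketch of that lookup, assuming TheMealDB's public ingredient-filter endpoint; our actual calls live in the React code, but this Python version shows the shape of the request.

```python
import requests

def recipes_for(ingredient: str) -> list:
    """Return meal names from TheMealDB that use the given ingredient."""
    resp = requests.get(
        "https://www.themealdb.com/api/json/v1/1/filter.php",
        params={"i": ingredient},
        timeout=10,
    )
    resp.raise_for_status()
    meals = resp.json().get("meals") or []  # API returns null for no match
    return [meal["strMeal"] for meal in meals]

# e.g. recipes_for("potato") -> a list of potato-based meal names
```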
## Challenges we ran into
Only one of us had ever used JavaScript before, so it was tough to figure out how to use that, especially to get it to look nice. None of us had ever used Figma either, and it was tricky at first, but it's a really lovely tool and we'll definitely use it again in the future!
## Accomplishments that we're proud of
* We think it's a really cool idea that would be helpful in our own lives and would also be useful for other people.
* We're all more hardware/backend coders, so we're really proud of the design work that went into this and just pushing ourselves outside of our comfort zones.
## What we learned
* how to prioritize tasks in a project over a very short timeframe for an MVP
* how to code in JS and use React
* how to design an application to look nice
* how to use Figma
## What's next for foodPad
* release it!
* make the application's UI match the design more closely
* expanding the available food options
* giving users the option of multiple recipes for an ingredient
* selecting recipes that use many of the ingredients on the food list
* send push notifications to the user if the product is going to expire in the next day
* if a certain food keeps spoiling, suggest to the user that they should buy less of an item
|
partial
|
## Inspiration
As students, we find the environment of a coffee shop to be calming and friendly. To many, it is also a place to socialize and relax after a stressful day.
## What it does
Explore our coffee shop with a built-in pomodoro timer, talk to other customers on your breaks!
## How we built it
We built brew using Unity and C#.
## Challenges we ran into
This was our first time using Unity and C#, which was definitely a little intimidating, but we learned a lot in the span of a few hours!
## Accomplishments that we're proud of
This was our first hackathon!
## What we learned
The basics of C# and Unity.
## What's next for brew
* adding cool visual features such as an 'hourglass' visual for our timer, in the form of a mug with coffee slowly running out of it
* implementing a simple dialogue system to allow for conversations with other customers (NPCs), as well as other mini games and activities that can be completed on breaks
|
## 💡 Inspiration
With the continuous evolution of technology, media consumption has become increasingly immersive - except for books. Despite advancements in other forms of entertainment, reading remains a solitary and unchanged experience. What if you could be transported into the world of the book you’re reading? This question inspired us to create Ambianced.
## What it does ⚙️ 🛠️
Ambianced revolutionizes your reading experience by transforming your surroundings to match the atmosphere of the book you’re reading. It provides synchronized visual scenery and auditory stimuli that align with the text on your page, creating an immersive environment that enhances the narrative and sets the perfect tone for each scene.
## 👨🏼💻👨🏽💻👩🏻💻👨🏼💻 How we built it
We developed Ambianced using Next.js for both the front-end and back-end, ensuring a seamless and responsive user interface. For user authentication, we implemented Auth0, creating an easy-to-use login page. To process the text on the book pages, we utilized the AWS SDK (Textract) for Optical Character Recognition (OCR). The extracted text is then processed by OpenAI’s GPT-4 API to select appropriate Spotify tracks and generate image prompts for DALL-E 3, enhancing both the visual and auditory experience.
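A minimal sketch of the OCR step: our app uses the JavaScript AWS SDK inside Next.js, but the equivalent Textract call via boto3 shows the idea (region and credentials are assumed to come from the default AWS config).

```python
import boto3

def extract_page_text(image_bytes: bytes) -> str:
    """Run Textract OCR on a photographed book page and return its text."""
    client = boto3.client("textract")
    result = client.detect_document_text(Document={"Bytes": image_bytes})
    lines = [block["Text"] for block in result["Blocks"]
             if block["BlockType"] == "LINE"]
    return "\n".join(lines)

# e.g. extract_page_text(open("page.jpg", "rb").read())
```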
## Challenges we ran into 🏋🏻
* **Spotify API Integration**: Many issues with refresh and access tokens.
* **AWS CLI**: We had not set up the AWS CLI before, meaning there was no .aws directory holding a default profile; we also had to set up the bucket and IAM in the console.
## 💪🏆 Accomplishments that we're proud of
* Setting up AWS CLI, and then setting up AWS SDK
We are proud of ourselves for working with a lot of tools we've never worked with before. We did a great job defining the tasks that needed to be done to complete the greater project, using sticky notes to keep us on track. We were the first to complete the scavenger hunt and still managed to get most of the tasks done.
## What we learned 📌🚀
We did an amazing job familiarizing ourselves with and harmonizing powerful tools like AWS, OpenAI, and Spotify, in such a short amount of time. Throughout this project, we gained hands-on experience with some of the most advanced tools available. We learned how to effectively utilize AWS for OCR, integrate OpenAI’s capabilities for text processing, and sync audio using Spotify’s API. This experience has significantly broadened our technical skills and understanding of these powerful platforms.
## 🎯🧱 What's next for Ambianced
* **LED light compatibility** 💡: Integrating LED lighting for a more vibrant experience.
* **Multi-monitor support** 🖥️: Expanding to multiple screens for a broader immersive effect.
* **Eye-tracking technology** 👀: More refined and dynamic scenery and audio.
|
## Inspiration
This project was inspired by the Professional Engineering course taken by all first year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice that was given by St. Peter's Residence at Chedoke, a long term residence care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen.
## What it does
It notifies nurses if a resident falls or stumbles via a push notification to the nurses' phones directly, or ideally to a nurses' station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has fallen.
## How we built it
We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to collect information about the movement of a resident's foot and determine if the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we use Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident.
## Challenges we ran into
Upon developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs would position the feet as if the resident is laying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax.
To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in foot movement that are consistent with a fall. The two main patterns we focused on were the following (a code sketch of this check appears after the list):
1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact.)
2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by a constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact).
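A minimal sketch of the first check, assuming accelerometer magnitudes sampled at a fixed rate; the thresholds here are placeholders, not the tuned values running on the device.

```python
IMPACT_JERK = 30.0  # m/s^3 - assumed threshold for a sudden impact
SAMPLE_DT = 0.05    # s between MPU6050 readings (assumed 20 Hz)

def detect_sudden_fall(accel_samples, shoe_is_sideways):
    """Pattern 1: a jerk spike followed by the shoe turning perpendicular."""
    jerks = [abs(a2 - a1) / SAMPLE_DT
             for a1, a2 in zip(accel_samples, accel_samples[1:])]
    return bool(jerks) and max(jerks) > IMPACT_JERK and shoe_is_sideways
```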
## Accomplishments that we're proud of
We are proud of accomplishing the development of an algorithm that consistently is able to communicate to an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole.
## What we learned
We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process.
## What's next for VATS
Next steps would be to optimize our circuits by using equivalent components in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas of a shoe where we could store our system (such as the tongue).
|
losing
|
## Inspiration
You see a **TON** of digital billboards at NYC Time Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**.
## What it does
I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~
The billboard is equipped with a **camera**, which periodically samples the audience in front of it. Then, it passes the image to a series of **computer vision** algorithms (thank you, *Microsoft Cognitive Services*), which extract several characteristics of the viewer.
In this prototype, the billboard analyzes the viewer's:
* **Dominant emotion** (from facial expression)
* **Age**
* **Gender**
* **Eye-sight (detects glasses)**
* **Facial hair** (just so that it can remind you that you need a shave)
* **Number of people**
And considers all of these factors to present with targeted ads.
**As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)**
## How I built it
Here is what happens step-by-step (a sketch of steps 1-2 appears after the list):
1. Using **OpenCV**, billboard takes an image of the viewer (**Python** program)
2. Billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the result
3. Billboard analyzes the result and decides on which ads to serve (**Python** program)
4. Finalized ads are sent to the Billboard front-end via **Websocket**
5. Front-end contents are served from a local web server (**Node.js** server built with **Express.js framework** and **Pug** for front-end template engine)
6. Repeat
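A minimal sketch of steps 1-2, assuming the Cognitive Services Face API v1.0 detect endpoint with attribute flags; the region and key are placeholders.

```python
import cv2
import requests

FACE_URL = "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect"

def sample_audience(api_key: str) -> list:
    """Capture one frame and return per-face attributes from the Face API."""
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        return []
    _, jpg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        FACE_URL,
        headers={"Ocp-Apim-Subscription-Key": api_key,
                 "Content-Type": "application/octet-stream"},
        params={"returnFaceAttributes":
                "age,gender,glasses,facialHair,emotion"},
        data=jpg.tobytes(),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # one entry per detected face
```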
## Challenges I ran into
* Time constraint (I actually had a huge project due Saturday at midnight - my fault - so I only **had about 9 hours to build** this. Also, I built this by myself without teammates)
* Putting many pieces of technology together, and ensuring consistency and robustness.
## Accomplishments that I'm proud of
* I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates.
## What's next for Interactive Time Square
* This prototype was built with off-the-shelf computer vision service from Microsoft, which limits the number of features for me to track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the Billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~
|
## Inspiration
Deep neural networks are notoriously difficult to understand, making it difficult to iteratively improve models and understand what's working well and what's working poorly.
## What it does
ENNUI allows for fast iteration of neural network architectures. It provides a unique blend of functionality for an expert developer and ease of use for those who are still learning. ENNUI visualizes and allows for modification of neural network architectures. Users are able to construct any (non-recurrent) architecture they please.
We take care of input shapes and sizes by automatically flattening and concatenating inputs where necessary. Furthermore, it provides full access to the underlying implementation in Python / Keras. Neural network training is tracked in real time and can be performed both locally and on the cloud.
## How we built it
We wrote a JavaScript frontend with an elegant drag-and-drop interface. We built a Python back end and used the Keras framework to build and train the user's neural network. To convert from our front end to our back end, we serialize to JSON in our own format, which we then parse.
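A minimal sketch of the parse step for a toy version of the JSON format; the real format supports arbitrary branching and merging, while this handles a simple linear chain to show the idea.

```python
import json
import tensorflow as tf

# Hypothetical layer registry mapping type names to Keras constructors.
LAYERS = {
    "Dense": lambda p: tf.keras.layers.Dense(p["units"],
                                             activation=p.get("activation")),
    "Flatten": lambda p: tf.keras.layers.Flatten(),
}

def build_from_json(spec: str) -> tf.keras.Model:
    """Parse a serialized architecture and build a Keras functional model."""
    graph = json.loads(spec)
    x = inputs = tf.keras.Input(shape=tuple(graph["input_shape"]))
    for node in graph["layers"]:
        x = LAYERS[node["type"]](node.get("params", {}))(x)
    return tf.keras.Model(inputs, x)

# Example spec:
# {"input_shape": [28, 28], "layers": [{"type": "Flatten"},
#   {"type": "Dense", "params": {"units": 10, "activation": "softmax"}}]}
```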
## Challenges we ran into
Sequential models are simple. Models that allow arbitrary branching and merging are less so. Our task was to convert a graph-like structure into functional Python code. This required checks to ensure that the shapes and sizes of various tensors matched, which proved challenging when dealing with tensor concatenation.
There are many parameters in a neural network. It was challenging to design an interface which allows the user access to all of them in an intuitive manner.
Integrating with the cloud was challenging, specifically, using Docker and Kubernetes for deployment.
## Accomplishments that we're proud of
A novel and state-of-the-art development and learning tool for deep neural networks. The amount of quality code we produced in 24 hours.
## What's next for Elegant Neural Network User Interface (ENNUI)
We want to add a variety of visualizations both during and after training. We want the colors of each of the layers to change based on how influential the weights are in the network. This will help developers understand the effects of gradient updates and identify sections of the network that should be added to or pruned. Fundamentally, we want to inspire a change in development style: from people waiting until they have a trained model to change parameters or architecture, to watching training and making changes to the network architecture as needed. We want to add support for frameworks other than Keras, such as TensorFlow, PyTorch, and CNTK. Furthermore, we'd love to see our tool used in an educational environment to help students better understand neural networks.
|
## A bit about our thought process...
If you're like us, you might spend over 4 hours a day watching *Tiktok* or just browsing *Instagram*. After such a bender you generally feel pretty useless or even pretty sad as you can see everyone having so much fun while you have just been on your own.
That's why we came up with a healthy social media network where you directly interact with other people who are going through similar problems as you, so you can work through them together. The network itself comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to!
## What does it even do
It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options:
**1)** You can join private discussions based on the mood that you're currently in. Here you can interact completely as yourself, since the chat is anonymous. And if you don't like the person, they don't have any way of contacting you and you can just refresh away!
**2)** You can join group discussions about hobbies that you might have and meet interesting people that you can then send private messages to! All the discussions are also supervised by our machine learning algorithms to make sure that no one is being picked on.
## The Fun Part
Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node**, and was passed into **Socket.io**. Through over 700 lines of **JavaScript** code, we were able to create multiple chat rooms and lots of different analytics.
One thing that was really annoying was storing data both in **Firebase** and locally in **Node.js** so that we could run analytics while also sending messages at a fast rate!
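To give a flavour of the supervision piece, here is a minimal Python sketch of message screening with NLTK's VADER analyzer; our actual stack does this in Node, and the threshold here is arbitrary:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

def should_flag(message: str, threshold: float = -0.6) -> bool:
    """Flag a chat message for review when sentiment is strongly negative."""
    return analyzer.polarity_scores(message)["compound"] < threshold

print(should_flag("You are awful and nobody likes you"))  # likely True
```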
There are tons of other things that we did, but as you can tell my **handwriting sucks....** So please instead watch the YouTube video that we created!
## What we learned
We learned how important and powerful social communication can be. We realized that being able to talk to others, especially during a tough time in a pandemic, can make a huge positive social impact on both ourselves and others. Even when checking in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
|
winning
|
## Inspiration
Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call.
This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies.
## What it does
DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers.
Emergency calls are aggregated onto a single platform, and filtered based on severity. Critical details such as location, time of emergency, and caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene.
Our **human-in-the-loop system** ensures that control always stays with the human operators. Dispatchers have the final say on all recommended actions, ensuring that no AI system stands alone.
## How we built it
We developed a comprehensive systems architecture design to visualize the communication flow across the different services.

We developed DispatchAI using a comprehensive tech stack:
### Frontend:
* Next.js with React for a responsive and dynamic user interface
* TailwindCSS and Shadcn for efficient, customizable styling
* Framer Motion for smooth animations
* Leaflet for interactive maps
### Backend:
* Python for server-side logic
* Twilio for handling calls
* Hume and Hume's EVI for emotion detection and understanding
* Retell for implementing a voice agent
* Google Maps geocoding API and Street View for location services
* Custom-finetuned Mistral model using our proprietary 911 call dataset
* Intel Dev Cloud for model fine-tuning and improved inference
## Challenges we ran into
* Curating a diverse 911 call dataset
* Integrating multiple APIs and services seamlessly
* Fine-tuning the Mistral model to understand and respond appropriately to emergency situations
* Balancing empathy and efficiency in AI responses
## Accomplishments that we're proud of
* Successfully fine-tuned Mistral model for emergency response scenarios
* Developed a custom 911 call dataset for training
* Integrated emotion detection to provide more empathetic responses
## Intel Dev Cloud Hackathon Submission
### Use of Intel Hardware
We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration:
* Leveraged IDC Jupyter Notebooks throughout the development process
* Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform
### Intel AI Tools/Libraries
We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project:
* Utilized Intel® Extension for PyTorch (IPEX) for model optimization
* Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds
* This represents a roughly 94% decrease in processing time, showcasing the power of Intel's AI tools
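A minimal sketch of the IPEX optimization step; the stand-in model and bfloat16 choice are illustrative, not our exact fine-tuned Mistral setup:

```python
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(512, 512)  # stand-in for the fine-tuned model
model.eval()
# ipex.optimize applies operator fusion and memory-layout optimizations
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = model(torch.randn(1, 512))
```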
### Innovation
Our project breaks new ground in emergency response technology:
* Developed the first empathetic, AI-powered dispatcher agent
* Designed to support first responders during resource-constrained situations
* Introduces a novel approach to handling emergency calls with AI assistance
### Technical Complexity
* Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud
* Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI
* Developed real-time call processing capabilities
* Built an interactive operator dashboard for data summarization and oversight
### Design and User Experience
Our design focuses on operational efficiency and user-friendliness:
* Crafted a clean, intuitive UI tailored for experienced operators
* Prioritized comprehensive data visibility for quick decision-making
* Enabled immediate response capabilities for critical situations
* Interactive Operator Map
### Impact
DispatchAI addresses a critical need in emergency services:
* Targets the 82% of understaffed call centers
* Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times)
* Potential to save lives by ensuring every emergency call is answered promptly
### Bonus Points
* Open-sourced our fine-tuned LLM on HuggingFace with a complete model card
(<https://huggingface.co/spikecodes/ai-911-operator>)
+ And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts>
* Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>)
* Promoted the project on Twitter (X) using #HackwithIntel
(<https://x.com/spikecodes/status/1804826856354725941>)
## What we learned
* How to integrate multiple technologies to create a cohesive, functional system
* The potential of AI to augment and improve critical public services
## What's next for Dispatch AI
* Expand the training dataset with more diverse emergency scenarios
* Collaborate with local emergency services for real-world testing and feedback
* Explore future integration
|
## Inspiration
Commuting can be unpleasant, especially early in the morning when you are tired and want to sleep. With a GPS-based alarm you can safely sleep on the bus/train knowing you will not miss your stop.
## What it does
A GPS-based alarm that wakes you up as you approach your stop.
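The core logic is just a proximity check on the current GPS fix. A minimal Python sketch of the idea (the app itself is Android, and the coordinates and radius here are made up):

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance between two coordinates, in metres."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

STOP = (43.4643, -80.5204)  # hypothetical destination stop
if distance_m(43.47, -80.54, *STOP) < 500:  # within 500 m of the stop
    print("Wake up!")  # the app would fire the alarm here
```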
## How we built it
Smashing through bugs until something came out
## Challenges we ran into
Android development
## Accomplishments that we're proud of
Getting it to work, learning new skills
## What we learned
Android development
## What's next for Snoozy
Amazing things
|
## Inspiration
Energy is the foundation for everyday living. Productivity from the workplace to lifestyle—sleep, nutrition, fitness, social interactions—is dependent on sufficient energy levels for each activity [1]. Various generalized interventions have been proposed to address energy levels, but currently no method has proposed a personal approach using daily schedules/habits as determinants of energy.
## What it does
Boost AI is an iOS application that uses machine learning to predict energy levels based on daily habits. Simple and user-specific questions on sleep schedule, diet, fitness, social interaction, and current energy level will be used as determinants to predict future energy level. Notifications will give the user personalized recommendations to increase energy throughout the day.
Boost AI allows you to visualize your energy trends over time, including predictions for personalized intervention based on your own lifestyle.
## How we built it
We used MATLAB and TensorFlow for our machine learning framework. The current backend utilizes a support vector machine that is trained on simulated data, based on a subject's "typical" week, with relevant data augmentation. The linear support vector machine is continually trained with each new user input, and each prediction is based on a moving window as well as historical daily trends. We have further trained an artificial neural network to make these same predictions, using TensorFlow with a Keras wrapper. In the future, this neural network will allow an individual to get accurate predictions from their first use: we will start from a network trained on a large and diverse set of individuals, then continually fine-tune their personal network for the most accurate predictions and trends. We used Sketch to visualize our iOS application prototype.
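A minimal sketch of the linear-SVM step with made-up feature columns; the real inputs are the user's daily answers on sleep, diet, fitness, and social interaction:

```python
import numpy as np
from sklearn.svm import SVC

# columns: hours slept, meals eaten, workout minutes, social interactions
X = np.array([[8, 3, 30, 2], [5, 2, 0, 0], [7, 3, 45, 3], [4, 1, 0, 1]])
y = np.array([1, 0, 1, 0])  # 1 = high energy, 0 = low energy

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[6, 2, 20, 1]]))  # predicted energy for a new day
```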
## Challenges we ran into
Although we come from the healthcare field, we had limited domain knowledge in human energy and productivity. We researched each parameter that determines energy levels.
## Accomplishments that we're proud of
Boost AI is strongly translatable to improving energy in everyday life. We’re proud of the difference it can make to the everyday lives of our users.
## What's next for Boost AI
We aim to improve our prototype by training our framework with a real world dataset. We would like to explore two main applications:
**1) Workspace.** Boost AI can be optimized in the workplace by implementing the application into workspace specific softwares. We predict that Boost AI will "boost" energy with specific individual interventions for improved productivity and output.
**2) Healthcare.** Boost AI can use health-based data such as biometric markers and researched questionnaires to predict energy. The data and trends can be used for clinically driven interventions and improvements, as well as personal use.
## References:
[1] Arnetz, BB., Broadbridge, CL., Ghosh, S. (2014) Longitudinal determinants of energy levels in knowledge workers. Journal of Occupational Environmental Medicine.
|
winning
|
## Inspiration
We got our inspiration from a video we found on Twitter showing how you can read at up to 500 WPM (normal is 300) without having to move your eyes. We decided to use AssemblyAI to create something that would allow people to read faster than normal given an audio file.
## What it does
The program takes in a video/audio file, transcribes it to text, then allows the user to speed-read it at a WPM of their choosing.
## How we built it
We built it using the Tkinter library in Python.
## Challenges we ran into
Some challenges we ran into were collaborating and working with the API. It is hard to collaborate on code if you are not actually in person with each other, so it was a challenge setting up how we would share our code. We also had some trouble with the API: because it requires multiple calls, the code becomes much more complex, which caused us great difficulty.
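To show where those multiple calls come from, here is a minimal sketch of the AssemblyAI flow: upload the file, create a transcript job, then poll until it finishes (the key and file name are placeholders):

```python
import time
import requests

API = "https://api.assemblyai.com/v2"
headers = {"authorization": "YOUR_API_KEY"}  # placeholder key

with open("lecture.mp4", "rb") as f:  # call 1: upload the media file
    upload = requests.post(f"{API}/upload", headers=headers, data=f).json()

job = requests.post(f"{API}/transcript", headers=headers,  # call 2: start job
                    json={"audio_url": upload["upload_url"]}).json()

while True:  # calls 3..n: poll for completion
    res = requests.get(f"{API}/transcript/{job['id']}", headers=headers).json()
    if res["status"] in ("completed", "error"):
        break
    time.sleep(3)
print(res.get("text"))
```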
## Accomplishments that we're proud of
We are proud of how we were able to collaborate. We thought it would be hard to easily share our code, but once we set up the github, it was smooth sailing from there.
## What's next for Speed Reader
We hope that we can expand this program into a web application to make it much more accessible.
|
## Inspiration
Our inspiration was our experience as university students at the University of Waterloo. During the pandemic, most of our lectures were held online. This resulted in us having several hours of lectures to watch each day. Many of our peers would put videos at 2x speed to get through all the lectures, but we found that this could result in us missing certain details. We wanted to build a website that could help students get through long lectures quickly.
## What it does
Using our website, you can paste the link to most audio and video file types. The website will take the link and provide you with the transcript of the audio/video you sent as well as a summary of that content. The summary includes a title for the audio/video, the synopsis, and the main takeaway.
We chose to include the transcript because the AI can miss details that you may want to make note of. The transcript allows you to quickly skim through the lecture without needing to watch the entire video. Also, a transcript doesn't include the pauses that happen during a normal lecture, accelerating how fast you can skim!
## How we built it
To start, we created wireframes using Figma. Once we decided on a general layout, we built the website using HTML, CSS, Sass, Bootstrap, and JavaScript. The AssemblyAI Speech-to-Text API handles the processing of the video/audio and returns the information required for the transcript and summary. All files are hosted in our [GitHub repository](https://github.com/ctanamas/HackTheNorth). We deployed our website using Netlify and purchased our domain name from Domain.com. The logo was created in Canva.
## Challenges we ran into
Early on we struggled with learning how to properly use the API. We were not experienced with APIs, and as a result, we found it difficult to get the correct response. Oftentimes when we tried testing our code, we simply got an error from the API. We also struggled with learning how to secure our website while using an API; learning how to hide the secret key was something we had never dealt with before.
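The usual fix is to keep the key server-side and expose a small proxy endpoint. A minimal sketch of that idea in Flask (our deployment actually used Netlify, so treat the route and variable names as illustrative):

```python
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)
API_KEY = os.environ["ASSEMBLYAI_KEY"]  # never shipped to the browser

@app.route("/api/transcript/<job_id>")
def transcript(job_id):
    # The browser calls this route; only the server knows the secret key.
    r = requests.get(f"https://api.assemblyai.com/v2/transcript/{job_id}",
                     headers={"authorization": API_KEY})
    return jsonify(r.json())
```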
## Accomplishments that we're proud of
We are proud to have a working demo of our product! We are also proud of the fact that we were able to incorporate an API into our project and make something that we will actually use in our studies! We hope other students can use our product as well!
## What we learned
We learned about how an API works. We learned about how to properly set up a request and how to process the response and incorporate it into our website. We also learned about the process of deploying a website from GitHub. Being able to take plain files and create a website that we can access on any browser was a big step forward for us!
## What's next for notetaker
In the future, we want to extend our summary feature by creating a worksheet for the user as well. The worksheet would replace key words in the summary with blanks to allow the user to test how well they know the topic. We also wanted to include relevant images in the summary study guide, but were unsure how that could be done. We want to make our website the ultimate study tool for students on a tight schedule.
|
## Overview
People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf/mute and cannot communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak.
You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio which is converted into text for the other person to read.
## How we built it
We used **OpenCV** and **TensorFlow** to build the Sign to Text functionality, using over 2500 frames to train our model. For the Text to Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend.
For the frontend, we used **React** (JS) and MaterialUI to create a visual and accessible way for users to communicate.
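A minimal sketch of the Sign-to-Text loop; the model file and label set are placeholders for our trained classifier:

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("asl_model.h5")  # hypothetical trained model
classes = ["A", "B", "C"]                        # placeholder label set

cap = cv2.VideoCapture(0)
ok, frame = cap.read()  # grab one webcam frame
if ok:
    x = cv2.resize(frame, (224, 224)).astype("float32")[None] / 255.0
    pred = model.predict(x)
    print("Detected sign:", classes[int(np.argmax(pred))])
cap.release()
```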
## Challenges we ran into
* We had to re-train our models multiple times to get them to work well enough.
* We switched from running our applications entirely on Jupyter (using Anvil) to a React App last-minute
## Accomplishments that we're proud of
* Using so many tools, languages and frameworks at once, and making them work together :D
* Submitting on time (I hope? 😬)
## What's next for SignTube
* Add more signs!
* Use AssemblyAI's real-time API for more streamlined communication
* Incorporate account functionality + storage of videos
|
losing
|
## Inspiration ⚡️
Given the ongoing effects of COVID-19, we know lots of people don't want to spend more time than necessary in a hospital. We wanted to be able to skip a large portion of the waiting process and fill out the forms ahead of time from the comfort of home, so we came up with HopiBot.
## What it does 📜
HopiBot is an accessible, easy-to-use chatbot designed to make the process of admitting patients more efficient — transforming basic in-person processes into digital ones, saving not only your time, but the time of the doctors and nurses as well. A patient uses the bot to fill out their personal information; once they submit, the bot uses the provided mobile phone number to send a text message with the current wait time until check-in at the nearest hospital. As pandemic measures begin to ease, HopiBot will allow hospitals to socially distance non-emergency patients, significantly reducing exposure and time spent around others, as people can enter the hospital at or close to their check-in time. This would also reduce the risk of exposing other hospital patients, who may be immunocompromised or otherwise vulnerable, to COVID-19 and other transmissible airborne illnesses.
## How we built it 🛠
We built our project using HTML, CSS, JS, Flask, Bootstrap, Twilio API, Google Maps API (Geocoding and Google Places), and SQLAlchemy. HTML, CSS/Bootstrap, and JS were used to create the main interface. Flask was used to create the form functions and SQL database. The Twilio API was used to send messages to the patient after submitting the form. The Google Maps API was used to send a Google Maps link within the text message designating the nearest hospital.
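A minimal sketch of the wait-time text using Twilio's Python helper (credentials, numbers, and the maps link are placeholders):

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
client.messages.create(
    to="+15551234567",      # patient's number from the form
    from_="+15557654321",   # our Twilio number
    body="HopiBot: your estimated check-in is in 25 minutes. "
         "Nearest hospital: https://maps.google.com/?q=General+Hospital",
)
```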
## Challenges we ran into ⛈
* Trying to understand and use Flask for the first time
* How to submit a form and validate at each step without refreshing the page
* Using new APIs
* Understanding how to use an SQL database from Flask
* Breaking down a complex project and building it piece by piece
## Accomplishments that we're proud of 🏅
* Getting the form to work after much deliberation of its execution
* Being able to store and retrieve data from an SQL database for the first time
* Expanding our hackathon portfolio with a completely different project theme
* Finishing the project within a tight time frame
* Using Flask, the Twilio SMS API, and the Google Maps API for the first time
## What we learned 🧠
Through this project, we were able to learn how to break a larger-scale project down into manageable tasks that could be done in a shorter time frame. We also learned how to use Flask, the Twilio API, and the Google Maps API for the first time, considering that it was very new to all of us and this was the first time we used them at all. Finally, we learned a lot about SQL databases made in Flask and how we could store and retrieve data, and even try to present it so that it could be easily read and understood.
## What's next for HopiBot ⏰
* Since we have created the user side, we would like to create a hospital side to the program that can take information from the database and present all the patients to them visually.
* We would like to have a stronger validation system for the form to prevent crashes.
* We would like to implement an algorithm that can more accurately predict a person’s waiting time by accounting for the time it would take to get to the hospital and the time a patient would spend waiting before their turn.
* We would like to create an AI that is able to analyze a patient database and able to predict wait times based on patient volume and appointment type.
* Along with a hospital side, we would like to send update messages that warns patients when they are approaching the time of their check-in.
|
## Inspiration
When visiting a clinic, two big complaints that we have are the long wait times and the necessity of using a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless.
## What it does
* Touchless is an accessible and contact-free solution for gathering form information.
* Allows users to interact with forms using voice and touchless gestures.
* Users use different gestures to answer different questions.
* Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no.
* Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated.
* Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices.
## How we built it
* Gesture and voice components are written in Python.
* The gesture component uses OpenCV and MediaPipe to map out hand joint positions, from which calculations determine hand symbols (see the sketch after this list).
* SpeechRecognition recognizes user speech
* The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises.
* We use AWS API Gateway to open a connection to a custom Lambda function, with access restricted through AWS IAM roles. The Lambda generates a secure key and sends it, along with the form data routed through Flask, to our NoSQL DynamoDB database.
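A minimal sketch of the gesture step (thumb handling omitted for brevity; the image file stands in for an OpenCV capture frame):

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
frame = cv2.imread("hand.jpg")  # placeholder input frame
res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if res.multi_hand_landmarks:
    lm = res.multi_hand_landmarks[0].landmark
    # A fingertip above the joint below it (smaller y) counts as raised.
    tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
    raised = sum(lm[t].y < lm[p].y for t, p in zip(tips, pips))
    print("Fingers raised:", raised)  # maps to a 1-4 form answer
```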
## Challenges we ran into
* Tried to set up the Cerner API for FHIR data, but had difficulty getting it working.
* As a result, we had to pivot towards using a noSQL database in AWS as our secure backend database for storing our patient data.
## Accomplishments we’re proud of
This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective.
## What we learned
We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects.
## What’s next for Touchless
In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components.
|
## Inspiration
When Shayan and Alishan came from a different university to attend DeltaHacks, the first thing we tried to do was connect them to the Wi-Fi so we could get started on our hack; however, they had issues connecting to the internet. They were also both over their data plans, so we couldn't hotspot. This catalyzed a thought process about all the times we need an internet connection but aren't able to access it, and we identified a widespread problem: people who need directions but are out of data can't access the internet to find them. Furthermore, the low-income and homeless population may not have the luxury of a data plan, yet still need a directions service. This is where Textination came in.
## What it does
Textination acts as a text chatbot that engages in an interactive conversation with the user to get them where they need to go, without the use of the internet. The user first texts Textination their current location and their desired destination; Textination then utilizes a cloud dataset, currently accessed through the Google Maps API, to find the optimal route and texts back the directions to take.
## How we built it
Coded in Python and hosted on PythonAnywhere. Implemented various APIs and libraries such as Twilio, Google Maps, Flask, and Geopy.
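A minimal sketch of the core flow: Twilio webhooks the incoming text to Flask, and we reply with Google Maps directions (the key and message parsing are simplified placeholders):

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse
import googlemaps

app = Flask(__name__)
gmaps = googlemaps.Client(key="GOOGLE_MAPS_KEY")  # placeholder key

@app.route("/sms", methods=["POST"])
def sms():
    # e.g. the user texts "Union Station to City Hall"
    origin, dest = request.form["Body"].split(" to ")
    steps = gmaps.directions(origin, dest, mode="walking")[0]["legs"][0]["steps"]
    reply = MessagingResponse()
    reply.message("\n".join(s["html_instructions"] for s in steps[:5]))
    return str(reply)
```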
## Challenges we ran into
1. One challenge that we ran into was deciding which language to use to work with the APIs. We ended up choosing Python because of its ease of use with the data and the ability to stack APIs. Furthermore, it allows Textination to switch databases and APIs easily, which can give users the most up-to-date and broadest data.
2. Only one of our teammates had substantial coding experience; the rest of us were business students with minimal programming experience. We worked around this by playing to our strengths and dividing up the work to maximize efficiency.
## Accomplishments that we're proud of
* Managing to successfully get the Flask server and Twilio API working, with no previous experience
* Stacking 4 APIs with no previous stacking experience
* Performing a quality needs assessment and detailed research into relevant data; leveraging primary and secondary research sources
## What we learned
* How to successfully bind a server to an API
* How to successfully program a back-end web app
* Effective brainstorming of solutions to social problems
## What's next for Textination - SMS Directions Chat Bot
* In the future, we want to use artificial intelligence and machine learning to identify the areas that are travelled to most often by the most people, and optimize our program's web scraping for those areas to get information to the user faster
* We will also add transit directions and bus timings into Textination
|
winning
|
## Inspiration
When attending crowded lectures or tutorials, it's fairly difficult to discern content from other ambient noise. What if a streamlined pipeline existed to isolate and amplify vocal audio while transcribing text from audio and providing general context in the form of images? Is there any way to use this technology to widen access to education? These were the questions we asked ourselves when deciding the scope of our project.
## The Stack
**Front-end :** react-native
**Back-end :** python, flask, sqlalchemy, sqlite
**AI + Pipelining Tech :** OpenAI, google-speech-recognition, scipy, asteroid, RNNs, NLP
## What it does
We built a mobile app which allows users to record video and applies an AI-powered audio processing pipeline.
**Primary use case:** Hard of hearing aid which:
1. Isolates + amplifies sound from a recorded video (pre-trained RNN model)
2. Transcribes text from isolated audio (google-speech-recognition)
3. Generates NLP context from transcription (NLP model)
4. Generates an associated image for the topic being discussed (OpenAI API)
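A minimal sketch of steps 1 and 2 of the pipeline above (the Asteroid checkpoint named here is one public model, which expects 8 kHz mono audio; file names are placeholders):

```python
import soundfile as sf
import torch
import speech_recognition as sr
from asteroid.models import BaseModel

sep = BaseModel.from_pretrained("mpariente/ConvTasNet_WHAM!_sepclean")
mix, rate = sf.read("classroom.wav", dtype="float32")  # mono mixture
with torch.no_grad():
    est = sep(torch.tensor(mix)[None])  # (batch, n_sources, time)
sf.write("voice.wav", est[0, 0].numpy(), rate)

r = sr.Recognizer()
with sr.AudioFile("voice.wav") as source:
    print(r.recognize_google(r.record(source)))  # transcription
```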
## How we built it
* Frameworked UI on Figma
* Started building UI using react-native
* Researched models for implementation
* Implemented neural networks and APIs
* Testing, Testing, Testing
## Challenges we ran into
Choosing the optimal model for each processing step required careful planning. Algorithm design was also important, as responses had to be sent back to the mobile device as fast as possible to improve app usability.
## Accomplishments that we're proud of + What we learned
* Very high accuracy achieved for transcription, NLP context, and .wav isolation
* Efficient UI development
* Effective use of each team member's strengths
## What's next for murmr
* Improve AI pipeline processing, modifying algorithms to decrease computation time
* Include multi-processing to return content faster
* Integrate user-interviews to improve usability + generally focus more on usability
|
## Inspiration
Our vision is to revolutionize the way students learn.
## What it does
It is a mobile Android application that allows you to transcribe the words and sentences from a person speaking in real time.
## How we built it
We used the Android Studio IDE to build the application and integrated the Cloud Speech-to-Text API to build Hear My Prof.
## Challenges we ran into
Setting up the entire work environment with Android Studio took a long time because the network was relatively slow.
## Accomplishments that we're proud of
We are proud of finishing it on time with a beautiful UI
## What we learned
We all learned how to use Android Studio, Kotlin, Git/GitHub, and the Cloud API
## What's next for Hear My Prof
* Text to Speech
* Computer vision to allow transcribing written text from a camera shot
* Feed API output into a Natural Language Engine for grammar and context analysis.
|
## **Education Track**
## Inspiration
We were brainstorming ideas of how to best fit this hackathon into our schedule and still be prepared for our exams the following Monday. We knew that we wanted something where we could learn new skills but also something useful in our daily lives. Specifically, we wanted to create a tool that could help transcribe our lectures and combine them into easy-to-access formats.
## What it does
This tool allows users to upload video files or links and get a PDF transcript. It gives users who prefer the big ideas an easy way to save time and skip lectures, and it also acts as a resource for open-note exams.
## How we built it
The website is a React.js application, while the backend is made up of Firebase for the database and an Express API that accesses the Google Speech-to-Text API to help with our transcribing.
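A minimal sketch of the transcription call, shown in Python for brevity (our Express API makes the equivalent call from Node; the audio file is a placeholder):

```python
from google.cloud import speech

client = speech.SpeechClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
with open("lecture.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())
config = speech.RecognitionConfig(language_code="en-US")

# For hour-long lectures you would use long_running_recognize instead.
response = client.recognize(config=config, audio=audio)
print(" ".join(r.alternatives[0].transcript for r in response.results))
```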
## Challenges we ran into
Both of us being novices at using APIs such as Google's, we spent quite a bit of time troubleshooting and figuring out how they worked. This left us with less time than desired, but we were able to complete a web app that captures the big picture of our idea.
## Accomplishments that we're proud of
We are both proud that we were able to learn so many new skills in so little time and are excited to continue this project and create new ones as well. And most importantly we had fun!
## What we learned
We learned quite a lot this weekend.
* Google Cloud APIs
* Firebase
* WikiRaces (How to get from Tigers to Microphones in record time)
* Having Fun!
## What's next for EZNotes
We hope to finish out the app's other features as well as create a summarizing tool using AI so that the notes are even more useful.
|
partial
|
## What it does
Take a picture, get a 3D print of it!
## Challenges we ran into
The 3D printers going poof on the prints.
## How we built it
* An AI model transforms the picture into depth data; post-processing then turns it into a printable 3D model (see the sketch after this list). And of course, real 3D printing.
* MASV to transfer the 3D model files seamlessly.
* RBC reward system to incentivize users to engage more.
* Cohere to edit image prompts to be culturally appropriate for Flux to generate images.
* Groq to automatically edit the 3D models via LLMs.
* VoiceFlow to create an AI agent that guides the user through the product.
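The depth step can be sketched with MiDaS, a public monocular depth model; treat the model choice and file names as illustrative rather than our exact setup:

```python
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    depth = midas(transform(img)).squeeze().numpy()  # relative depth map
# Post-processing would turn this depth map into a printable height-field mesh.
```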
|
## Inspiration
Ever felt a shock wondering where your monthly salary or pocket money went by the end of the month? When did you spend it? Where did you spend all of it? And why did you spend it? How do you save and not make the same mistake again?
There has been endless progress and technical advancement in how we handle day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, as well as in financial instruments that are crucial to creating one's wealth, such as stocks and bonds. But all of these amazing tools cater to a very small demographic. 68% of the world's population remains financially illiterate, and most schools do not discuss personal finance in their curriculum. To enable these high-end technologies to reach a larger audience, we need to work at the ground level and attack the fundamental blocks around finance in people's mindsets.
We want to use technology to elevate the world's consciousness around their personal finance.
## What it does
Where’s My Money is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without suffering real losses, so that you can make wiser decisions in real life.
It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's, gamified manner. You start as a person who earns $1000 monthly; as you complete each module, you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with some bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth.
Each quiz captures data on how you approach finance: does it lean more toward saving or toward spending?
## How we built it
The project was not simple at all. Keeping in mind the various components of the app, we first created a fundamental architecture for how the app would function - shorturl.at/cdlxE
Then we took it to Figma where we brainstormed and completed design flows for our prototype -
Then we started working on the App-
**Frontend**
* React.
**Backend**
* Authentication: Auth0
* Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase
* Stock price changes: based on real-time prices using a free-tier API (Alpha Vantage/Polygon)
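A minimal sketch of a real-time quote lookup against Alpha Vantage's free tier (the `demo` key only works for IBM; substitute your own):

```python
import requests

r = requests.get("https://www.alphavantage.co/query", params={
    "function": "GLOBAL_QUOTE",
    "symbol": "IBM",
    "apikey": "demo",  # placeholder key
})
quote = r.json()["Global Quote"]
print(quote["05. price"])  # latest traded price
```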
## Challenges we ran into
The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.
## What we learned
We researched about the condition of financial literacy in people, which helped us to make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.
## What's next for Where’s my money?
We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc.
Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
|
## Inspiration:
We were inspired by the inconvenience faced by novice artists creating large murals, who struggle to use reference images to guide their work. It can also give a confidence boost to young artists looking for a simple way to replicate references.
## What it does
An **AR** and **CV** based artist's aid that enables easy image tracing and color blocking guides (almost like "paint-by-numbers"!)
It achieves this by allowing the user to upload an image of their choosing, which is then processed into its traceable outlines and dominant colors. These images are then displayed in the real world on a surface of the artist's choosing, such as paper or a wall.
## How we built it
The base for the image processing functionality (edge detection and color blocking) was **Python, OpenCV, NumPy** and the **K-means** clustering algorithm. The image processing module was hosted on **Firebase**.
The end-user experience was driven using **Unity**. The user uploads an image to the app. The image is ported to Firebase, which then returns the generated images. We used the Unity engine along with **ARCore** to implement surface detection and virtually position the images in the real world. The UI was also designed through packages from Unity.
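A minimal sketch of the two processing steps, Canny edges for tracing and k-means for color blocking (k=5 is an arbitrary choice):

```python
import cv2
import numpy as np

img = cv2.imread("reference.jpg")
outline = cv2.Canny(img, 100, 200)  # traceable edge map

pixels = img.reshape(-1, 3).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, 5, None, criteria, 3,
                                cv2.KMEANS_RANDOM_CENTERS)
blocked = centers.astype(np.uint8)[labels.flatten()].reshape(img.shape)

cv2.imwrite("outline.png", outline)   # tracing guide
cv2.imwrite("blocked.png", blocked)   # paint-by-numbers style color guide
```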
## Challenges we ran into
Our biggest challenge was the experience level of our team with the tech stack we chose to use. Since we were all new to Unity, we faced several bugs along the way and had to slowly learn our way through the project.
## Accomplishments that we're proud of
We are very excited to have demonstrated the accumulation of our image processing knowledge and to make contributions to Git.
## What we learned
We learned that our aptitude lies at a lower level, in robust languages like C++, as opposed to pre-built systems that assist development, such as Unity. In the future, we may find easier success building projects that refine our current tech stacks as opposed to expanding them.
## What's next for [AR]t
After Hack the North, we intend to continue the project using C++ as the base for AR, which is more familiar to our team and robust.
|
winning
|
## 💡 INSPIRATION 💡
We wanted to solve a pressing global problem. While we were going through the brainstorming phase, a member of our team read an article titled “Recognizing Fake News Now a Required Subject in California Schools”, which inspired the idea of a gamified app to discern fake news. In today’s digital age, the sheer volume of information at our fingertips can be overwhelming. With the rise of social media and the constant flow of news, it has become increasingly challenging to distinguish between credible information and misinformation. Fake news not only spreads rapidly but also has the potential to influence public opinion, incite unrest, and undermine trust in legitimate sources, ultimately threatening our democracy. We built Blindspot in order to solve this problem by training young adults to distinguish between fake and real news.
## ⚙️ WHAT IT DOES ⚙️
Blindspot is a game that presents the user with a series of news articles-- some articles are fake, others are real. Articles are presented one at a time, and each time, the player's goal is to determine whether the article they are reading is fake or real. As the user advances in this game, fake articles will feel increasingly real, making the game more difficult.
## 🛠️ HOW WE BUILT IT 🛠️
Our frontend is built with Next.js, React, and TypeScript. In the backend, we connected a Python Flask API with OpenAI’s GPT-4o using LangChain to generate fake articles to show to players. We also use the NewsAPI to fetch real articles to provide a mix of real and fake.
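A minimal sketch of the generation step with LangChain and GPT-4o (the prompt is illustrative, not our engineered one):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o", temperature=0.9)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You write realistic-sounding but fictional news articles "
               "for a media-literacy training game."),
    ("human", "Write a short fake article about {topic}."),
])
article = (prompt | llm).invoke({"topic": "a new transit policy"}).content
print(article)
```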
## 😣 CHALLENGES WE RAN INTO 😣
It turns out, it's incredibly difficult to find a dataset of high-quality recent news articles. With the recent push in AI, massive proprietary datasets of high-quality content are heavily paywalled, and even APIs are heavily rate-limited. As such, we had to get creative to find content that is convincing.
We also found that modern LLMs are safeguarded pretty heavily, so convincing one to generate high-quality fake news was a challenge. Through much prompt engineering, we were able to get the models to generate very realistic & convincing fake news (scary!), where oftentimes even *our own team* had a <80% accuracy rate!
## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉
Initially, we had several ideas that we wanted to implement, but we were able to combine them in a way that made our final idea (Blindspot) better than any of the initial ones.
Many of our team members were unfamiliar with TypeScript, but we were able to ramp up quickly enough in order to help with the frontend.
Despite the time constraints, we are also proud that we were able to successfully connect the backend to the frontend in a short amount of time.
The game is also addictively fun/hilarious to play! Our team enjoyed plenty of good laughs while building the app.
## 📚 WHAT WE LEARNED 📚
We learned to collaborate effectively as a team. Before this occasion, we were strangers, but within a day, we merged our varied ideas, allocated tasks based on individual strengths, implemented frontend and backend code, and integrated them to develop an MVP.
A few team members were new to LLM models. Through this hackathon, we discovered the remarkable potential and user-friendliness of these models in crafting exceptional products.
## ⏭️ WHATS NEXT ⏭️
Blindspot is set to revolutionize how users engage with news and enhance their media literacy. We plan to introduce daily or weekly challenges to keep users engaged and returning to the app regularly. To further involve our community, we’ll allow users to submit articles they encounter for verification and inclusion in the game, making the experience more interactive and user-driven. A multiplayer mode will enable users to compete in real-time to identify fake news, adding a competitive edge to the learning process. We’ll also provide advanced analytics, giving users detailed insights into their performance and highlighting areas for improvement. Additionally, we’ll include in-depth educational modules on media literacy, the psychology of misinformation, and fact-checking techniques, ensuring users are equipped with the knowledge to navigate the complex media landscape effectively.
|
Live Demo Link: <https://www.youtube.com/live/I5dP9mbnx4M?si=ESRjp7SjMIVj9ACF&t=5959>
## Inspiration
We all fall victim to impulse buying and online shopping sprees... especially in the first few weeks of university. A simple budgeting tool or promising ourselves to spend less just doesn't work anymore. Sometimes we need someone (or a few someones) to physically stop us from clicking the BUY NOW button and talk us through our purchase based on our budget and previous spending. By drawing on the courtroom drama of legal battles, we infuse an element of fun and accountability into doing just this.
## What it does
Dime Defender is a Chrome extension built to help you control your online spending according to your needs. Whenever the extension detects that you are on a Shopify or Amazon checkout page, it will lock the BUY NOW button and take you to court! You'll be interrupted by two lawyers: a defence attorney explaining why you should steer away from the purchase 😒 and a prosecutor explaining why there are still some benefits 😏. By giving you a detailed analysis of whether you should actually buy based on your budget and previous spending in the month, Dime Defender allows you to make informed decisions by making you consider both sides before a purchase.
The lawyers are powered by VoiceFlow using their dialog manager API as well as Chat-GPT. They have live information regarding the descriptions and prices of the items in your cart, as well as your monthly budget, which can be easily set in the extension. Instead of just saying no, we believe the detailed discussion will allow users to reflect and make genuine changes to their spending patterns while reducing impulse buys.
## How we built it
We created the Dime Defender Chrome extension and frontend using Svelte, Plasma, and Node.js for an interactive and attractive user interface. The Chrome extension then makes calls through AWS API Gateway, connecting the extension to AWS Lambda serverless functions that process queries, create outputs, and make secure, protected API calls to both VoiceFlow (to source the conversational data) and ElevenLabs (to get our custom text-to-speech voice recordings). Using a low-latency pipeline, along with AWS RDS/EC2 for storage, all our data is quickly captured back to our frontend and displayed to the user through a wonderful interface whenever they attempt to check out on any Shopify or Amazon page.
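A minimal sketch of the Lambda-side hop to VoiceFlow's Dialog Manager API (the payload fields and environment variable names are placeholders):

```python
import json
import os
import requests

def handler(event, context):
    body = json.loads(event["body"])  # cart items + budget from the extension
    r = requests.post(
        "https://general-runtime.voiceflow.com/state/user/"
        f"{body['user_id']}/interact",
        headers={"Authorization": os.environ["VOICEFLOW_KEY"]},
        json={"request": {"type": "text", "payload": body["cart_summary"]}},
    )
    return {"statusCode": 200, "body": json.dumps(r.json())}
```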
## Challenges we ran into
Using Chrome extensions poses the challenge of making calls to serverless functions effectively and making secure API calls using secret api\_keys. We had to plan a system of Lambda functions, API gateways, and code built into VoiceFlow to create a smooth, low-latency system that lets the Chrome extension make the correct API calls without compromising our api\_keys. Additionally, making our VoiceFlow AIs argue with each other in the proper tone was very difficult. Through extensive prompt engineering and thinking, we finally reached a point with an effective and enjoyable user experience. We also faced lots of issues with debugging animation sprites and text-to-speech voiceovers, with audio overlapping and high-latency API calls. However, we were able to fix all these problems and present a well-polished final product.
## Accomplishments that we're proud of
Something that we are very proud of is our natural conversation flow within the extension as well as the different lawyers having unique personalities which are quite evident after using our extension. Having your cart cross-examined by 2 AI lawyers is something we believe to be extremely unique, and we hope that users will appreciate it.
## What we learned
We had to create an architecture for our distributed system and learned about connecting various technologies to reap the benefits of each one while using them to cover one another's weaknesses.
Also.....
Don't eat the 6.8 million Scoville hot sauce if you want to code.
## What's next for Dime Defender
The next thing we want to add to Dime Defender is the ability to work on even more e-commerce and retail sites and go beyond just Shopify and Amazon. We believe that Dime Defender can make a genuine impact helping people curb excessive online shopping tendencies and help people budget better overall.
|
## Inspiration
When reading news articles, we're aware that the writer has bias that affects the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?"
## What it does
*News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well!
## How we built it
First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using KL-Sum summarization, aggregating what is reported across all outlets to form a summary of these resources. The summary is about 5 sentences long, easily digested by the user, with quick access to the resources that were used to create it!
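A minimal sketch of the KL-Sum step using the `sumy` library; treat this as one way to run KL-Sum rather than our exact code, and the input string as a placeholder for the aggregated articles:

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.kl import KLSummarizer

combined = "..."  # all ~15 articles on one topic, concatenated
parser = PlaintextParser.from_string(combined, Tokenizer("english"))
summary = KLSummarizer()(parser.document, sentences_count=5)
print(" ".join(str(sentence) for sentence in summary))
```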
## Challenges we ran into
We were really nervous about taking on an NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies we hadn't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application whose content varied so drastically in size and availability.
## Accomplishments that we're proud of
We’re so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that, we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact.
## What we learned
We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience!
## What's next for News Report
We’d love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that we also want to be able to analyze and remove fake news sources from our spider's crawl.
|
winning
|
## Inspiration:
Many people may find it difficult to understand the stock market, a complex system where ordinary people can take part in a company's success. This applies to those newly entering the adult world, as well as many others that haven't had the opportunity to learn. We want not only to introduce people to the importance of the stock market, but also to teach them the importance of saving money. When we heard that 44% of Americans have less than $400 in emergency savings, we felt compelled to take this mission to heart, with the increasing volatility of the world and of our environment today.
## What it does
With Prophet Profit, ordinary people can invest easily in the stock market. There's only one step - to input the amount of money you wish to invest. Using data and rankings provided by Goldman Sachs, we automatically invest the user's money for them. Users can track their investments in relation to market indicators such as the S&P 500, as well as see their progress toward different goals with physical value, such as being able to purchase an electric generator for times of emergency need.
## How we built it
Our front end is entirely built on HTML and CSS. This is a neat one-page scroller that allows the user to navigate by simply scrolling or using the navigation bar at the top. Our back end is written in JavaScript, integrating many APIs and services.
APIs that we used:
- Goldman Sachs Marquee
- IEX Cloud
Additional Resources:
- Yahoo Finance
## Challenges we ran into
The biggest challenge was the limited scope of the Goldman Sachs Marquee GIR Factor Profile Percentiles Mini API that we wanted to use. Although the data provided was high quality and useful, we had difficulties trying to put together a portfolio with the small amount of data provided. For many of us, it was also our first time using many of the tools and technologies that we employed in our project.
## Accomplishments that we're proud of
We're really, really proud that we were able to finish on time to the best of our abilities!
## What we learned
Through exploring financial APIs deeply, we not only learned about using the APIs, but also more about the financial world as a whole. We're glad to have had this opportunity to learn skills and gain knowledge outside the fields we typically work in.
## What's next for Prophet Profit
We'd love to use data for the entire stock market with present-day numbers instead of the historical data that we were limited to. This would improve our analyses and allow us to make suggestions to users in real-time. If this product were to realize, we'd need the ability to handle and trade with large amounts of money as well.
|
## Inspiration
All three of us working on the project love traveling and want to fly to new places in the world on a budget. By combining Google computer vision to recognize interesting places to go with JetBlue's flight deals to find the best flights, we hope we've created a product that college students can use to explore the world!
## What it does
*Envision* parses each picture on the websites the user visits and predicts a destination airport from entities extracted from the image through computer vision. It finds the best JetBlue flight deal based on current location, price, and similarity in destination, and returns the best-deal recommendation in a hover-over Chrome extension. It links to a website that shows more information about the flight, including flight type, fare type, etc., as well as pictures of the destination found through the Google Places API. Travel effortlessly with *Envision* today! :))
## How we built it
We built *Envision* using JetBlue's Deals data and the Google Cloud Platform (Google Vision API and Google Places API). First, we scraped images from Google Image Search for every JetBlue airport location. Running every image through the Google Vision API, we received a list of entities found in the most common images. We then created a Chrome extension that tracks images on every webpage; each picture is translated through computer vision into an entity list, which is used to find the most similar destination to recommend. Using JetBlue's Deals data, we found the best-deal flight from the closest airport based on current location, routed to our target airport destination, combining Google Places Nearby Search, Text Search, and Place Details to find the most suitable flight to take. Using Heroku and Flask to host our web services, we created a workflow that leads to a React website with more information about the recommended flights and photos of the destination similar to the original images on the browsed website.
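A minimal sketch of the entity-extraction step with Google Cloud Vision label detection (the image file stands in for a scraped or browsed image):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
with open("webpage_image.jpg", "rb") as f:
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
entities = [label.description for label in labels]
print(entities)  # e.g. ["Beach", "Palm tree"], matched against airports
```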
## Challenges we ran into
There are many steps in our processing pipeline: receiving images from the Chrome extension, parsing each image into an entity list with Google Cloud Vision, finding a recommended location and the best JetBlue flight to that region, and showing similar images from the area in a separate website linked from the original. Connecting every part together through endpoints took a lot of figuring out near the end!
## Accomplishments that we're proud of
Creating a working product!
## What we learned
Lots of web & API calling.
## What's next for Envision
Creating a more user intuitive interface for *Envision* on the chrome extension as well as the website.
|
## Inspiration
There are a growing number of smart posture sensors being developed to address back and neck pain. These wearable products showcase improved posture during phone use as a key success metric.
The downside to this, though, is the high cost of buying the hardware and getting used to wearing it in daily life. (Will you pay $50 to wear something on your neck/back and remember to put it back on every day after a shower?)
This got us thinking... What if we could improve posture without an expensive wearable? Why not use the phone itself for both posture recognition and user intervention?
## Our Solution
**Simple Posture detection**
We found that neck angle can be approximated by phone orientation! Most users keep their phone parallel to their face.
**Effortless intervention**
By adjusting screen brightness based on posture, we're able to create a natural feedback loop to improve posture! Users will subconsciously adjust the orientation of their phone to better view content, thus improving their posture.
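A minimal sketch of the mapping from accelerometer reading to screen brightness; the thresholds are made up, and a real app would use the platform's sensor and brightness APIs:

```python
import math

def brightness_for(ax: float, ay: float, az: float) -> float:
    """Map phone pitch (degrees from vertical) to a 0-1 brightness level."""
    pitch = math.degrees(math.atan2(az, math.hypot(ax, ay)))
    if pitch < 20:  # roughly upright: phone at face level, good posture
        return 1.0
    return max(0.2, 1.0 - (pitch - 20) / 60)  # dim as the user slouches

print(brightness_for(0.0, 9.5, 2.5))  # near-vertical phone: full brightness
```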

|
partial
|
## Inspiration
In the rapidly evolving world of healthcare, the need for personalized and efficient data management has never been more crucial. Our team member Vishal can back this up: he held an internship at the BC Children's Hospital for almost a year. One thing he noticed was that the program used to display medical data, Cerner, was incredibly inefficient. The program essentially just displayed all of the patient's raw data directly, as long, disorganized forms scattered across the screen, with doctors having to click and search through multiple tabs simply to see the same two data points they always check. Curious about this, our team interviewed some doctors, and they too complained about the inefficiency of the patient data display. Learning to use this system is the bane of new doctors, much less getting used to it. In fact, Vishal spent almost half of his entire internship at the hospital simply helping to create a more specific UI dashboard for doctors to decide sedation amounts.
Because of this, we sought to create a new system that effectively streamlines the display and processing of medical information files by allowing doctors to tailor the display to their specific needs. Using the standardized patient information file format FHIR, used by both the US and Canada, we created FocusFHIR.
## What it does
FocusFHIR is a specialized tool designed for healthcare professionals, enabling them to tailor applications to their specific needs. In the medical field, FHIR-format files contain all the health and medical information of a patient. As previously mentioned, this information is usually displayed directly in the exact same format, regardless of the specialty of the doctor, leading to a highly convoluted process of the doctor manually finding the relevant patient data in the haystacks of information presented. So much so that many hospitals opted to create their own specialized programs to display the specific information needed, spending much time and many resources on development.
However, with FocusFHIR, doctors can now efficiently create customized applications that prioritize the essential information for their specialty, streamlining their workflow without **any** coding knowledge.
For instance, a cardiologist is typically most concerned with information such as blood pressure and the results of previous heart diagnostics. Usually, the doctor has to constantly go through multiple menus to look for these two data points amid otherwise inessential information, such as glasses prescriptions or immunization records. With FocusFHIR, the cardiologist can create their own, more streamlined application dashboard. Simply by selecting the data they need to see, FocusFHIR will generate an application that processes the FHIR files according only to what the doctor needs, using "SMART on FHIR" app-flow security in its data retrieval, to the same standard as industry leaders such as the Apple Watch.
Our platform is not just another healthcare data management system. It’s a tool that puts the power of data customization in the hands of those who know their needs best - the healthcare professionals themselves.
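To make the idea concrete, here is a minimal sketch of pulling just a cardiologist's selection (blood pressure) out of a FHIR Bundle of Observation resources; LOINC 85354-9 is the standard blood-pressure panel code, and the bundle file is assumed:

```python
import json

with open("patient_bundle.json") as f:  # assumed FHIR Bundle JSON
    bundle = json.load(f)

for entry in bundle.get("entry", []):
    res = entry["resource"]
    if res.get("resourceType") != "Observation":
        continue
    codes = [c.get("code") for c in res.get("code", {}).get("coding", [])]
    if "85354-9" in codes:  # LOINC: blood pressure panel
        for comp in res.get("component", []):
            q = comp["valueQuantity"]
            name = comp["code"]["coding"][0]["display"]
            print(name, q["value"], q["unit"])  # e.g. "Systolic ... 120 mmHg"
```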
## How we built it
FocusFHIR was crafted using a diverse tech stack to ensure a robust and user-friendly platform. Our frontend development was powered by JavaScript and React, providing an interactive and dynamic user interface. We employed Material-UI for design components and leveraged Tailwind CSS to enhance styling and responsiveness.
On the backend, we utilized Express, a Node.js framework, to handle server-side logic. SQL served as our database management system, enabling efficient storage and retrieval of medical data. AWS played a crucial role in hosting and deploying our application, providing a scalable and reliable infrastructure.
For seamless communication between the frontend and backend, we implemented RESTful APIs, utilizing API Gateway to manage and secure these interactions. The entire design and prototyping process was streamlined using Figma, ensuring a cohesive and visually appealing user experience.
In summary, our development journey involved a fusion of JavaScript, React, SQL, AWS, REST, API Gateway, Tailwind, Material-UI, CSS, HTML, Express, and Figma, allowing us to create FocusFHIR with a balance of functionality and aesthetics.
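To make the targeted-retrieval idea concrete, here is a minimal Python sketch of pulling only a selected FHIR resource, such as a patient's latest blood pressure panel (the server URL, patient ID, and exact resource handling are illustrative placeholders, not our production code):

```python
import requests

FHIR_BASE = "https://example.org/fhir"  # placeholder FHIR server
BP_PANEL = "http://loinc.org|85354-9"   # LOINC code for a blood pressure panel

def latest_blood_pressure(patient_id: str) -> dict:
    """Fetch only the Observation a specialist selected, ignoring the rest."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "code": BP_PANEL,
                "_sort": "-date", "_count": 1},
        timeout=10,
    )
    resp.raise_for_status()
    reading = {}
    for entry in resp.json().get("entry", []):
        for comp in entry["resource"].get("component", []):
            label = comp["code"]["coding"][0]["display"]
            value = comp["valueQuantity"]
            reading[label] = f'{value["value"]} {value["unit"]}'
    return reading  # e.g. {"Systolic blood pressure": "120 mmHg", ...}
```

A generated dashboard would issue one such targeted query per field the doctor selected, instead of rendering the entire record.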
## Challenges we ran into
One significant challenge we faced was initially struggling to interpret the FHIR file format accurately. However, we persevered and overcame this obstacle through a combination of collaborative efforts, trial and error, and continuous learning. Another challenge was the temptation to change our topic due to uncertainties, but our commitment to the idea prevailed, leading to a successful outcome.
## Accomplishments that we're proud of
We take pride in successfully creating FocusFHIR, a tool that empowers healthcare professionals to customize their applications without any coding knowledge. Our team's ability to stick to the original idea, overcome challenges, and deliver a functional solution is a notable accomplishment. Additionally, we're proud of integrating the SMART on FHIR security measures, ensuring data privacy and compliance with industry standards.
## What we learned
Throughout the development process, our team gained valuable insights into healthcare data management, the FHIR file format, and the importance of adaptability in problem-solving. We learned to navigate challenges collaboratively, enhancing our technical skills and deepening our understanding of the healthcare IT landscape.
## What's next for FocusFHIR
Looking ahead, we plan to further enhance FocusFHIR by incorporating machine learning algorithms to provide predictive analytics. This will enable healthcare professionals to anticipate patient needs and streamline decision-making. We also aim to collaborate with healthcare institutions for real-world testing and refinement, ensuring that FocusFHIR meets the evolving demands of the healthcare industry.
|
## Inspiration
In recent years, the advancement of AI technology has revolutionized the landscape of many sectors. Our team is inspired by the popular ChatGPT technology and wanted to use it to break down education barriers for kids, hence promoting education equality.
## What it does
KidsPedia is an online encyclopedia for children. Leveraging OpenAI's ChatGPT technology, KidsPedia provides simple, easy-to-understand answers with metaphors to kids' questions. From apples to the theory of relativity, KidsPedia explains them all! To enhance the searching experience, KidsPedia includes a read-out function that reads answers to kids' questions aloud. To speed up the search process, KidsPedia also stores search records in a database.
## How we built it
* React for building frontend user interface
* Express.js for building the backend web server
* PostgreSQL database for data persistence
* Public RESTful API of OpenAI to generate explanation of keywords
* Microsoft Azure Cognitive Services for performing the text-to-speech feature
## Challenges we ran into
* Long response time and uncertainty when calling RESTful API of OpenAI
* CORS issue when connecting frontend to backend
* Installing Docker and Docker compose to local machine
## Accomplishments that we're proud of
* Built a full stack web application that could explain concepts using easy-to-understand wordings
* Used a database for caching previous responses from the OpenAI API, to shorten the loading time when a user makes a search on KidsPedia
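A minimal sketch of that cache-then-call flow (illustrated in Python for brevity, while the real backend is Express.js with PostgreSQL; the model and prompt are placeholder choices):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
cache = {}         # stand-in for the PostgreSQL search-records table

def explain_for_kids(keyword: str) -> str:
    if keyword in cache:                    # cache hit: skip the slow API call
        return cache[keyword]
    resp = client.chat.completions.create(  # model and prompt are illustrative
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Explain '{keyword}' to a child, using a metaphor."}],
    )
    answer = resp.choices[0].message.content
    cache[keyword] = answer                 # repeat searches now return instantly
    return answer
```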
## What we learned
* Collaborating using Git and GitHub
* Using React hooks when building the frontend
* Making HTTP requests using Postman during the development stage
* Implementing good RESTful API design when building the backend
## What's next for KidsPedia
* Develop Android & iOS app versions of KidsPedia so kids can use it on tablets, which is more user-friendly to them compared to using a mouse and keyboard
* Use an algorithm to suggest relevant concepts when a user searches for a keyword on KidsPedia
* Enhance the UI of KidsPedia to be more attractive to kids (e.g. more colorful and animated effects)
|
## Inspiration
While we were brainstorming ideas, we realized that two of our teammates are international students from India, also from the University of Waterloo. Their inspiration for this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on your socioeconomic status, we decided on creating a healthcare kiosk that can be used by people in developing nations. By designing an interface that focuses heavily on images, it can be understood by those who are illiterate, as is the case in many developing nations, and can bypass language barriers. This application is the perfect combination of all of our interests and allows us to use tech for social good by improving accessibility in the healthcare industry.
## What it does
Our service, Medi-Stand, is targeted towards residents of regions who will have the opportunity to monitor their health through regular self-administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to society, and to build a more productive society. Healthcare reforms are becoming more and more necessary for third-world nations to progress economically and move towards developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections by integrating systems that streamline this process through the creation of a database and the elimination of all paperwork, making the entire process smoother for both patients and doctors. This service will be available to patients through kiosks present near local communities to save their time and keep their health in check. The first time users visit a government-run healthcare facility, they can create a profile and upload health data that is currently on paper or scattered in emails all over the interwebs. After this information is inputted manually into the database for the first time, we can access it later using the system we've developed. Over time, the data can be inputted automatically using sensors on the kiosk and by the doctor during consultations; but this depends on 100% compliance.
## How I built it
In terms of the UX/UI, this was designed using Sketch. Beginning with the creation of mock-ups on various sheets of paper, 2 members of the team brainstormed customer requirements for a healthcare system of this magnitude and what features we would be able to implement in a short period of time. After hours of deliberation and finding ways to present this, we decided to create a simple interface with 6 different screens that a user would be faced with. After choosing basic icons, a font that could be understood by those with dyslexia, and accessible colours (i.e. those that can be distinguished even by the colour blind), we had successfully created a user interface that could be easily understood by a large population.
In terms of developing the backend, we wanted to create the doctor's side of the app so that they could access patient information. It was written in Xcode and connects to a Firebase database that holds the patient's information, which it simply displays visually on an iPhone emulator. The database entries were fetched in JSON notation using requests.
In terms of the Arduino hardware, we used a Grove temperature sensor V1.2 along with a Grove base shield to read the values from the sensor and display them on the screen. The device has a detectable range of -40 to 150 C and an accuracy of ±1.5 C.
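For reference, converting the sensor's raw 10-bit ADC reading into degrees Celsius uses the thermistor's B-parameter equation; here is a sketch in Python (the B = 4275 and R0 = 100 kΩ constants come from Seeed's published example for this sensor, and the actual device runs this math in Arduino C):

```python
import math

B = 4275.0     # thermistor B constant from Seeed's Grove v1.2 example
R0 = 100000.0  # thermistor resistance at 25 C (ohms)

def grove_temp_celsius(adc_value: int) -> float:
    """Convert a 10-bit ADC reading (1-1023) to degrees Celsius."""
    r = (1023.0 / adc_value - 1.0) * R0          # thermistor resistance now
    inv_t = math.log(r / R0) / B + 1.0 / 298.15  # reciprocal temperature (1/K)
    return 1.0 / inv_t - 273.15                  # back to Celsius

print(grove_temp_celsius(512))  # roughly 25 C near mid-scale
```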
## Challenges I ran into
When designing the product, one of the challenges we chose to tackle was an accessibility challenge. We had trouble understanding how we could make a healthcare product more accessible. Oftentimes, healthcare products exist from the doctor's side, and the patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectations. We wanted to allow both sides of this interaction to understand what was happening, which is where the app came in. After speaking to our teammates, they made it clear that many people from lower-income households in a developing nation such as India are not able to access hospitals due to the high costs, and cannot use other sources to obtain this information due to accessibility issues. I spent lots of time researching how to make this a user-friendly app and what principles other designers had incorporated into their apps to make them accessible. By doing so, we lost lots of time focusing more on accessibility than overall design. Though we adhered to the challenge requirements, this may have come at the cost of a more positive user experience.
## Accomplishments that I'm proud of
For half the team, this was their first Hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of.
We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch; especially the fact that we learned how to link and use transactions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could demo on our phones was a pretty great feat!
## What I learned
We learned the basics of front-end and back-end development as well as how to make designs more accessible.
## What's next for MediStand
Integrate the various features of this prototype.
How can we make this a global hack?
MediStand is a private company that can begin to sell its software to the governments (as these are the people who focus on providing healthcare)
Finding more ways to make this product more accessible
|
losing
|
# Babble: PennAppsXVIII
PennApps Project Fall 2018
## Babble: Offline, Self-Propagating Messaging for Low-Connectivity Areas
Babble is the world's first and only chat platform that is able to be installed, set up, and used 100% offline. This platform has a wide variety of use cases, such as use in communities with limited internet access like North Korea, Cuba, and Somalia. Additionally, this platform would be able to maintain communications in disaster situations where internet infrastructure is damaged or sabotaged, e.g. warzones, natural disasters, etc.
### Demo Video
See our project in action here: <http://bit.ly/BabbleDemo>
[Watch the demo on YouTube](http://www.youtube.com/watch?v=M5dz9_pf2pU)
## Offline Install & Setup
Babble (a zipped APK) is able to be sent from one user to another via Android Beam. From there it is able to be installed. This allows any user to install the app just by tapping their phone to that of another user. This can be done 100% offline.
## Offline Send
All Babble users connect to all nearby devices via a localized mesh network created using the Android Nearby Connections API. This allows messages to be sent directly from device to device via m-to-n peer-to-peer connections, as well as daisy-chained from peer to peer to ... to peer to peer.
Each Babble user's device keeps a localized ledger of all messages that it has sent and received, as well as an amalgamation of all of the ledgers of every device that this instance of Babble has been connected directly to via Android Nearby.
The combination of the Android Nearby Connections API with this decentralized, distributed ledger allows for messages to propagate across mesh networks and move between isolated networks as users leave one mesh network and join another.
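A minimal sketch of the amalgamation step, assuming each message carries a unique id (the real message schema has more fields):

```python
def merge_ledgers(local: dict, incoming: dict) -> dict:
    """Union two ledgers keyed by message id; entries we already hold win."""
    merged = dict(incoming)
    merged.update(local)  # keep our copy of any message we already have
    return merged

# When two devices meet on the mesh, each sends its ledger and merges the
# other's, so messages survive even briefly-connected encounters.
ledger_a = {"m1": "hello", "m2": "how are you"}
ledger_b = {"m2": "how are you", "m3": "fine thanks"}
print(merge_ledgers(ledger_a, ledger_b))  # all three messages survive
```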
## Cloud Sync when Online
Whenever an instance of Babble gains internet access, it uploads a copy of its ledger to a MongoDB Atlas Cluster running on Google Cloud. There the local ledger is amalgamated with the global ledger which contains all messages sent world wide. From there the local copy of the ledger is updated from the global copy to contain messages for nearby users.
## Use Cases
### Internet Infrastructure Failure: Natural Disaster
Imagine a natural disaster situation where large scale internet infrastructure is destroyed or otherwise not working correctly. Only a small number of users of the app would be able to distribute the app to all those affected by the outage and allow them to communicate with loved ones and emergency services. Additionally, this would provide a platform by which emergency services would be able to issue public alerts to the entire mesh network.
### Untraceable and Unrestrictable Communication in North Korea
One of the future directions we would like to take this would be an Ethereum-esque blockchain-based ledger. This would allow for 100% secure, private, and untraceable messaging. Additionally, the Android Nearby Connections API is able to communicate between devices via cellular network, WiFi, Bluetooth, NFC, and ultrasound, which makes our messages relatively immune to jamming. With the mesh network, it would be difficult to block messaging on a large scale.
As a result of this feature set, Babble would be a perfect app to allow for open, unobstructed, uncensored, and otherwise unrestricted communication inside of a country with heavily restricted internet access like North Korea.
### Allowing Cubans to Communicate with Family and Friends in the US
Take the use case of a Cuba-wide rollout. There will be a limited number of users in large cities like Havana or Santiago de Cuba who will have internet access, as well as a number of users distributed across the country who will have occasional internet access. Through both the offline send and the cloud sync, 100% offline users in Cuba would be able to communicate with family stateside.
## Future Goals and Directions
Our future goals would be to build better stability and more features such as image and file sharing, emergency messaging, integration with emergency services and the 911 decision tree, end-to-end encryption, better ledger management, and conversion of the ledger to an Ethereum-esque anonymized blockchain to allow for 100% secure, private, and untraceable messaging.
Ultimately, the most insane use of our platform would be as a method for rolling out low bandwidth internet to the offline world.
Name creds go to Chris Choi
|
## Inspiration
Have you ever wanted to search something, but aren't connected to the internet? Data plans too expensive, but you really need to figure something out online quick? Us too, and that's why we created an application that allows you to search the internet without being connected.
## What it does
Text your search queries to (705) 710-3709, and the application will text back the results of your query.
Not happy with the first result? Specify a result using the `--result [number]` flag.
Want to save the URL to view your result when you are connected to the internet? Send your query with `--url` to get the url of your result.
Send `--help` to see a list of all the commands.
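A rough sketch of how such flag parsing can work (illustrated in Python; the actual backend is Node.js, and the field names here are placeholders):

```python
def parse_query(body: str) -> dict:
    """Split an incoming SMS body into the search text and its flags."""
    tokens = body.split()
    opts = {"query": [], "result": 1, "url": False, "help": False}
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        if tok == "--result" and i + 1 < len(tokens):
            opts["result"] = int(tokens[i + 1]); i += 2
        elif tok == "--url":
            opts["url"] = True; i += 1
        elif tok == "--help":
            opts["help"] = True; i += 1
        else:
            opts["query"].append(tok); i += 1
    opts["query"] = " ".join(opts["query"])
    return opts

print(parse_query("best pizza near me --result 2 --url"))
```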
## How we built it
Built on a **Nodejs** backend, we leverage **Twilio** to send and receive text messages. When we receive a text message, we query **RapidAPI**'s **Bing Search API** with its contents.
Our backend is **dockerized** and deployed continuously using **GitHub Actions** onto a **Google Cloud Run** server. Additionally, we make use of **Google Cloud's Secret Manager** to not expose our API Keys to the public.
Internally, we use a domain registered with **domain.com** to point our text messages to our server.
## Challenges we ran into
Our team is very inexperienced with Google Cloud, Docker and GitHub Actions, so it was a challenge needing to deploy our app to the internet. We recognized that without deploying, we could not allow anybody to demo our application.
* There was a lot of configuration with permissions and service accounts, which had a learning curve. Accessing our secrets from our backend, and ensuring that the backend was authenticated to access them, was a huge challenge.
We also have varying levels of skill with JavaScript. It was a challenge trying to understand each other's code and collaborating efficiently to get this done.
## Accomplishments that we're proud of
We honestly think that this is a really cool application. It's very practical, and we can't find any solutions like this that exist right now. There was not a moment where we dreaded working on this project.
This is the most well-planned project that we've all made for a hackathon. We were always aware of how our individual tasks contributed to the project as a whole. When we were working on an important part of the code, we would pair program together, which accelerated our understanding.
Continuously deploying is awesome! Not having to click buttons to deploy our app was really cool, and it really made our testing in production a lot easier. It also reduced a lot of potential user errors when deploying.
## What we learned
Planning is very important in the early stages of a project. We could not have collaborated so well together, and separated the modules that we were coding the way we did without planning.
Hackathons are much more enjoyable when you get a full night sleep :D.
## What's next for NoData
In the future, we would love to use AI to better suit the search results of the client. Some search results have a very large scope right now.
We would also like to have more time to write some tests and have better error handling.
|
## Ark Platform for an IoT powered Local Currency
## Problem:
Many rural communities in America have suffered from underinvestment in our modern age. Even urban areas such as Detroit, MI, and Scranton, PA, have been left behind as their local economies struggle to reach a critical mass from which to grow. This underinvestment has left millions of citizens in a state of economic stagnation with little opportunity for growth.
## Big Picture Solution:
Cryptocurrencies allow us to implement new economic models to empower local communities and spark regional economies. With Ark.io and their blockchain solutions, we implemented a location-specific currency with unique economic models. Using this currency, experiments can be run on a regional scale before being more widely implemented, all without an increase in government debt and with the security of blockchains.
## To Utopia!:
By implementing local currencies in economically depressed areas, we can incentivize investment in the local community, and thus provide more citizens with economic opportunities. As the local economy improves, the currency becomes more valuable, which further spurs growth. This positive feedback could help raise standards of living in areas currently in a state of stagnation.
## Technical Details
**LocalARKCoin (LAC)**
LAC is based on a fork of the ARK cryptocurrency, with its primary feature being its relation to geographical location. Only a specific region can use the currency without fees, and any fees collected are sent back to the region that is being helped economically. The fees are dynamically raised based on the distance from the geographic region in question. All of these rules are implemented within the logic of the blockchain and so cannot be bypassed by individual actors.
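A hedged sketch of that fee rule (the radius, base fee, and per-km rate below are illustrative, not the constants baked into the chain):

```python
HOME_RADIUS_KM = 50.0
BASE_FEE = 0.01      # LAC, charged once outside the home region
FEE_PER_KM = 0.001   # LAC per km beyond the home region

def transaction_fee(distance_km: float) -> float:
    """Free inside the target region; fees grow with distance beyond it."""
    if distance_km <= HOME_RADIUS_KM:
        return 0.0
    excess = distance_km - HOME_RADIUS_KM
    # The proceeds are routed back to the region being helped economically.
    return BASE_FEE + FEE_PER_KM * excess

print(transaction_fee(30), transaction_fee(250))  # 0.0 inside, 0.21 far away
```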
**Point of Sale Terminal**
Our proof of concept point of sale terminal consists of the Adafruit Huzzah ESP32 micro-controller board, which has integrated WiFi to connect to the ARK API to verify transactions. The ESP32 connects to a GPS board which allows verification of the location of the transaction, and a NFC breakout board that allows contactless payment with mobile phone cryptocurrency wallets.
**Mobile Wallet App**
In development is a mobile wallet for our local currency which would allow any interested citizen to enter the local cryptocurrency economy. Initiating transactions with other individuals will be simple, and contactless payments allow easy purchases with participating vendors.
|
winning
|
## Inspiration
As some airplanes adopt self-driving systems, others will remain manually controlled. Since all aircraft cannot adopt synchronized self-driving systems simultaneously, we need software to help us transition to this new technology and prevent accidents during taxi. Aircraft marshallers direct aircraft with hand motions, so we used computer vision to translate these signals into maneuvering instructions.
## What it does
AeroVision lets you control a VIAM rover with simple hand gestures. It can move forward, move backward, turn right, turn left, and stop. Just like aircraft marshallers, it uses standard marshalling signals and converts them to robot output/movement.
## How we built it
We used VIAM's app, API, and Python SDK to make a VIAM rover respond to hand signals. Then, we used an OpenCV model to track our hands on the webcam frame by frame. Using the hand tracking model, we can make automated decisions for the VIAM rover to move using its wheels.
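As a simplified illustration of the decision step, here is a sketch mapping the tracked hand's horizontal position to a rover command (the thresholds are placeholders; the real mapping follows the standard marshalling signals):

```python
def decide_command(hand_x: float, frame_width: int) -> str:
    """Map the tracked hand's position in a frame to a rover command."""
    if hand_x < frame_width * 0.33:
        return "turn_left"
    if hand_x > frame_width * 0.66:
        return "turn_right"
    return "forward"

# e.g. a hand detected at x=500 in a 640-pixel-wide webcam frame:
print(decide_command(500, 640))  # -> "turn_right"
```

The chosen command is then sent to the rover's wheels through VIAM's Python SDK.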
## Challenges we ran into
We ran into many challenges during this hackathon, one involving the VIAM rover. Since we were new to their system and network for running their machine, we had to adapt to their API and hardware. However, the VIAM team was able to help us every step of the way.
## Accomplishments that we're proud of
Being able to integrate OpenCV with the VIAM rover
## What we learned
How to use the VIAM rover and implement its API using the Python SDK.
## What's next for AeroVision
Better and more accurate tracking, more features with the OpenCV model. We are thinking about implementing this for Aerovision Pro Max Deluxe.
|
## Inspiration
In a world in which we all have the ability to put on a VR headset and see places we've never seen, search for questions in the back of our mind on Google and see knowledge we have never seen before, and send and receive photos we've never seen before, we wanted to provide a way for the visually impaired to also see as they have never seen before. We take for granted our ability to move around freely in the world. This inspired us to enable others more freedom to do the same. We called it "GuideCam" because, like a guide dog, our application is meant to be a companion and a guide to the visually impaired.
## What it does
GuideCam provides an easy-to-use interface for the visually impaired to ask questions, either through a braille keyboard on their iPhone or by speaking out loud into a microphone. They can ask questions like "Is there a bottle in front of me?", "How far away is it?", and "Notify me if there is a bottle in front of me", and our application will talk back to them and answer their questions, or notify them when certain objects appear in front of them.
## How we built it
We have Python scripts running that continuously take webcam pictures from a laptop every 2 seconds and put them into a bucket. Upon user input like "Is there a bottle in front of me?", either from braille keyboard input on the iPhone or through speech (which is processed into text using Google's Speech API), we take the last picture uploaded to the bucket and use Google's Vision API to determine if there is a bottle in the picture. Distance calculation is done using the following formula: distance = ( (known width of standard object) x (focal length of camera) ) / (width in pixels of object in picture).
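Plugging illustrative numbers into that formula (the object width and calibrated focal length below are placeholders for our demo object, not measured values):

```python
KNOWN_WIDTH_CM = 8.0     # width of a standard bottle, our demo object
FOCAL_LENGTH_PX = 700.0  # calibrated once for the laptop webcam

def distance_cm(pixel_width: float) -> float:
    """distance = known width x focal length / width in pixels."""
    return KNOWN_WIDTH_CM * FOCAL_LENGTH_PX / pixel_width

# A bottle spanning 140 px in the photo is roughly 40 cm away:
print(distance_cm(140))  # -> 40.0
```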
## Challenges we ran into
Trying to find a way to get the iPhone and a separate laptop to communicate was difficult, as well as getting all the separate parts of this working together. We also had to change our ideas on what this app should do many times based on constraints.
## Accomplishments that we're proud of
We are proud that we were able to learn to use Google's ML APIs, and that we were able to get both keyboard Braille and voice input from the user working, as well as both providing image detection AND image distance (for our demo object). We are also proud that we were able to come up with an idea that can help people, and that we were able to work on a project that is important to us because we know that it will help people.
## What we learned
We learned to use Google's ML APIs, how to create iPhone applications, how to get an iPhone and laptop to communicate information, and how to collaborate on a big project and split up the work.
## What's next for GuideCam
We intend to improve the braille keyboard to include a backspace, as well as to support simultaneous key presses to record a single letter.
|
# see our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing)
## Inspiration
Without purchasing hardware, there are few ways to have contact-free interactions with your computer.
To make such technologies accessible to everyone, we created one of the first touch-less hardware-less means of computer control by employing machine learning and gesture analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers.
## What it does
Puppet uses machine learning technology such as k-means clustering in order to distinguish between different hand signs. Then, it interprets the hand-signs into computer inputs such as keys or mouse movements to allow the user to have full control without a physical keyboard or mouse.
## How we built it
Using OpenCV in order to capture the user's camera input and media-pipe to parse hand data, we could capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with Sci-Kit Learn) to distinguish between different types of hand gestures. The hand gestures are then translated into specific computer commands which pair together AppleScript and PyAutoGUI to provide the user with the Puppet experience.
## Challenges we ran into
One major issue that we ran into was that in the first iteration of our k-means clustering algorithm, the clusters were colliding. We fed into the model the distance of each point on your hand from your wrist, and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on changing the hand gestures to be more distinct within our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy.
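A minimal sketch of that pipeline, assuming 21 hand landmarks with the wrist at index 0 (the random data below stands in for recorded gesture samples):

```python
import numpy as np
from sklearn.cluster import KMeans

def wrist_distances(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) hand points with the wrist at index 0."""
    return np.linalg.norm(landmarks[1:] - landmarks[0], axis=1)

rng = np.random.default_rng(0)
hands = rng.random((60, 21, 2))                        # stand-in for recorded hands
features = np.array([wrist_distances(h) for h in hands])
model = KMeans(n_clusters=4, n_init=10).fit(features)  # 4 distinct gestures
print(model.predict(features[:1]))                     # cluster id -> a command
```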
Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because your finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking it* with the camera. Also, we scaled our coordinate system so it required less hand movement in order to reach the screen's edge.
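A sketch of the pursuit idea (the gain and scale constants are illustrative, not Puppet's exact values):

```python
PURSUIT_GAIN = 0.25  # fraction of the remaining distance covered per frame
SCALE = 1.4          # amplify hand motion so the screen edge is reachable

def step(cursor, target):
    """Move the cursor a fraction of the way toward the scaled target."""
    cx, cy = cursor
    tx, ty = target[0] * SCALE, target[1] * SCALE
    return (cx + PURSUIT_GAIN * (tx - cx),
            cy + PURSUIT_GAIN * (ty - cy))

cursor = (0.0, 0.0)
for detected in [(100, 80), (102, 79), (180, 150)]:  # jittery detections
    cursor = step(cursor, detected)
print(cursor)  # converges smoothly instead of jumping with each detection
```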
## Accomplishments that we're proud of
We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time.
## What we learned
A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar and it took us time to develop specific transformations, models and algorithms to parse our data into individual hand motions / signs.
Also, our team members possess diverse and separate skillsets in machine learning, mathematics and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skillset in each of these areas and better continuity as a team.
## What's next for Puppet
Right now, Puppet can control presentations, the web, and your keyboard. In the future, puppet could control much more.
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This feature can be potentially utilized in elementary school classrooms to give kids hands-on learning with maps, science labs, and language.
* Opportunities in video games: As Puppet advances, it could provide game developers a way to create games where the user interacts without a controller. Unlike technologies such as the Xbox Kinect, it would require no additional hardware.
* Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with
Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above.
* Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch any keyboard and mice shared by many doctors, providing a more sanitary way to use computers.
|
partial
|
## Inspiration
In a 2017 survey by Advocates for Youth, only 26 percent of 18 to 29 year olds knew that insurance plans must cover preventive care with no copay or other costs. Latino young people were the least knowledgeable. The transition to young adulthood is usually accompanied by higher rates of mortality, greater engagement in health-damaging behaviors, and an increase in chronic conditions. Because these health problems are largely preventable, primary care visits can present a key opportunity for improving the health of young adults through preventive screening and intervention, with evidence supporting the efficacy of clinical preventive services. We challenged ourselves with the question "How can design create an experience that makes young people continually motivated and engaged with their primary care providers?", which led us to build a human-centered solution, making it easy for individuals to identify and access the preventative healthcare services they need.
## What it does
Our app recommends preventative healthcare services specific to the individual, reminds them in a timely manner, and finds nearby healthcare facilities that offer the service and accept the individual's insurance.
## What we learned
Starting out with an understanding of people's contexts, we soon realized we needed to shift from a reactive user experience to proactive, value-based care in order to empower young people's willingness to use the resources our app provides. We wanted to create the conditions for purposeful data gathering (users' health records) to drive new insights and incorporate it into a cycle of actions to benefit users.
## How we built it
Swift iOS app with SwiftUI, JSON parsing, location services, Figma
|
## Inspiration
“Emergency” + “Need” = “EmergeNeed”
Imagine a pleasant warm Autumn evening, and you are all ready to have Thanksgiving dinner with your family. You are having a lovely time, but suddenly you notice a batch of red welts, swollen lips, and itchy throat. Worried and scared, you rush to the hospital just to realize that you will have to wait for another 3 hours to see a doctor due to the excess crowd.
Now imagine that you could quickly talk to a medical professional who could recommend going to urgent care instead to treat your allergic reaction. Or, if you were recommended to seek emergency hospital care, you could see the estimated wait times at different hospitals before you left. Such a system would allow you to get advice from a medical professional quickly, save time waiting for treatment, and decrease your risk of COVID exposure by allowing you to avoid large crowds.
## What it does
Our project aims to address three main areas of healthcare improvement. First, there is no easy way for an individual to know how crowded a hospital will be at a given time. Especially in the current pandemic environment, users would benefit from information such as **crowd level and estimated travel times to different hospitals** near them. Knowing this information would help them avoid unnecessary crowds and the risk of COVID19 exposure and receive faster medical attention and enhanced treatment experience. Additionally, such a system allows hospital staff to operate more effectively and begin triaging earlier since they will receive a heads-up about incoming (non-ambulance) patients before they arrive.
Second, online information is often unreliable, and specific demographics may not have access to a primary care provider to ask for advice during an emergency. Our interface allows users to access **on-call tele-network services specific to their symptoms** easily and therefore receive advice about options such as monitoring at home, urgent care, or an emergency hospital.
Third, not knowing what to expect contributes to the elevated stress levels surrounding an emergency. Having an app service that encourages users to **actively engage in health monitoring** and provides **tips about what to expect** and how to prepare in an emergency will make users better equipped to handle these situations when they occur. Our dashboard offers tools such as a check-in journal where users can log their mood, record gratitudes, and vent about frustrations. The entries are sent for sentiment analysis to help monitor mental states and offer support. Additionally, the dashboard allows providers to assign goals to patients and monitor progress (for example, taking antibiotics every day for 1 week, or not smoking). Furthermore, the user can track upcoming medical appointments and access key medical data quickly (COVID19 vaccination card, immunization forms, health insurance).
## How we built it
Our application consists of a main front end and a backend.
The front end was built using the Bubble.io interface. Within the Bubble service, we set up a database to store user profile information, create emergency events, and accumulate user inputs and goals. The Bubble Design tab and connection to various API’s allowed us to develop different pages to represent the functionalities and tools we needed. For example, we had a user login page, voice recording and symptom input page, emergency event trigger with dynamic map page, and dashboard with journaling and calendar schedule page. The Bubble Workflow tab allowed us to easily connect these pages and communicate information between the front and back end.
The back end was built using Python Flask. We also used Dialogflow to map the symptoms to the doctor's specialty the user should visit. We handled calls to the InterSystems API in the backend server and processed data from the front end. We created synthetic data to test on.
## Challenges we ran into
This project was a great learning experience, and we had a lot of fun (and frustration) working through many challenges. First, we needed to spend time coming up with a project idea and then refining its scope. To do this, we talked with various sponsors and mentors to get feedback on our proposal and learn about the industry and the actual needs of patients. Once we had a good roadmap for what features we wanted, we had to find data that we could use. Currently, hospitals are not required to provide any information about estimated wait times, so we had to find an alternative way to assess this. We decided to address this by developing our own heuristic that considers hospital distance, number of beds, and a historic traffic estimate. This is a core functionality of our project, but also the most difficult, and we are still working on optimizing this metric. Another significant challenge was learning how to use the Bubble service, specifically setting up the Google Maps functionality we wanted and connecting the backend with the frontend through Bubble's API. We sought mentor help and are still trying to debug this step. Another ongoing challenge is implementing the call-a-doc feature with the Twilio API. Finally, our team consists of members from drastically different time zones, so we needed to be proactive about scheduling meetings and communicating progress and tasks.
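A hedged sketch of such a heuristic (the weights and field names below are placeholders we would still need to tune, not our final metric):

```python
def hospital_score(distance_km, num_beds, traffic_factor,
                   w_dist=1.0, w_beds=50.0, w_traffic=2.0):
    """Lower is better: close, well-resourced, low-traffic hospitals win."""
    capacity_penalty = 1.0 / max(num_beds, 1)  # fewer beds -> longer waits
    return (w_dist * distance_km
            + w_beds * capacity_penalty
            + w_traffic * traffic_factor)

hospitals = [
    {"name": "General",  "distance_km": 3.0, "num_beds": 400, "traffic_factor": 0.8},
    {"name": "St. Mary", "distance_km": 1.2, "num_beds": 60,  "traffic_factor": 0.3},
]
best = min(hospitals, key=lambda h: hospital_score(
    h["distance_km"], h["num_beds"], h["traffic_factor"]))
print(best["name"])  # the hospital we would recommend first
```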
## Accomplishments that we're proud of
We are proud of our idea - indeed the amount of passion put into developing this toolkit to solve a meaningful problem is something very special (Thank you TreeHacks!).
We are proud of the technical complexity we accomplished in this short time frame. Our project idea seemed very complex, with lots of features we wanted to add.
We are also proud of collaborating with teammates from different parts of the world and integrating different APIs (Bubble, Google Maps, InterSystems).
## What we learned
We learned a lot about the integration of multiple frameworks. Being newbies in web development and still making an impactful application is one of the things that we are proud of. Most importantly, the research and problem identification were the most exciting parts of the whole project. We got to know the possible shortcomings of our present-day healthcare systems and how we can improve them. Coming to the technical part, we learned Bubble, web scraping, NLP, integrating with the InterSystems API, Dialogflow, and Flask.
## What's next for EmergeNeed
We could not fully integrate our backend to our Frontend web application built on Bubble as we faced some technical difficulties at the end that we didn’t expect. The calling feature needs to be implemented fully (currently it just records user audio). We look to make EmergeNeed a full-fledged customer-friendly application. We plan to implement our whole algorithm (ranging from finding hospitals with proper machines and less commute time to integrating real-time speech to text recognition) for large datasets.
|
## Inspiration
In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model, which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and data-basing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results were shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely passed each user a unique and near-impossible-to-reverse hash that allows them to check the status of their evaluation.
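A minimal sketch of that hashing step (the exact fields we combine into the hash are simplified here for illustration):

```python
import hashlib

def status_token(user_id: str, infant_name: str, timestamp: str) -> str:
    """Derive a hard-to-reverse lookup token from a combination of user data."""
    material = f"{user_id}|{infant_name}|{timestamp}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()

print(status_token("parent42", "avery", "2019-02-16T21:00"))
```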
## What we learned
We learned about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learned how to set up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user-friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application on iOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
|
partial
|
## Inspiration
As with many drivers, when a major incident occurs while driving, we are often left afraid, anxious, and overwhelmed. Just as many of our peers, we had little experience behind the wheel and barely understood how insurance claims worked and what steps we should be taking if an accident occurs. We decided to innovate the process of filing insurance claims for people of all ages and diverse backgrounds to allow for a quicker, accessible, and user-friendly experience through SWIFT DETECT.
## What it does
SWIFT DETECT is an app that utilizes machine learning to extract information from user-fed pictures and environmental context to auto-fill an insurance claims form. The automated process gives the user an informed, step-by-step guide on the steps to take after a collision. The machine learning software can also make informed decisions on whether to contact emergency services or towing services, or whether the user will need a temporary vehicle, based on the picture evidence the user submits. This automated process allows the user to gain control over the situation and get back on track with their day-to-day activities faster than with traditional methods.
## How we built it
SWIFT DETECT was made using Node.js and the CARSXE ML API.
## Challenges we ran into
Initially, we tried creating our own ML model; however, we faced issues gathering datasets to train it with. We thus utilized the pre-existing CARSXE ML API. However, this API proved to be very challenging to use.
## Accomplishments that we're proud of
We are proud to have utilized our knowledge of tech to engineer a meaningful product that impacts our society in a positive way. We are proud to have engineered a product that caters to a diverse group of end-users and ultimately puts the user first.
## What we learned
Through the process of planning and executing our hack, we have learned a lot about the insurance industry and ML models.
## What's next for SWIFT DETECT
SWIFT DETECT hopes to take a preventative approach when it comes to vehicle collisions. We will do so by becoming the primary source of information when it comes to your vehicle's health and longevity.
We aim to reduce the number of collisions by analyzing a car’s mechanical parts and alerting the user when it is time for a replacement or repair. Through the use of smart car features, we want to deliver rapid and accurate results on the current status of your vehicle.
|
## Inspiration
Approximately 107.4 million Americans choose walking as a regular mode of travel for both social and work purposes. In 2015, about 70,000 pedestrians were injured in motor vehicle accidents, while over 5,300 accidents resulted in fatalities. Catastrophic accidents such as these are usually caused by negligence or inattentiveness on the part of the driver.
With the help of **Computer** **Vision** and **Machine** **Learning**, we created a tool that assists the driver when it comes to maintaining attention and being aware of his/her surroundings and any nearby pedestrians. Our goal is to create a product that provides social good and potentially save lives.
## What it does
We created **SurroundWatch**, which assists with detecting nearby pedestrians and notifying the driver. The driver can choose to attach his/her phone to the dashboard and click start on the simple web application, and **SurroundWatch** processes the live video feed, sending notifications to the driver in the form of audio or visual cues when he/she is in danger of hitting a pedestrian. Since we designed it as an API, it can be incorporated into various ridesharing and navigation applications such as Uber and Google Maps.
## How we built it
Object detection and image processing were done using **OpenCV** and **YOLO-9000**. A web app that can run on both Android and iOS was built using **React**, **JavaScript**, and **Expo.io**. For the backend, **Flask** and **Heroku** were used. **Node.js** was used as the realtime environment.
## Challenges we ran into
We struggled with getting the backend and frontend to transmit information to one another, along with converting the images to base64 to send as a POST request. We encountered a few hiccups in terms of Node.js, Ubuntu and React crashes, but we were successfully able to resolve them. Streaming a live video feed was difficult given the limited bandwidth; therefore, we resorted to sending images every 1000 ms.
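A sketch of that upload loop (the endpoint URL is a placeholder for our Heroku/Flask server, and the payload field name is illustrative):

```python
import base64, time
import cv2, requests

BACKEND = "https://example.herokuapp.com/detect"  # placeholder URL

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, jpg = cv2.imencode(".jpg", frame)  # compress the frame before sending
    payload = {"image": base64.b64encode(jpg.tobytes()).decode("ascii")}
    requests.post(BACKEND, json=payload, timeout=5)
    time.sleep(1.0)                        # one frame every 1000 ms
```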
## Accomplishments that we're proud of
We were able to process and detect images using YOLO-9000 and OpenCV, send image information using the React app and communicate between the front end and the Heroku/Flask backend components of our project. However, we are most excited to have built and shipped meaningful code that is meant to provide social good and potentially save lives.
## What we learned
We learned the basics of creating dynamic web apps using React and Expo along with passing information to a server where processing can take place. Our team work and hacking skills definitely improved and have made us more adept at building software products.
## What's next for SurroundWatch
The next step for SurroundWatch would be to offload the processing to AWS or Google Cloud Platform to improve the speed of real-time image processing for live video streams. We'd also like to create a demo site to allow users to see the power of SurroundWatch, along with further improvements to our backend.
|
## Inspiration
As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long-existing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as an account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition as a way to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API, which leverages the Interac E-transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib, which collects information from the Interac, IBM Watson, Firebase and SmartCar APIs.
## Challenges we ran into
IBM's facial recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar, as it required a redirection of the URL.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib, Google Maps, and our own Standard Library layer). Another tough feat we accomplished was the entire webcam-to-image-to-API flow, which wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we built a security API for requesting payment via visual recognition. We believe that this API can be used in many more scenarios than carpooling and hope we can expand it into different use cases.
|
losing
|
## Inspiration
*"A correct diagnosis is three-fourths the remedy." - Mahatma Gandhi*
In this fast-paced world where everything seems to be conveniently accessed in a matter of seconds at our fingertips with our smartphones and laptops, some parts of our lives cannot be replaced or compromised. Let's not kid ourselves, we are all *guilty* of getting a scare when we see something suspicious on our skin; if we feel funny, we fall into the black hole of googling the symptoms, believing everything we read, and scaring ourselves to an unnecessary extent.
Forty-four percent of Americans even prefer to self-diagnose their illness online rather than see a medical professional, according to a survey conducted by *The Tinker Law Firm*. That is an alarmingly large share of people for just one country.
While it is cheaper to self-diagnose with Google than to visit a doctor, this often leads to inaccurate diagnoses and can be extremely dangerous, as people might follow the wrong treatment plan or not realize the severity of their condition.
Through our personal experiences in Asian countries, it was common to get an X-ray scan at one place and then have to book another appointment with a doctor the next day to receive an opinion. We wanted to spare people this inconvenience and make the process socially sustainable. Especially with the rising evidence of healthcare's damaging effects on the environment, we wanted to create a sustainable healthcare system that reduces those negative impacts.
## What it does
**Doctorize.AI** is an easy-to-use web application that uses machine learning to scan the uploaded images or audio clip; with a simple click of a button, the input is processed and the results indicate whether any concerning medical issues are recognized or whether the X-ray is clear. It also lets you know if you must seek immediate medical attention and connects you to a matching specialist to help you out. Worried about something in general? Use “Request A Doctor” to connect and talk all your worries out.
An added **bonus**: Patients and doctors can use Doctorize.AI as an extra tool to get an instantaneous second opinion and avoid any false negative/positive results, further reducing the load on the healthcare system and making this web application socially sustainable. It is also a safe and low-carbon health system, protecting the environment.
Our models are able to **recognize and respond** to cases by classifying:
**-** skin cancer (Malignant or Benign)
**-** brain tumor (Glioma\_Tumor, Meningioma\_Tumor, Pituitary\_Tumor, No\_Tumor)
**-** X-ray (Tuberculosis, Pneumonia, COVID-19 induced Pneumonia, or Normal)
## How we built it
The **frontend** was built using:
**-** Next.js
**-** HTML
**-** CSS
**-** JavaScript
The **backend** was built using:
**-** Flask
**-** Python
**-** TensorFlow/Keras for the Deep learning models to classify images/audio
**-** AWS S3 for storage of large data set
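A minimal sketch of what one such Flask classification endpoint can look like (the model path, input size, and label set are placeholders; the real app serves several models):

```python
import numpy as np
from flask import Flask, request, jsonify
from tensorflow import keras
from PIL import Image

app = Flask(__name__)
model = keras.models.load_model("skin_model.h5")  # placeholder model path
LABELS = ["benign", "malignant"]                  # placeholder label set

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded image, resize to the model's input, scale to [0, 1].
    img = Image.open(request.files["image"]).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype="float32")[None] / 255.0  # batch of one
    probs = model.predict(x)[0]
    return jsonify({"label": LABELS[int(probs.argmax())],
                    "confidence": float(probs.max())})
```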
## Challenges we ran into
As four individuals coming together, we were bursting with countless ideas, so it took a long discussion or two to settle on what we could realistically achieve in a span of 36 hours.
Here are a few challenges we ran into:
**-** Lack of dataset availability
**-** Different time-zones
**-** A mix of first-time, new, and experienced hackers on the team
**-** AWS S3 - Simple Storage Service
**-** Storage of large data
**-** AWS Sagemaker
**-** Computational power - deep learning takes time
## Accomplishments that we're proud of
**-** Being able to tackle and develop the **Machine Learning Models** with the supportive team we had.
**-** Creating a successful clean and polished look to the design
**-** Models with over 80% accuracy across the board
**-** Figuring out how to implement Flask
**-** Experimenting with AWS (S3, and Sagemaker (not as successful))
## What we learned
**-** Together as a team, we learned how to use and apply CSS in an efficient way and how different CSS tools helped achieve certain looks we were aiming for.
**-** We also learned how to use Flask to connect ML models to our web application.
**-** Further, we learned how to use AWS (S3, and Sagemaker (not as successful)).
## What's next for Doctorize.AI
**-** Allow patients and doctors to interact smoothly on the platform
**-** Expand our collection of medical cases that can be scanned and recognized such as more types of bacteria/viruses and rashes
**-** Bring in new helpful features such as advanced search of specialists and general doctors in the area of your own choice
**-** Record the patient’s history and information for future references
**-** QR codes on patient’s profile for smoother connectivity
**-** Voice Memo AI to summarize what the patient is talking about into targeted key topics
|
## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles with many friends and family, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body.
As a result, we decided to create a user-friendly multi-modal model that can identify skin discomfort from a simple picture. Then, through accessible communication with a dermatologist-like chatbot, users can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance costs or with finding the time to go and wait for a doctor, it is an accessible way to immediately understand the blemishes that appear on one's skin.
## What it does
The app is a skin-detection model that detects skin diseases from pictures. Through a multi-modal neural network trained on thousands of data entries from actual patients, we attempt to identify the disease. Then, we provide users with information on their disease, recommendations on how to treat it (such as using a specific SPF sunscreen or over-the-counter medications), and finally their nearest pharmacies and hospitals.
## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. After finding a diverse dataset covering more than 2,000 patients with multiple diseases, we implemented a multi-modal neural network model. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o through the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality.
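A hedged sketch of that multi-modal architecture (input sizes, layer widths, and the class count are illustrative, not our trained model's exact configuration):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Image branch: a ResNet backbone pooled to a feature vector.
img_in = keras.Input(shape=(224, 224, 3), name="image")
backbone = keras.applications.ResNet50(include_top=False, pooling="avg")
img_feat = backbone(img_in)

# Clinical branch: a small feed-forward network over tabular symptoms.
clin_in = keras.Input(shape=(12,), name="clinical")  # e.g. age, itch, duration
clin_feat = layers.Dense(32, activation="relu")(clin_in)

# Concatenate both branches before the classifier head.
merged = layers.concatenate([img_feat, clin_feat])
out = layers.Dense(8, activation="softmax")(merged)  # 8 placeholder classes

model = keras.Model([img_in, clin_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```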
## Challenges we ran into
The first challenge we faced was finding appropriate data. Most of the data we encountered was not comprehensive enough and lacked recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We chose the best number of epochs by plotting the loss vs. epoch and accuracy vs. epoch curves. Another challenge involved the free Google Colab TPU, which we resolved by switching between devices. Last but not least, our chatbot tended to output random text and hallucinate in response to specific prompts. We fixed this by grounding its output in the information the user gave.
## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts. This experience has had its fair share of learning moments and pivoting directions. However, through a great deal of discussions and talking about exactly how we can adequately address our issue and support each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about learning quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying.
## What we learned
One of the aspects we learned from this experience was how to use prompt engineering effectively and ground an AI model in user information. We also learned how to feed multi-modal data into a combined convolutional and feed-forward neural network, and we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience in building a comprehensive model like SkinSkan, we were able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to test our app on our own and several of our friends' skin using a simple smartphone camera to validate the model's performance. It's so gratifying to see the work that we've built being put into use and benefiting people.
## What's next for SkinSkan
We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct them to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
|
## Inspiration
Too many impersonal doctor's office experiences, combined with the love of technology and a desire to aid the healthcare industry.
## What it does
Takes a conversation between a patient and a doctor and analyzes all symptoms mentioned in the conversation to improve diagnosis. Ensures the doctor will not have to transcribe the interaction and can focus on the patient for more accurate, timely and personal care.
## How we built it
Ruby on Rails for the structure with a little bit of React. Bayesian Classification procedures for the natural language processing.
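The project itself is Ruby on Rails; purely as an illustration of the Bayesian classification idea, here is a minimal Python sketch using scikit-learn, where the training phrases and labels are made up for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: utterance fragments labeled by what they mention.
phrases = [
    "my chest hurts when I breathe", "I have had a fever since Monday",
    "no headaches recently", "I feel dizzy in the mornings",
    "I drove here this morning", "thanks for seeing me doctor",
]
labels = ["symptom", "symptom", "negated_symptom", "symptom", "other", "other"]

# Bag-of-words features (unigrams and bigrams) feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(phrases, labels)

# Negated mentions like this are exactly the hard case noted below.
print(model.predict(["I have not had any fever"]))
```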
## Challenges we ran into
Working in a noisy environment was difficult considering the audio data that we needed to process repeatedly to test our project.
## Accomplishments that we're proud of
Getting keywords, including negatives, to match up in our natural language processor.
## What we learned
How difficult natural language processing is and all of the minute challenges with a machine understanding humans.
## What's next for Pegasus
Turning it into a virtual doctor that can predict illnesses using machine learning and experience with human doctors.
|
winning
|
## Inspiration
Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions.
While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care.
## What it does
Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity.
## How we built it
This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards.
## Challenges we ran into
Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation that comes with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours were spent just trying to figure out the correct audio stream encoding.
Of course, it wouldn’t be a hackathon project without overscoping and then panicking as the deadline draws nearer, but because our project spans mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself.
## Accomplishments that we're proud of
Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief.
## What we learned
Among many things:
* The complexity and difficulty of implementing mechanical systems
* How to adjust mechatronics design parameters
* Usage of Azure SQL and WordPress for dynamic user pages
* Use of the Houndify API and custom commands
* Raspberry Pi audio streams
## What's next for Medley
One feature we would have liked more time to implement is better database reporting and analytics. We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem.
|
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stood out to us as a testament to the limitless potential of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script runs in the terminal, it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30 function) library that could be used to control almost anything on the computer.
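As a rough sketch of that loop (microphone, speech recognition, command dispatch), here is a minimal Python example assuming the `SpeechRecognition` and `pyautogui` packages; it uses the free Google recognizer for brevity, and the small command table stands in for the project's ~30-function library.

```python
import speech_recognition as sr
import pyautogui

# Map spoken phrases to computer-control actions (illustrative subset).
COMMANDS = {
    "scroll down": lambda: pyautogui.scroll(-300),
    "scroll up": lambda: pyautogui.scroll(300),
    "copy": lambda: pyautogui.hotkey("ctrl", "c"),
    "paste": lambda: pyautogui.hotkey("ctrl", "v"),
}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:
    recognizer.adjust_for_ambient_noise(mic)  # reduce ambient noise
    while True:
        audio = recognizer.listen(mic)
        try:
            phrase = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # nothing intelligible was heard
        action = COMMANDS.get(phrase)
        if action:
            action()
```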
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and Python 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud of the fact that we had built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", or how to reduce ambient noise.
## What's next for Speech Computer Control
At the moment we are manually running this script through the command line but ideally we would want a more user friendly experience (GUI). Additionally, we had developed a chrome extension that numbers off each link on a page after a Google or Youtube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-python code just right, but we plan on implementing it in the near future.
|
## Inspiration
We noticed two big problems in the medical field. The first is the annoying part of being a doctor - inputting data into Electronic Health Records (EHRs). Studies show that physicians spend two-thirds of their time on the job not interacting with patients, but just staring at computer screens. This can lead to physician demoralization and burnout. We wanted to change that.
We also looked at the data on following checklists. We learned that sticking to a pre-made order of tasks during a checkup leads to far fewer mistakes on the part of the physician and dramatically helps patients around the world. We wanted to increase physician accountability to these kinds of checklists.
Enter CheckHealth.
## What it does
CheckHealth acts as a digital assistant for doctors during patient visits. Our program is in the background of your general checkup, running unobtrusively on your physician's computer. It listens for key commands that correspond to observations the doctor is making - e.g. pulse and blood pressure. If a doctor misses a step, CheckHealth asks whether he/she would like to cover the missing steps. It then takes all of the relevant information it's collecting and compiles it into a format easily integrated into all of the most common EHR systems. No more time wasted staring at computer screens for doctors! And no more wondering if patients are receiving comprehensive care! CheckHealth handles it all.
## How we built it
We used the Houndify API to handle the speech-to-text and a lot of the command parsing, which forms the core of our functionality. We also used a Python backend to record audio, take in relevant patient information, and output a .csv file that can be used by any primary healthcare provider's EHR. The end deliverable is a terminal-level Python program that assists physicians during general checkups.
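As a minimal sketch of the record-compilation step (the field names are illustrative; a real EHR import would follow a provider-specific schema):

```python
import csv
from datetime import date

def write_visit_record(path, observations):
    """Append one checkup's observations as a CSV row an EHR could import."""
    fields = ["date", "pulse", "blood_pressure", "notes"]  # assumed schema
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if f.tell() == 0:
            writer.writeheader()  # brand-new file: emit the header row first
        writer.writerow({"date": date.today().isoformat(), **observations})

write_visit_record("visit.csv", {"pulse": 72, "blood_pressure": "120/80", "notes": "routine"})
```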
## Challenges we ran into
The Houndify API definitely had a learning curve, and we struggled with sifting through the documentation and figuring out how the specifications would fit with our vision. We also considered writing to an open-source EHR, but because of the technical complexity, along with the ultimate lack of interoperability, we decided against it.
## Accomplishments that we're proud of
We're really, really happy that we got the Houndify API working and our core speech-to-text functionality up and running. We also love that we were able to create a .csv file that basically acts as an active temporary patient record, which gives our system long-term data persistence.
## What we learned
Of course, we acquired lots of technical skills; half of our team has never taken a formal CS class! We learned key skills in project management and delegation. But, most importantly, we learned that we're much stronger together than we are alone.
## What's next for CheckHealth?
We want to integrate with Redox, a web application that shuttles patient information between EHR systems. Becoming a Redox Node means integrability with the vast number of healthcare databases in the Redox system. We also want to see if we can activate our speech-to-text commands sonically rather than manually, so we can make CheckHealth even more frictionless than it already is. We're also considering building out more functionality and improving UX.
|
winning
|
## Inspiration
We recognized how much time meal planning can take, especially for busy young professionals and students who have little experience cooking. We wanted to provide an easy way to buy healthy, sustainable meals for the week, without compromising the budget or harming the environment.
## What it does
Similar to services like "Hello Fresh", this is a webapp for finding recipes and delivering the ingredients to your house. This is where the similarities end, however. Instead of shipping the ingredients to you directly, our app makes use of local grocery delivery services, such as the one provided by Loblaws. The advantages to this are two-fold: first, it helps keep the price down, as your main fee is for the groceries themselves, instead of paying large amounts in fees to a meal kit company. Second, this is more eco-friendly. Meal kit companies traditionally repackage the ingredients in house into single-use plastic packaging, before shipping it to the user, along with large coolers and ice packs which mostly are never re-used. Our app adds no additional packaging beyond that the groceries initially come in.
## How We built it
We made a web app, with the client-side code written using React. The server was written in Python using Flask and hosted on the cloud using Google App Engine. We used MongoDB Atlas, also hosted on Google Cloud.
On the server, we used the Spoonacular API to search for recipes, and Instacart for the grocery delivery.
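For illustration, a recipe search against the Spoonacular API looks roughly like the sketch below (the key is a placeholder; check the current Spoonacular docs for exact parameters):

```python
import requests

API_KEY = "YOUR_SPOONACULAR_KEY"  # placeholder

def search_recipes(query, number=5):
    """Return recipe titles matching a free-text query via Spoonacular."""
    resp = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params={"apiKey": API_KEY, "query": query, "number": number},
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["results"]]

print(search_recipes("pasta"))
```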
## Challenges we ran into
The Instacart API is not publicly available, and there are no public APIs for grocery delivery, so we had to reverse engineer this API to allow us to add things to the cart. The Spoonacular API was down for about 4 hours on Saturday evening, during which time we almost entirely switched over to a less functional API, before it came back online and we switched back.
## Accomplishments that we're proud of
Created a functional prototype capable of facilitating the ordering of recipes through Instacart. Learned new skills, like Flask, Google Cloud, and, for some of the team, React.
## What we've learned
How to reverse engineer an API, how to use Python as a web server with Flask, Google Cloud, new APIs, MongoDB
## What's next for Fiscal Fresh
Add additional functionality on the client side, such as browsing by popular recipes
|
## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this was the first application we ever built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
|
## Critical notes:
* Website MUST be viewed between 67% and 75% zoom on any Mac or Windows laptop
[Website](https://vapor-protocol.webflow.io/)
## What inspired you?
* We were inspired to build this project to solve one of the most core problems experienced by native and non-native crypto users. Needing native gas to process transactions on a new chain is a nightmare: acquiring it usually necessitates a centralized exchange (with all the pain points of CEXs, like KYC and delays). We’ve figured out a way to use Zetachain’s cross-chain swaps to send gas to any chain as long as you have some tokens on one chain. This way, users can acquire gas without routing through a CEX or touching a custodial actor.
## If we had more time
* One of the best ways to improve our product would be to incorporate fiat onramps so users don’t have to have any existing tokens on any chain to split gas across chains. Integrating Stripe or Ramp Network would be simple to do but we ran out of time to look into it. This feature would make our product fully abstracted for the Web3 newbie, allowing them to start with gas on any chain they wanted using the fiat money they already possess.
## How we built our project
* On the backend, we implemented a Solidity smart contract that conducted cross-chain swaps. This contract would first take the native token on the first chain and use the Uniswap quoter and router to swap it into ZETA tokens on the first chain. Then, we would use Zetachain’s cross chain messaging Connector API to send the ZETA tokens to a second chain. On receiving these tokens on the second chain, the smart contract would deconstruct the message, then use the Uniswap quoter and router again to swap the ZETA tokens back into the desired native token on the second chain.
* On the frontend, we used a combination of Webflow and React. We used Webflow to make a landing page that described many of the features of our project as well as a sample user flow. For the actual smart contract interaction, we linked the Webflow landing page to a React page that uses web3.js to interact with the smart contracts.
* Setting up the smart contracts was quite involved. First, we had to deploy this contract onto every single chain that we wanted to potentially swap tokens into. Then, we had to call the setInteractorByChainId function so that each contract would know where its counterparts on other chains lived. After all of this was set up, swaps were ready to be made by the user.
## Challenges we faced
* This was the first time any of us had dealt with programming on the blockchain across chains. Thinking in terms of sending messages across chains was quite challenging but also very rewarding once we understood the programming paradigms.
* Our biggest difficulty was the transaction not succeeding on the other side of the bridge. While we have a few guesses as to why that may be, we had difficulty navigating the block explorer and the documentation, so we were unable to understand exactly why transactions were failing.
|
winning
|
## Inspiration
Love is in the air. PennApps is not just about coding, it’s also about having fun hacking! Meeting new friends! Great food! PING PONG <3!
## What it does
When you navigate to any browser it will remind you about how great PennApps was!
|
## Inspiration
As students with busy lives, it's difficult to remember to water your plants, especially when you're constantly thinking about more important matters. So as a solution, we thought it would be best to have an app that centralizes, monitors, and notifies users about the health of their plants.
## What it does
The system is set up with 2 main components: hardware and software. On the hardware side, we have multiple sensors placed around the plant that provide input on various parameters (i.e. moisture, temperature, etc.). Once extracted, the data is relayed to an online database (in our case Google Firebase), from which it's read by our front-end system, an Android app. The app currently allows user authentication and the ability to add and delete plants.
## How we built it
**The Hardware**:
The hardware setup for this hack was reiterated multiple times during the hacking phase due to setbacks with the hardware we were given. Originally we planned on using the DragonBoard 410c as a central hub for all the sensory input before transmitting it via WiFi. However, the DragonBoard we picked up from the hardware lab had a corrupted version of Windows IoT, which meant we had to flash the entire device before starting. After flashing, we learned that DragonBoards (and Raspberry Pis) lack support for analog input, meaning the circuit required some sort of ADC (analog-to-digital converter). Afterwards, we decided to use the ESP8266 WiFi boards to send data, as they better reflected the form factor of a realistic prototype and because the board itself supports analog input. In addition, we used an Arduino Uno to power the moisture sensor because it required 5V and the ESP outputs 3.3V (the Arduino acts as a 5V regulator).
**The Software**:
The app was made in Android Studio and was built with user interaction in mind: users authenticate themselves and add their corresponding plants, each of which would eventually have its own sensors. The app is built with scalability in mind as it uses Google Firebase for user authentication and sensor data logging.
## Challenges we ran into
The lack of support for the DragonBoard left us with many setbacks: endless boot cycles, lack of IO support, flashing multiple OSs on the device. What put us off the most was having people tell us not to use it because of its difficulty. However, we still wanted to incorporate it in some way.
## Accomplishments that we're proud of
* Flashing the DragonBoard and booting it with Windows IoT Core
* A working hardware/software setup that tracks the life of a plant using sensory input
## What we learned
* Learned how to program the DragonBoard (in both Linux and Windows)
* Learned how to incorporate Firebase into our hack
## What's next for Dew Drop
* Take it to the garden world where users can track multiple plants at once and even support a self watering system
|
## Inspiration
Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members, a tennis enthusiast, has always strived to refine their skills, and soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills.
## What it does and how we built it
TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements, and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data using React.js, offering a comprehensive overview and further insights into the player's performance.
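As a minimal OpenCV sketch of the 2D ball-tracking step, one can threshold on the ball's colour and keep the largest blob per frame; the HSV bounds and file name below are illustrative, and the real system layers neural networks on top of this.

```python
import cv2
import numpy as np

# Illustrative HSV range for a yellow-green tennis ball; tune per footage.
LOWER, UPPER = np.array([25, 80, 80]), np.array([45, 255, 255])

cap = cv2.VideoCapture("match.mp4")  # hypothetical input clip
trajectory = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # The largest blob is the best 2D guess for the ball in this frame.
        (x, y), _ = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        trajectory.append((int(x), int(y)))
cap.release()
```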
## Challenges we ran into
Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. The depth of the ball was a particular challenge since we were limited to one camera angle, but we were able to tackle it using machine learning techniques.
## Accomplishments that we're proud of
We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.
## What we learned
Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.
## What's next for TRACY
Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community.
|
partial
|
## Inspiration
To overcome accessibility issues and delays with prescription medication, specifically targeting those who are unable to pick up their medication and cannot wait to receive it through the mail.
## How I built it
Primarily using React and Google Cloud APIs
## Challenges we ran into
We attempted to use MongoDB and Microsoft Azure, but we realized it would be simpler and more efficient to use one platform and minimize the business logic of our project. Thus, we moved to using Google Cloud APIs for storage as well as OpenCV. Another challenge was learning JavaScript for the first time and implementing it in conjunction with Google Firebase and the Google Maps API. The main problem was detecting nearby pharmacies and displaying their markers; this was eventually traced to a simple string error.
## Accomplishments that I'm proud of
Overcoming our challenges and creating a final project that can make an impact in the world!
## What I learned
We learned how to use Firebase, React, and how to implement APIs. Also, we learned to fix bugs and use technology we had never experimented with before.
|
## Inspiration
We were inspired by our grandparents, for whom picking up medication can be a struggle, from travelling to long waits in line. We wanted a platform or tool that eases the process not only for patients but also has a systemized flow that can help doctors and pharmacies streamline their process.
## What it does
After a checkup, the doctor can send the prescription straight to the patient's preferred pharmacy. A trusted driver can then pick it up and deliver it to the patient's address.
## How we built it
For this web application, we used React.js for the component base since it's easy to manage states for a dashboard application, and used TailwindCSS for reusable component styling. We also used static mockup JSON files for the data that is being presented in the application.
## Challenges we ran into
For the first 4 hours, we regretted jumping straight into code without any rough wireframes. We had listed features, components, and screens that we could divide up and code, but with no visuals to work from, we struggled through those first few hours.
## Accomplishments that we're proud of
Making the app! We're pretty proud of how much functional work we built in 8-20 hours with our current technical skills.
## What we learned
To prioritize key tasks for the MVP completion. Keeping each other constantly updated on what we were working on or any blockers we encountered helped keep things moving in case anyone else on the team had the experience to solve a specific bug or issue.
We could've planned better how reusable we wanted components to be and how we would structure the data to hydrate the UI, but we were able to keep a good workflow and adapt to each other's work to complete our MVP.
## What's next for RxPress - Prescriptions made easy.
Our project would benefit from having input from their end users. We have different profiles that would interact with the application and their feedback would be instrumental to continue iterating on RxPress.
We would like to have the capacity to read from the doctor's office patient database, to make sure the necessary information is available to create and send prescriptions to the pharmacies.
Refining the way the Notifications work for patients would be another thing to iterate on, so they can stay updated with their prescription fulfillment status on the go, initially by integrating email alerts, and eventually by developing a mobile version of RxPress.
The service monetization needs some development too, to make sure fees are convenient for pharmacies, couriers and patients.
|
## What it does
MemoryLane is an app designed to support individuals coping with dementia by aiding in the recall of daily tasks, medication schedules, and essential dates.
The app personalizes memories through its reminisce panel, providing a contextualized experience for users. Additionally, MemoryLane ensures timely reminders through WhatsApp, facilitating adherence to daily living routines such as medication administration and appointment attendance.
## How we built it
The back end was developed using Flask, Python, and MongoDB. Next.js was employed for the front-end development. Additionally, the app integrates the Google Cloud Speech-to-Text API to process audio messages from users, converting them into commands for execution. It also utilizes the InfoBip SDK so caregivers can set up timely messaging reminders through a calendar within the application.
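As a minimal sketch of the speech-to-text step with the google-cloud-speech client (the file name, encoding, and language are assumptions; the app then matches the transcript against its command set):

```python
from google.cloud import speech

client = speech.SpeechClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

with open("voice_message.wav", "rb") as f:  # hypothetical audio message
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    language_code="en-US",
)
response = client.recognize(config=config, audio=audio)
transcript = " ".join(r.alternatives[0].transcript for r in response.results)
print(transcript)  # then parsed into a command for execution
```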
## Challenges we ran into
An initial hurdle we encountered involved selecting a front-end framework for the app. We transitioned from React to Next due to the seamless integration of styling provided by Next, a decision that proved to be efficient and time-saving. The subsequent challenge revolved around ensuring the functionality of text messaging.
## Accomplishments that we're proud of
The accomplishments we have achieved thus far are truly significant milestones for us. We had the opportunity to explore and learn new technologies that were previously unfamiliar to us. The integration of voice recognition, text messaging, and the development of an easily accessible interface tailored to our audience is what fills us with pride.
## What's next for Memory Lane
We aim for MemoryLane to incorporate additional accessibility features and support integration with other systems for implementing activities that offer memory exercises. Additionally, we envision MemoryLane forming partnerships with existing systems dedicated to supporting individuals with dementia. Recognizing the importance of overcoming organizational language barriers in healthcare systems, we advocate for the formal use of interoperability within the reminder aspect of the application. This integration aims to provide caregivers with a seamless means of receiving the latest health updates, eliminating any friction in accessing essential information.
|
losing
|
## Inspiration
On our way to QHacks, the national news had reported that heart disease is the second highest cause of death for Canadians. The causes of heart disease are numerous and include factors such as poor health lifestyles or genetic inheritance. We realized that even though some of these causes are unavoidable, steps can be taken in order to decrease one's risk. This led us to the idea of creating a tool which people could use to assess their chances of getting heart disease.
## What it does
The sleek iOS app, paired with a Machine Learning model, assesses certain points of an individual's health data in order to predict whether they are currently at risk of heart disease. The app includes the ability to sync with wearable tech such as a Fitbit in order to incorporate efficient analysis with real-time data. Also, records of previous syncs or tests are visualized through graphs so that the user would be able to track their progress throughout their journey to a healthy lifestyle.
## How we built it
**The Machine Learning Model** -
We accessed IBM's 2018 challenge to obtain our training data. We used Python and related libraries such as sklearn and pandas to process the data. Specifically, we used the sklearn library to train a random forest model that achieves 89% accuracy on the testing data.
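A minimal sketch of that training step (the CSV path and column names are placeholders; the real pipeline also included the binning and cleaning described in the challenges below):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart_disease.csv")  # hypothetical dataset path
X, y = df.drop(columns=["target"]), df["target"]  # "target": at-risk flag
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```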
**The iOS App** -
The app was built completely through Swift on Xcode.
Using the Firestore database by Firebase, we split data into "Profile" - such as age, height, weight - and "Core Data" - palpitations, minutes of exercise per week, etc. The app is linked to a Python Flask server on the backend, which connects it to the machine learning model and other resources.
## Challenges we ran into
One challenge we ran into was pre-processing the training data. There were a lot of considerations to be made, such as which model to use and whether we should bin certain input parameters. We started with a logistic regression model, but it suffered from low accuracy. We next attempted a random forest model, which gave us an acceptable accuracy and showed no signs of over-fitting.
Another challenge was learning new things on the spot. Most of us had some experience but overall, between building machine learning models, hosting backend servers and designing the front-end, all of us took on a challenge this weekend and learned a lot from each other and individually.
## Accomplishments that we're proud of
We built a fully functioning application using the newest frameworks and technologies which ultimately has the potential to help people.
## What we learned
The importance of designing was an especially important lesson that we learned this weekend. At the very start of the project, we drew out the entire workflow process and program architecture at a high level. This allowed us all to agree on how the modules would be structured and took out the ambiguity that often exists when working in a team. Speaking of teamwork, we definitely learned a lot in that regard as we split up work and took on our respective roles.
## What's next for cor.ai
Our next steps include expanding our app to fully support various wearable devices in order to maximize user benefits. We also plan to expand our input parameters, using fields that users will have an easier time accessing (removing cholesterol). Furthermore, for those at risk of heart disease, we want to provide mitigating steps specific to that person's profile. This can include showing the location of a nearby gym for those exercising too little, or suggesting healthier foods and recipes to lower cholesterol.
|
## Inspiration:
Our inspiration stems from the identification of two critical problems in the health industry for patients: information overload and inadequate support for patients post-diagnosis resulting in isolationism. We saw an opportunity to leverage computer vision, machine learning, and user-friendly interfaces to simplify the way diabetes patients interact with their health information and connect individuals with similar health conditions and severity.
## What it does:
Our project is a web app that fosters personalized diabetes communities while alleviating information overload to enhance the well-being of at-risk individuals. Users can scan health documents, receive health predictions, and find communities that resonate with their health experiences. It streamlines the entire process, making it accessible and impactful.
## How we built it:
We built this project collaboratively, combining our expertise in various domains. Frontend development was done using Next.js, React, and Tailwind CSS, leveraging components from <https://www.hyperui.dev> to ensure scalability and flexibility. Our backend relied on Firebase for authentication and user management, PineconeDB for the creation of curated communities, and TensorFlow for the predictive model. For the image recognition, we used react-webcam, and Tesseract for optical character recognition and data parsing. We also used tools like Figma, Canva, and Google Slides for design, prototyping, and presentation. Finally, we used the discord.py API to automatically generate the user communication channels.
## Challenges we ran into:
We encountered several challenges throughout the development process. These included integrating computer vision models effectively, managing the flow of data between the frontend and backend, and ensuring the accuracy of health predictions. Additionally, coordinating a diverse team with different responsibilities was another challenge.
## Accomplishments that we're proud of:
We're immensely proud of successfully integrating computer vision into our project, enabling efficient document scanning and data extraction. Additionally, building a cohesive frontend and backend infrastructure, despite the complexity, was a significant accomplishment. Finally, we take pride in successfully completing our project goal, effectively processing user blood report data, generating health predictions, and automatically placing our product users into personalized Discord channels based on common groupings.
## What we learned:
Throughout this project, we learned the value of teamwork and collaboration. We also deepened our understanding of computer vision, machine learning, and front-end development. Furthermore, we honed our skills in project management, time allocation, and presentation.
## What's next for One Health | Your Health, One Community.:
In the future, we plan to expand the platform's capabilities. This includes refining predictive models, adding more health conditions, enhancing community features, and further streamlining document scanning. We also aim to integrate more advanced machine-learning techniques and improve the user experience. Our goal is to make health data management and community connection even more accessible and effective.
|
## Inspiration
Kingston's website is the place to go when you have questions about life in Kingston or when searching for events going on in the city, but navigating through the city's hundreds of webpages for an answer can be gruesome. We were all interested in AI and wanted to challenge ourselves to build a chatbot website.
## What it does
Kingsley is a chatbot built to help residents in Kingston with their inquiries. It takes in user input, and responds with a helpful answer along with a link to where more information can be found on the city of Kingston website if applicable. It has an option for voice input and output for greater accessibility.
## How we built it
* Kingsley uses a GPT-3 model fine-tuned on data from the city of Kingston website.
* The data was scraped using Beautiful Soup.
* A GloVe model was used to find website links relevant to the user's question.
* Jaccard similarity was used to find relevant text that specifically mentioned key words in the user's question (see the sketch after this list).
* Relevant texts were narrowed down and passed as part of the prompt to GPT-3 for an answer completion.
* The website, along with the voice functionality, was created using React.
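A minimal sketch of the Jaccard-similarity step, with simplified tokenization and made-up page snippets:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

question = "when is garbage collected in kingston"
pages = {  # placeholder scraped snippets
    "waste.html": "garbage and recycling are collected weekly in kingston",
    "parks.html": "city parks are open from dawn until dusk",
}
# Rank scraped pages by word overlap with the user's question.
best = max(pages, key=lambda p: jaccard(question, pages[p]))
print(best)  # waste.html
```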
## Challenges we ran into
The City of Kingston website has a huge number of pages, many of which are archived, calendars, or not very useful. OpenAI's API, on the other hand, only allowed a limited context, so to let the bot read relevant pages as its context, we had to go through multiple rounds of data filtering to find the relevant pages.
We spent a great amount of time implementing speech-to-text and text-to-speech for our webapp. Many of the solutions on the internet were of little help, and we tried using several npm packages before being successful in the end.
## Accomplishments that we're proud of
We successfully made a working chatbot!
And it references real facts! (sometimes)
## What we learned
Throughout the project, we gained experience working with various APIs. We learned how to use and combine different natural language processing techniques to optimize accuracy and computation time. We learned the React hooks useState and useEffect, JavaScript functions, and how to use React Developer Tools to debug components in Chrome. We figured out how to link a Flask backend with the frontend app, set up a domain, and use text-to-speech and speech-to-text libraries.
## What's next for Kingsley
Due to free trial limits, we chose to use the Ada GPT model for our chatbot. In the future if we had more credits, we could use a better version of GPT-3 in order to produce more relevant and helpful results.
We are also interested in expanding Kingsley to reference data from other websites. It can also be adapted as an extension or floating popup that can be used directly on top of Kingston's website.
|
losing
|
## Inspiration
We're students, and that means one of our biggest inspirations (and some of our most frustrating problems) come from a daily ritual - lectures.
Some professors are fantastic. But let's face it, many professors could use some constructive criticism when it comes to their presentation skills. Whether it's talking too fast, speaking too *quietly* or simply not paying attention to the real-time concerns of the class, we've all been there.
**Enter LectureBuddy.**
## What it does
Inspired by lackluster lectures and little to no interfacing time with professors, LectureBuddy allows students to signal their instructors with teaching concerns on the spot while also providing feedback to the instructor about the mood and sentiment of the class.
By creating a web-based platform, instructors can create sessions from the familiarity of their smartphone or laptop. Students can then provide live feedback to their instructor by logging in with the appropriate session ID. At the same time, a camera intermittently analyzes the faces of students and provides the instructor with a live average mood for the class. Students are also given a chat room for the session to discuss material and ask each other questions. At the end of the session, the Lexalytics API is used to parse the chat room text and provide the instructor with the average tone of the conversations that took place.
Another important use for LectureBuddy is as an alternative to tedious USATs or other instructor evaluation forms. Currently, teacher evaluations are completed at the end of the term, and students are frankly no longer interested in providing critiques, as any change will not benefit them. LectureBuddy’s live feedback and student interactivity provide the instructor with consistent information. This can allow them to adapt their teaching styles and change topics to better suit the needs of the current class.
## How I built it
LectureBuddy is a web-based application; most of the development was done in JavaScript, Node.js, HTML/CSS, etc. The Lexalytics Semantria API was used for parsing the chat room data, and Microsoft’s Cognitive Services Emotion API was used to gauge the mood of a class. Other smaller JavaScript libraries were also utilised.
## Challenges I ran into
The Lexalytics Semantria API proved to be a challenge to set up. The out-of-the-box JavaScript files came with some errors, and after spending a few hours troubleshooting with mentors, the team finally managed to get the Node.js version to work.
## Accomplishments that I'm proud of
Two first-time hackers contributed some awesome work to the project!
## What I learned
"I learned that json is a javascript object notation... I think" - Hazik
"I learned how to work with node.js - I mean I've worked with it before, but I didn't really know what I was doing. Now I sort of know what I'm doing!" - Victoria
"I should probably use bootstrap for things" - Haoda
"I learned how to install mongoDB in a way that almost works" - Haoda
"I learned some stuff about Microsoft" - Edwin
## What's next for Lecture Buddy
* Multiple Sessions
* Further in-depth analytics from an entire semester's worth of lectures
* Pebble / Wearable integration!
@Deloitte See our video pitch!
|
## Problem
In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy to connect, all in one platform where all you and your developer friends can come together to learn, code, and brainstorm together.
## About
Our platform provides a simple yet efficient User Experience with a straightforward and easy-to-use one-page interface.
We made it one page to have access to all the tools on one screen and transition between them easier.
We identify this page as a study room where users can collaborate and join with a simple URL.
Everything is Synced between users in real-time.
## Features
Our platform allows multiple users to enter one room and access tools like watching youtube tutorials, brainstorming on a drawable whiteboard, and code in our inbuilt browser IDE all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussion. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
Adding more relevant tools and widgets, and expanding to other fields of work to broaden our user demographic.
Including interface customization options to allow users to personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
|
## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site.
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone will be on the same page about what is going on, and everything that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and were able to work around their constraints to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens.
|
winning
|
## Inspiration
The most important part in any quality conversation is knowledge. Knowledge is what ignites conversation and drive - knowledge is the spark that gets people on their feet to take the first step to change. While we live in a time where we are spoiled by the abundance of accessible information, trying to keep up and consume information from a multitude of sources can give you information indigestion: it can be confusing to extract the most relevant points of a new story.
## What it does
Macaron is a service that allows you to keep track of all the relevant events that happen in the world without combing through a long news feed. When a major event happens in the world, news outlets write articles. Macaron aggregates articles from multiple sources and uses NLP to condense the information, classify the summary into a topic, and extract keywords, then presents it to the user in a digestible, bite-sized info page.
## How we built it
Macaron also goes through various social media platforms (Twitter at the moment) to perform sentiment analysis to see what the public opinion is on the issue, displayed by the sentiment bar on every event card! We used a lot of Google Cloud Platform to help publish our app.
Macaron also finds the most relevant charities for an event (if applicable) and makes donating to them a super simple process. We think that by adding an easy call-to-action button on an article informing you about an event, we'll lower the barrier to everyday charity for the busy modern person.
Our front end was built on NextJS, with a neumorphism-inspired design incorporating usable and contemporary UI/UX design.
We used the Tweepy library to scrape Twitter for tweets relating to an event, then used NLTK's VADER to perform sentiment analysis on each tweet to build a ratio of positive to negative tweets surrounding an event.
We also used MonkeyLearn's API to summarize text, extract keywords, and classify the aggregated articles into a topic (Health, Society, Sports, etc.). The scripts were all written in Python.
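A minimal sketch of that per-event sentiment ratio (the tweets are placeholders; the real pipeline pulled them via Tweepy):

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [  # placeholder tweets about one event
    "relief efforts are incredible, so proud of the volunteers",
    "this response has been a complete disaster",
    "donated today, hope it helps",
]
scores = [sia.polarity_scores(t)["compound"] for t in tweets]
positive = sum(s > 0.05 for s in scores)   # common VADER cutoffs
negative = sum(s < -0.05 for s in scores)
print(f"sentiment bar: {positive} positive vs {negative} negative")
```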
## What we learned
The process was super challenging as the scope of our project was way bigger than we anticipated! Between getting rate limited by Twitter and the script not running fast enough, we hit a lot of roadbumps and had to make quick decisions to cut the elements of the project we didn't or couldn't implement in time.
Overall, however, the experience was really rewarding and we had a lot of fun moving fast and breaking stuff in our 24 hours!
|
The Book Reading Bot (brb) programmatically flips through physical books, and using TTS reads the pages aloud. There are also options to download the pdf or audiobook.
I read an article on [The Spectator](http://columbiaspectator.com/) about how some low-income students cannot afford textbooks, and actually spend time at the library manually scanning the books on their phones. I realized this was a perfect opportunity for technology to help people and eliminate repetitive tasks. All you do is click start on the web app and the software and hardware do the rest!
Another use case is for young children who do not know how to read yet. Using brb, they can read Dr. Seuss alone! As kids nowadays spend too much time on television, I hope this might lure them back to children's books.
At a high level, the web app (Bootstrap) sends an image to a Flask server, which performs OCR and TTS.
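The write-up doesn't name the OCR and TTS libraries, so pytesseract and gTTS below are assumptions; this is a minimal sketch of the server's flow rather than brb's actual code:

```python
# pip install flask pillow pytesseract gTTS
from flask import Flask, request, send_file
from PIL import Image
from gtts import gTTS
import pytesseract

app = Flask(__name__)

@app.route("/read-page", methods=["POST"])
def read_page():
    page = Image.open(request.files["page"].stream)  # photo of the open book
    text = pytesseract.image_to_string(page)         # OCR the page
    gTTS(text=text).save("page.mp3")                 # synthesize narration
    return send_file("page.mp3", mimetype="audio/mpeg")

if __name__ == "__main__":
    app.run()
```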
|
## Inspiration
Oftentimes, news outlets with a political stance can misguide people with a one-sided story. We aim to tackle this issue by making it easy for people to hear alternative voices, so we can prevent political polarization and biased views.
## What it does
It allows users to paste an article URL into our web app, "unbiased?". Then, unbiased? uses a machine learning library to provide a visual diagram of whether the article is tilted toward the left wing or the right wing. Our web app will also provide an article that looks at the same topic from an alternative point of view.
## How we built it
We trained a model using TensorFlow to help determine the political bias of a news article. On top of that, we used Azure's Bing News Search API to retrieve articles from the other side of the political spectrum. We used React.js for the front-end, as well as Figma to help design the site.
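As a rough illustration of this kind of classifier (the layer sizes, vocabulary size, and toy data below are assumptions, not the team's actual model):

```python
# pip install tensorflow
import tensorflow as tf
from tensorflow.keras import layers

train_texts = ["tax cuts spur growth", "expand public healthcare now"]  # toy data
train_labels = [1, 0]  # 1 = leans right, 0 = leans left (illustrative)

vectorize = layers.TextVectorization(max_tokens=20000, output_sequence_length=300)
vectorize.adapt(train_texts)

model = tf.keras.Sequential([
    vectorize,                              # raw article text in
    layers.Embedding(20000, 32),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the article leans right
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(train_texts), tf.constant(train_labels), epochs=5)
```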
## Challenges we ran into
We had trouble training our AI to identify articles' political stances correctly. With the time constraint, it was hard to cycle the AI through enough data for it to make accurate decisions. Nevertheless, after much hard work, we managed to get it working with an ~80% accuracy rate.
Given the short timeframe, it was difficult to get the necessary data to extensively train the model. The text scraped from the articles also needed a lot of cleaning, so there was a lot of noise in the training data. Designing and training the neural network also took valuable time, time that could have been used to improve the front end.
Integrating the back end with the front end was difficult, as none of us had prior experience with TensorFlow and its related challenges in creating good models, or in generating something the front end could understand.
## Accomplishments that we're proud of
We are proud of being here and creating something that can potentially change the world. Even though the overall process is very hard, we learned a lot today from our very helpful mentors and organizers. We believe in the future, we can take what we learned today and use it to create something even bigger.
## What we learned
We learned how to deploy web-apps. Also, we learned how to effectively use TensorFlow and web scraping APIs like Diffbot to make our web-app function
## What's next for Unbiased?
We will be adding more features as well as training our AI rigorously to allow it to be even more accurate and useful
|
winning
|
Domain name: MedicationDedication.io
## Inspiration
Drugs are often taken incorrectly due to infrequent consumption, and can be misused or stolen. Furthermore, drugs can be stored at poor temperatures and go bad.
## What it does
Medication Dedication is an IoT project that aims to monitor pill usage and storage through an attachment of sensors on a pill bottle. The sensors log whenever the bottle is opened, how full the bottle is, and what temperature the bottle is at. Data is stored in a database and viewable in a phone app.
## How it's built
The pill bottle attachment is made with an IR sensor, temperature sensor, and velostat. These sensors are attached to an Arduino/Particle Photon. The Particle Photon logs the data to Azure IoT Hub which then outputs the data using a Stream Analytics job. The job sends data to an Azure SQL Database.
Our phone application is built using Flutter. The phone app is able to show an activity log and set alarms. The app accesses data through a NodeJS server running on Azure App Service. The server is connected to the Azure SQL Database that stores all the IoT data. An SMS reminder is sent through Twilio whenever an alarm is triggered.
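The server itself runs on NodeJS, but as a hedged illustration of the Twilio call behind the SMS reminder, here is an equivalent Python sketch (the credentials, phone numbers, and helper name are placeholders):

```python
# pip install twilio
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def send_pill_reminder(to_number: str, medication: str) -> None:
    """Text the patient when one of their alarms fires."""
    client.messages.create(
        to=to_number,
        from_="+15550100000",  # your Twilio number
        body=f"Reminder: time to take your {medication}.",
    )

send_pill_reminder("+15551234567", "blood pressure medication")
```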
## Challenges we ran into
Our team has had some hardware issues. It was also the first time we used Flutter, Azure, and Twilio.
## Accomplishments that we're proud of
We used a lot of new technologies!
|
## Inspiration
This project was inspired by a team member's family: his grandparents always have to take medicine but often forget, and so does his mom. Although she is young, in today's fast-paced society people constantly forget to do small things like taking their pills. We initially decided to develop a pill reminder, but then a TikTok video about a person with Parkinson's disease who couldn't pick up an individual pill from its container inspired us further. In the end, we decided to create a project that solves both problems: reminding people to take their pills and helping them easily retrieve individual pills.
## What it does
Our project, the Delta Dispenser, uses an app that communicates with a database to schedule alerts for users to take their pills, and tracks their pill information in the app. The hardware of the Delta Dispenser alerts the user when the scheduled time is reached and automatically dispenses the correct number of pills into the container.
## How we built it
The frontend of the app is made with **Flutter**, and the app communicates with a **Firebase real-time database** to store medicinal and scheduling information for the user. The physical component uses an **embedded microcontroller** called an ESP-32, which we chose for its ability to connect to WiFi and sync with the Firebase database to know when to dispense the pills.
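As an illustration of the Firebase sync the ESP-32 relies on, here is a hedged Python sketch using the Realtime Database's REST interface (the app itself is written in Flutter; the database URL and field names here are hypothetical):

```python
import requests

DB_URL = "https://delta-dispenser-demo-default-rtdb.firebaseio.com"  # placeholder

def set_schedule(user_id: str, time_hhmm: str, pill_count: int) -> None:
    """Write the next dispense time so the microcontroller can poll it."""
    requests.put(
        f"{DB_URL}/schedules/{user_id}.json",
        json={"time": time_hhmm, "pills": pill_count},
    )

def get_schedule(user_id: str) -> dict:
    """What the ESP-32 reads on each poll to know when to dispense."""
    return requests.get(f"{DB_URL}/schedules/{user_id}.json").json()

set_schedule("user42", "08:00", 2)
print(get_schedule("user42"))
```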
## Challenges we ran into
The time constraint was definitely a big challenge and we accounted for that by deciding which features were most important in emphasizing our main idea for this project. These parts include the mechanical indexer of the pills, the interface the user would interact with, and how the database would look for communication with the app and the embedded device.
## Accomplishments that we're proud of
We are most proud of how this project utilized many different aspects of engineering, from mechanical to electrical and software. Our team did a really good job at communicating throughout the design process which made integration at the end much easier.
## What we learned
During this project, we learned how to use Flutter to create a mobile app and how Firebase works. Although we only picked up a few new skills, they will be very useful in the future; most importantly, we were able to build on the skills we already had. For example, we can now develop hardware that communicates through Firebase.
## What's next for Delta Dispenser
The next steps for the Delta Dispenser include building a fully 3D printed prototype, along with the control box and hopper as shown in the CAD renders. On the software side, we would also like to add the ability for more complicated drug scheduling, while keeping the UI easy enough for anyone to set up. Having another portal that allows a doctor to directly input the information themselves is also a feature we are interested in having.
|
## Internet of Things 4 Diabetic Patient Care
## The Story Behind Our Device
One team member heard from his foot doctor the story of a diabetic patient who almost lost his foot to an untreated infection after stepping on a foreign object. Another team member came across a competitive shooter who had his lower leg amputated after an untreated foot ulcer resulted in gangrene.
A common complication in diabetic patients is diabetic neuropathy, which results in loss of sensation in the extremities. This means a cut or a blister on a foot often goes unnoticed and untreated.
Occasionally, these small cuts or blisters don't heal properly due to poor blood circulation, which exacerbates the problem and leads to further complications. These further complications can result in serious infection and possibly amputation.
We decided to make a device that helped combat this problem. We invented IoT4DPC, a device that detects abnormal muscle activity caused by either stepping on potentially dangerous objects or caused by inflammation due to swelling.
## The technology behind it
A muscle sensor attaches to the Nucleo-L496ZG board, which feeds data to an Azure IoT Hub. The IoT Hub, through Twilio, can notify the patient (or a physician, depending on the situation) via SMS that a problem has occurred and that they need to get their feet checked or come in to see the doctor.
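The prototype sends its JSON packages from the Nucleo board in C; as a hedged sketch of the same telemetry flow, here is a minimal Python version using Microsoft's `azure-iot-device` SDK (the connection string, threshold, and field names are placeholders):

```python
# pip install azure-iot-device
import json
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # device connection string

def send_emg_reading(client, emg_value, threshold=2.5):
    """Send one muscle-sensor reading, flagged if activity looks abnormal."""
    payload = {"emg": emg_value, "abnormal": emg_value > threshold}
    msg = Message(json.dumps(payload))
    msg.content_type = "application/json"
    msg.content_encoding = "utf-8"
    client.send_message(msg)

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()
send_emg_reading(client, 3.1)
client.shutdown()
```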
## Challenges
While the team was successful in prototyping data acquisition with an Arduino, we were unable to build a working prototype with the Nucleo board. We also came across serious hurdles with uploading any sensible data to the Azure IoT Hub.
## What we did accomplish
We were able to set up an Azure IoT Hub and connect the Nucleo board to send JSON packages. We were also able to acquire test data in an Excel file via the Arduino.
|
partial
|
## What Vaccify Does ?
We want to encourage vaccination and establish a safe environment for our future. We feel that the more individuals who are vaccinated, the better off everyone is, and with the increased need for vaccination, it is critical that individuals remain mindful of their surroundings. To that end, we developed a location-based app that encourages vaccination, identifies non-vaccinated areas, and keeps users aware of the vaccination status around them.
## Exclusive features
**VMap** -
This feature can identify and depict the vaccination status of a zone and list out possible cluster formation regions using geofencing.
**Vitalert** -
This intelligent feature raises awareness of an individual's surroundings, presenting the vaccination status of the people around you using geolocation.
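Vaccify itself is built in Flutter with Google Cloud APIs, but a minimal Python sketch of the proximity check a Vitalert-style alert implies might look like this (the data shape and radius are assumptions):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def unvaccinated_nearby(me, users, radius_km=0.5):
    """Users within the alert radius who are not vaccinated."""
    return [
        u for u in users
        if not u["vaccinated"]
        and haversine_km(me[0], me[1], u["lat"], u["lon"]) <= radius_km
    ]

users = [
    {"lat": 12.9716, "lon": 77.5946, "vaccinated": False},
    {"lat": 12.9780, "lon": 77.6000, "vaccinated": True},
]
print(unvaccinated_nearby((12.9720, 77.5950), users))
```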
## How We Built It ?
Vaccify was built using Flutter, its UI was designed in Adobe XD, and several APIs from Google Cloud power its features.
## Challenges we ran into ?
We were Flutter beginners, so we had some difficulties, but we learned a lot in the process. After developing our app we felt confident in Flutter, and we were extremely delighted with our result.
## What's next for Vaccify?
Vaccify was developed for local areas at first, but we have plans to expand it internationally.
|
## Inspiration
Given the current state of the COVID-19 pandemic, the recent approvals of numerous vaccines are a promising step towards returning to our normal lives. However, the fast-tracked research and development processes used to prepare these vaccines have caused many people to doubt their effectiveness and worry about potential side effects. Though this is a valid concern, there are many misconceptions about the vaccine's side effects, including rumours of microchips being implanted via the injection or of it altering your DNA. For the average person, such rumours are enough to cause distrust in the available vaccines, and many people make judgments without taking the time to fully understand the vaccine. This tool is meant to help the average person understand the COVID-19 situation and the importance of the vaccine in a concise manner.
## What it does
The web page displays COVID-19 statistics in a user-friendly interface. It also includes information regarding the COVID-19 vaccine in an effort to reduce the stigma surrounding it. Additionally, a list of relevant news articles is displayed based on the region the user is currently viewing.
## How we built it
This web page was built using a React template by Material-UI for the frontend and Express.js for the backend. Our backend is hosted on repl.it. All of the data displayed on the page comes from the Government of Canada COVID-19 data API, and Yahoo News is scraped to show relevant news articles based on the region selected by the user.
## Challenges we ran into
We ran into challenges regarding connecting our various different data fields to the front end. We were pulling data from various sources such as covid-19 data and news from all provinces/territories as well as the country as a whole. Because of this, it was difficult to create a method to quickly update our current visualizations and access the new data fields without changing the visuals that we were using. Fixing this took a lot of trial and error and we attempted several solutions but learning how to create an adaptive visual and implementing a quick and flexible backend will definitely help with future projects.
We also wanted to build an algorithm that could predict future covid-19 hotspots using current data. Unfortunately, due to time constraints, we were not able to do so but it is something we would definitely like to work toward going forward.
## Accomplishments that we're proud of
From this past weekend, we are extremely proud of our efforts and the web page our team came up with. More importantly, we achieved our goal of creating a page that de-stigmatizes the COVID-19 vaccine.
## What we learned
While integrating our application into a react theme template, we got to experience and play around with react structure and styling that none of us were familiar with from the template. As we started adding and modifying more code, we got to understand the design and the react component cycle of the theme, and got quite fluent at adding, modifying, restructuring components as we required by the time we completed our app. As this experience has definitely made us more comfortable with using themes and templates, this will become a useful skill for future projects since we will be able to do integration faster and more prepared.
## What's next for Still thinkin bout it
We hope to eventually develop a smart algorithm that can predict future COVID-19 risk using existing data. We also hope to improve the UI and add more data fields, such as visualized daily changes. The end goal of our web app is not only to eliminate misinformation surrounding the virus but also to act as an aggregator for all COVID-19-related news and information for Canadian citizens, so they can quickly learn everything they need to know about the virus without having to visit several sources.
|
**DO YOU** hate standing at the front of a line at a restaurant and not knowing what to choose? **DO YOU** want to know how restaurants are dealing with COVID-19? **DO YOU** have fat fingers and hate typing on your phone's keyboard? Then Sizzle is the perfect app for you!
## Inspiration
We wanted to create a fast way of getting important information about restaurants (COVID-19 restrictions, hours of operation, etc.). Although there are existing methods of getting this information, it isn't always kept in one place. Especially in the midst of a global pandemic, it is important to know how you can keep yourself safe. That's why we designed our app so that the COVID-19 accommodations are visible straight away. (Sort of like Shazam or Google Assistant, but with a camera and restaurants instead.)
## What it does
To use Sizzle, simply point it at any restaurant sign. An ML computer vision model applies text recognition to read the sign, and the recognized text is then fed into a Google scraper function, which returns information about the restaurant, including its COVID-19 accommodations.
## How it's built
We built Sizzle in Java, using the Jsoup library. The ML Computer vision model was built using Firebase. The app itself was built in Android Studio and also coded in Java. We used Figma to draft working designs for the app.
## Challenges
Our team members are from 3 different timezones, so it was challenging to find a time when we could all work together. Moreover, for many of us, this was our first time working extensively with Android Studio, so it was challenging to figure out some of the errors and syntax. Finally, the Jsoup library kept malfunctioning, so we had to find a way to implement it properly (despite how frustrating it became).
## Accomplishments
Our biggest accomplishment would probably be completing our project in the end. Despite not including all the features we initially wanted to, we were able to implement most of our ideas. We encountered a lot of roadblocks throughout our project (such as using the Jsoup library), but were able to overcome them which was also a big accomplishment for us.
## What I learned
Each of us took away something different from this experience. Some of us used Android Studio and coded in Java for the first time. Some of us went deeper into Machine Learning and experimented with something new. For others, it was their first time using the Jsoup library or even their first time attending a hackathon. We learned a lot about organization, teamwork, and coordination. We also learned more about Android Studio, Java, and Machine Learning.
## What's next?
Probably adding more information to the app such as the hours of operation, address, phone number, etc...
|
losing
|
# The Ultimate Water Heater
February 2018
## Authors
This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan.
## About
Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource.
Through the accruement of numerous APIs and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater: a low-cost, easy-to-install device that works in many different situations. We also empower users to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future.
Some key features we have:
* 90% energy efficiency
* An average power consumption of roughly 10 kW
* Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure
* Clean and easily understood UI for typical household users
* Incorporation of the Internet of Things for convenience of use and versatility of application
* Saving, on average, 5 gallons per shower, or over **100 million gallons of water daily**, in CA alone. \*\*\*
* Cheap cost of installation and immediate returns on investment
## Inspiration
By observing the RhoAI data dump of 2015 Californian home-appliance usage through the use of R scripts, it becomes clear that water heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends drew important conclusions: many water heaters become large consumers of gases and yet are frequently neglected, most likely due to the trouble of attaining successful installations and repairs.
So we set our eyes on a safe, cheap, and easily accessible water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process that is replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water heaters, and would eventually prove to be even better.
## How It Works
Our project essentially operates in several core parts running simultaneously:
* Arduino (101)
* Heating Mechanism
* Mobile Device Bluetooth User Interface
* Servers connecting to the IoT (and servicing via Alexa)
All of these processes repeat simultaneously.
The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity.
The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors. Designing the heating device involved heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but had to be accurate for performance reasons--Wolfram Mathematica provided inhumane assistance here. ;)
The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at an abstract level, hiding the complexity of energy analysis and power-grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed in an effective and aesthetically pleasing UI. It also analyzes current and future projections of energy consumption via the data provided by California ISO to optimally time the heating process at the swipe of a finger.
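As a toy illustration of the "heat when electricity is cheapest" decision the app derives from ISO data (the forecast numbers and window length below are made up):

```python
def cheapest_window(prices_per_hour, window_hours):
    """Index of the start of the cheapest contiguous run of hours.

    prices_per_hour: forecast $/kWh for each upcoming hour, e.g. from ISO data.
    """
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices_per_hour) - window_hours + 1):
        cost = sum(prices_per_hour[start:start + window_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start

forecast = [0.31, 0.28, 0.22, 0.18, 0.17, 0.19, 0.27, 0.35]  # illustrative only
print(cheapest_window(forecast, 2))  # -> 3: pre-heat during hours 3 and 4
```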
The Internet of Things provides even more versatility to the convenience of the application in Smart Homes and with other smart devices. The implementation of Alexa encourages the water heater as a front-leader in an evolutionary revolution for the modern age.
## Built With:
(In no particular order of importance...)
* RhoAI
* R
* Balsamiq
* C++ (Arduino 101)
* Node.js
* Tears
* HTML
* Alexa API
* Swift, Xcode
* BLE
* Buckets and Water
* Java
* RXTX (Serial Communication Library)
* Mathematica
* MatLab (assistance)
* Red Bull, Soylent
* Tetrix (for support)
* Home Depot
* Electronics Express
* Breadboard, resistors, capacitors, jumper cables
* Arduino Digital Temperature Sensor (DS18B20)
* Electric Tape, Duct Tape
* Funnel, for testing
* Excel
* Javascript
* jQuery
* Intense Sleep Deprivation
* The wonderful support of the people around us, and TreeHacks as a whole. Thank you all!
\*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2>
Special thanks to our awesome friends Michelle and Darren for providing moral support in person!
|
## Inspiration
We are two college students renting a house by ourselves with a high energy bill due to heating during Canada’s winter. The current solutions in the market are expensive AND permanent. We cannot make permanent changes to a house we rent, and we couldn’t afford them to begin with.
## What it does
Kafa is a thermostat enhancer with an easy installation that lets you remove it at any time with no assistance. There’s no need to get handy playing with electrical wires and screwdrivers. Just simply take Kafa out from the box, clip over your existing thermostat, and slide in the battery. If you switch apartments, offices, or dorm rooms, take Kafa with you. Simply clip off!
Kafa saves you money in installation fees, acquisition of hardware, and power bill. It keeps track of your usage patterns and even allows you to set up power saving mode.
## How we built it
The Kafa body was modelled using Fusion 360. The CAD models for the electronic components were sourced from Grab CAD. Everything else was modelled from scratch.
For the electronics we used an SG90 servo that we hacked, an analog-to-digital converter, a Raspberry Pi Zero, a buck converter, a temperature sensor, an RGB LED, a potentiometer, and a battery we took from a camera light. We 3D-printed the body of Kafa so that it would hold the individual components together in a compact manner, and then wired it all up together.
On the software side, Kafa is built using Docker containers, which makes it highly portable, modular, secure, and scalable. These containers run Flask web apps that serve as controllers and actuators, easily accessible from any browser-enabled device; to store data we use a container running a MySQL database.
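As a hedged sketch of what one such actuator container might look like, here is a minimal Flask endpoint that maps a requested temperature onto a servo position via pigpio (the pin number, pulse widths, and temperature range are placeholders, not Kafa's real calibration):

```python
# pip install flask pigpio
from flask import Flask, request, jsonify
import pigpio

SERVO_PIN = 18
app = Flask(__name__)
pi = pigpio.pi()  # connects to the pigpiod daemon

def temp_to_pulsewidth(temp_c, lo=15.0, hi=30.0):
    """Map a thermostat temperature onto the servo's 500-2500 us range."""
    frac = max(0.0, min(1.0, (temp_c - lo) / (hi - lo)))
    return int(500 + frac * 2000)

@app.route("/set", methods=["POST"])
def set_temperature():
    temp = float(request.json["temperature"])
    pi.set_servo_pulsewidth(SERVO_PIN, temp_to_pulsewidth(temp))
    return jsonify({"ok": True, "temperature": temp})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```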
## Challenges we ran into
The most challenging aspect of the physical design was staying true to the premise of “easy installation” by coming up with non-permanent methods of attachment to the thermostats at our home. We wanted to design something that didn’t use screws, bolts, glue, tape, etc. Designing the case to be compact whilst planning for cable management was also hard.
The most challenging part of the software development was the servo calibration which allows it to adapt to any thermostat dial. To accomplish this, we had to 'hack' the servo and solder a cable to the variable resistor in order to read its position.
## Accomplishments that we're proud of
The most rewarding aspect of the physical design was accurately predicting the behaviour of the physical components and how they would fit once inside the compact case. Foreseeing and accounting for all possible issues that would come up in manufacturing whilst still in the CAD program made for the construction of our project to run much more smoothly (mostly).
The accomplishment, with regards to software, that we are most proud of is that everything is containerized. This means that in order to replicate our setup you just need to run the docker images in the destination devices.
## What we learned
One of the most important lessons we learned is effectively communicating technical information to each other regarding our respective engineering disciplines (mechanical and computer). We also learned about the potential of IoT devices to be applied in the most simple and unforeseen ways.
## What's next for Kafa - Thermostat Enhancer
To improve Kafa in future iterations we would like to:
* Optimize circuitry to use low power, Wi-Fi enabled MCU so that battery life lasts months instead of hours
* Implement a learning algorithm so that Kafa can infer your active hours and save even more electricity
* Develop universal attachment mechanisms to fit any brand and shape of thermostat.
## Acknowledgments
* [docker-alpine-pigpiod - zinen](https://github.com/zinen/docker-alpine-pigpiod)
* [Nest Thermostat Control - Dal Hundal](https://codepen.io/dalhundal/pen/KpabZB)
* [Raspberry Pi Zero W board - Vittorinco](https://grabcad.com/library/raspberry-pi-zero-w-board-1)
* [USB Cable - 3D-2D CAD Design](https://grabcad.com/library/usb-cable-31)
* [Micro USB Plug - Yuri Malina](https://grabcad.com/library/micro-usb-plug-1)
* [5V-USB-Booster - Erick Robles](https://grabcad.com/library/5v-usb-booster-1)
* [Standard Through Hole Potentiometer (Vertical & Horizontal) - Abel Villanueva](https://grabcad.com/library/standard-through-hole-potentiometer-vertical-horizontal-1)
* [SG90 - Micro Servo 9g - Tower Pro - Matheus Frasson](https://grabcad.com/library/sg90-micro-servo-9g-tower-pro-1)
* [Volume Control Rotary Knobs - Kevin Yu](https://grabcad.com/library/volume-control-rotary-knobs-1)
* [Led RGB 5mm - Terrapon Théophile](https://grabcad.com/library/led-rgb-5mm)
* [Pin Headers single row - singlefonts](https://grabcad.com/library/pin-headers-single-row-1)
* [GY-ADS1115 - jalba](https://grabcad.com/library/gy-ads1115-1)
|
## Inspiration and what it does
As smart as our homes (or offices) become, they do not fully account for the larger patterns in the electricity grids and weather systems around them. They still waste energy cooling empty buildings, or waste money by purchasing electricity during peak periods. Our project, The COOLest hACk, solves these problems. We use sensors to detect both the ambient temperature in the room and on-body temperature. We also increase the amount of cooling when electricity prices are cheaper, which in effect uses your building as an energy storage device. These features simultaneously save you money and help the environment.
## How we built it
We built it using Particle Photons with infrared and ambient temperature sensors. These Photons also control a fan motor and LEDs, representing the air conditioning. We have a machine learning stack to forecast electricity prices. Finally, we built an iPhone app to show what's happening behind the scenes.
## Challenges we ran into
Our differential equation models for room temperature were not solvable analytically, so we used a stepwise approach. In addition, we needed to find a reliable source of time-of-day peak electricity prices.
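A minimal sketch of the stepwise approach, assuming a toy Newton's-law-of-cooling model (the coefficients are illustrative, not fitted values from our system):

```python
def step_room_temp(temp, outside, ac_on, dt=60.0, k_leak=1e-4, k_ac=2e-3):
    """One forward-Euler step of a toy room-temperature model.

    dT/dt = k_leak * (outside - temp) - (k_ac if the AC is on else 0)
    """
    dTdt = k_leak * (outside - temp) - (k_ac if ac_on else 0.0)
    return temp + dTdt * dt

temp = 26.0
for _ in range(180):  # simulate three hours in one-minute steps
    temp = step_room_temp(temp, outside=32.0, ac_on=temp > 23.0)
print(round(temp, 2))
```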
## Accomplishments that we're proud of
We're proud that we created an impactful system to reduce the energy used by the #1 energy-hungry appliance: air conditioning. Our solution has minimal costs and works through automated means.
## What we learned
We learned how to work with hardware, Photons, and Azure.
## What's next for The COOLest hACk
For the developers: sleep, at the right temperature ~:~
|
winning
|
## 💡 INSPIRATION 💡
We wanted to solve a pressing global problem. While we were going through the brainstorming phase, a member of our team read an article titled “Recognizing Fake News Now a Required Subject in California Schools”, which inspired the idea of a gamified app to discern fake news. In today’s digital age, the sheer volume of information at our fingertips can be overwhelming. With the rise of social media and the constant flow of news, it has become increasingly challenging to distinguish between credible information and misinformation. Fake news not only spreads rapidly but also has the potential to influence public opinion, incite unrest, and undermine trust in legitimate sources, ultimately threatening our democracy. We built Blindspot to solve this problem by training young adults to distinguish between fake and real news.
## ⚙️ WHAT IT DOES ⚙️
Blindspot is a game that presents the user with a series of news articles-- some articles are fake, others are real. Articles are presented one at a time, and each time, the player's goal is to determine whether the article they are reading is fake or real. As the user advances in this game, fake articles will feel increasingly real, making the game more difficult.
## 🛠️ HOW WE BUILT IT 🛠️
Our frontend is built with Next.js, React, and TypeScript. In the backend, we connected a Python Flask API with OpenAI’s GPT-4o using LangChain to generate fake articles to show to players. We also use the NewsAPI to fetch real articles to provide a mix of real and fake.
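A minimal sketch of both halves of that pipeline, assuming the `langchain-openai` package and a NewsAPI key (the prompt and helper names are illustrative):

```python
# pip install langchain-openai requests
import requests
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0.9)  # reads OPENAI_API_KEY from env

def fake_article(topic: str) -> str:
    """Generate a plausible-but-false article on a topic."""
    prompt = (
        f"Write a short, realistic-sounding news article about {topic} "
        "containing subtle factual errors, for a media-literacy game."
    )
    return llm.invoke(prompt).content

def real_headlines(api_key: str) -> list:
    """Fetch real headlines from NewsAPI to mix in with the fakes."""
    resp = requests.get(
        "https://newsapi.org/v2/top-headlines",
        params={"country": "us", "apiKey": api_key},
    )
    return [a["title"] for a in resp.json()["articles"]]
```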
## 😣 CHALLENGES WE RAN INTO 😣
It turns out, it's incredibly difficult to find a dataset of high-quality recent news articles. With the recent push in AI, massive proprietary datasets of high-quality content are heavily paywalled, and even APIs are heavily rate-limited. As such, we had to get creative to find content that is convincing.
We also found that modern LLMs are safeguarded pretty heavily, so convincing one to generate high-quality fake news was a challenge. Through much prompt engineering, we were able to get the models to generate very realistic and convincing fake news (scary!), where oftentimes even *our own team* scored below 80% accuracy!
## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉
Initially, we had several ideas that we wanted to implement, but we were able to combine them in a way that made our final idea (Blindspot) better than any of the initial ones.
Many of our team members were unfamiliar with TypeScript, but we were able to ramp up quickly enough in order to help with the frontend.
Despite the time constraints, we are also proud that we were able to successfully connect the backend to the frontend in a short amount of time.
The game is also addictively fun/hilarious to play! Our team enjoyed plenty of good laughs while building the app.
## 📚 WHAT WE LEARNED 📚
We learned to collaborate effectively as a team. Before this occasion, we were strangers, but within a day, we merged our varied ideas, allocated tasks based on individual strengths, implemented frontend and backend code, and integrated them to develop an MVP.
A few team members were new to LLM models. Through this hackathon, we discovered the remarkable potential and user-friendliness of these models in crafting exceptional products.
## ⏭️ WHATS NEXT ⏭️
Blindspot is set to revolutionize how users engage with news and enhance their media literacy. We plan to introduce daily or weekly challenges to keep users engaged and returning to the app regularly. To further involve our community, we’ll allow users to submit articles they encounter for verification and inclusion in the game, making the experience more interactive and user-driven. A multiplayer mode will enable users to compete in real-time to identify fake news, adding a competitive edge to the learning process. We’ll also provide advanced analytics, giving users detailed insights into their performance and highlighting areas for improvement. Additionally, we’ll include in-depth educational modules on media literacy, the psychology of misinformation, and fact-checking techniques, ensuring users are equipped with the knowledge to navigate the complex media landscape effectively.
|
## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
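As a hedged sketch of the analysis calls, assuming the legacy `indicoio` Python client Indico shipped at the time (exact return shapes may differ):

```python
# pip install indicoio  (legacy client)
import indicoio

indicoio.config.api_key = "YOUR_API_KEY"

text = "The senator's reckless spending plan will doom the economy."

political = indicoio.political(text)  # e.g. {'Liberal': ..., 'Conservative': ...}
sentiment = indicoio.sentiment(text)  # 0 (negative) through 1 (positive)
keywords = indicoio.keywords(text)    # e.g. {'spending': 0.8, ...}
```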
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside in favor of a specific one, to which we gradually adapted. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure, or in a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they never did before, understanding compromise, and putting the team ahead of personal views was what made this Hackathon one of the most memorable for everyone. Emotional intelligence played just as an important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
|
## Inspiration
We were inspired to make an application that checks news reliability after noticing the ever-increasing amount of misinformation in our digital world.
## What it does
Simply input the news headline into the input box and press submit. The algorithm will then classify it as reliable or unreliable.
## How we built it
We used TensorFlow with the Keras API to build a model with word embeddings in 32-dimensional space, then used a dense layer to make the final model predictions. Finally, we used Flask to connect the model with the front-end HTML and CSS code.
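A minimal sketch of that Flask-to-model wiring (the file names, the 0.5 decision threshold, and the assumption that the saved model embeds its own text-vectorization layer are ours, not the team's exact code):

```python
# pip install flask tensorflow
from flask import Flask, render_template, request
import tensorflow as tf

app = Flask(__name__)
# Assumes the saved model includes a TextVectorization layer,
# so it can take raw headline strings directly.
model = tf.keras.models.load_model("news_model.keras")

@app.route("/", methods=["GET", "POST"])
def index():
    verdict = None
    if request.method == "POST":
        headline = request.form["headline"]
        score = float(model.predict(tf.constant([headline]))[0][0])
        verdict = "reliable" if score >= 0.5 else "unreliable"
    return render_template("index.html", verdict=verdict)

if __name__ == "__main__":
    app.run(debug=True)
```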
## Challenges we ran into
The most challenging part of building the model was dealing with overfitting due to limited initial training data, which caused the model to perform well in training but not as well in testing.
## Accomplishments that we're proud of
We are most proud of using data augmentation to increase the size of our training set which resulted in the model having 97% accuracy on validation. Our team was also very organized, respectful of each other, and most importantly, we all had a really fun time!
## What we learned
We learned a lot about Flask, as our team had nearly no experience with the technology. We also learned how to integrate our machine learning model into a website!
## What's next for NewsDetectives
We will continue to make our model more accurate, and to make more intricate and interactive website designs!
|
winning
|
## Inspiration
Life is short and fast-paced, filled with fleeting moments of joy, achievements, and connection. During Hack the North, our team met so many amazing and inspiring people in such a short amount of time. It struck us how easy it is to forget these meaningful interactions and experiences as life rushes on. This inspired us to create Flashback—a VR experience designed to memorialize these cherished moments and allow users to revisit them in a deeply immersive and personalized museum.
Unlike traditional social media, which encourages constant sharing with others, Flashback offers a personal and introspective journey through your own memories. It’s designed for the individual, allowing users to relive their most cherished moments in an immersive, meaningful way.
#### A quote that resonated with us this weekend:
> Life is not measured by time. It is measured by moments. There is a limit to how much you can embrace a moment. But there is no limit to how much you can appreciate it.
Bonus: Great for ~~us~~ forgetful folks!
## What it does
Flashback is a VR experience that transforms your personal memories and achievements into interactive, immersive museum exhibits. Users can upload photos, videos, personal audio clips, and music to create unique 3D galleries, where each memory comes to life. As you walk up to an exhibit, specific music, audio clips, and captions are triggered, bringing the memory to life in a dynamic way. Memories can also be grouped into collections. Instead of just scrolling through pictures, Flashback lets you step into your memories—hear familiar voices, see cherished moments, and relive experiences in a fully immersive environment. Additionally, there is a web app where users can upload, update, and maintain their growing museum of memories. Flashback evolves with you over time, offering a place to revisit positive memories on difficult days, and preserve fleeting moments like our time at Hack the North.
## How we built it
Flashback is built with React, Node.js, JavaScript, Express.js, Convex, HTML, CSS, the Spotify API, and Material UI for the web app's front and back end. The VR experience is developed using Unity, .NET, and C#, with testing done on a Meta Quest VR headset. Our mascot Framey was drawn up by our teammate Jenn.
## Challenges we ran into
* Picking up Convex
* Designing on Figma for the first time
* Unity and C# are HARD (our team's first time making a VR project)
* Learning new tech is hard
* Merge requests on the front end are hard
* Sleep deprivation
* Figuring out how to connect and integrate the client, server, and VR
## Accomplishments that we're proud of
* Everything we made! We worked very hard!
* Adapting and persevering through a lot of roadblocks this weekend
* Creating a super cool VR experience (shoutout to Alan!!!)
* First time designer making a hi-fidelity mock up of the web app and VR user flows in Figma
* Spending time together and having fun as Hack the North 2024
## What we learned
Everyone on our team had experience in different technologies, but this weekend we each tried something new: using Convex DB integration for the first time, designing for the first time, learning C# and Unity, trying out front-end development, and creating our first VR project! Additionally, we learned that Unity is hard, and that doing research and communicating regularly about feasibility is extremely important.
## What's next for Flashback
* Integrate authentication and allow users to visit other museums!
* More customizability - users can choose their own VR assets to personalize their space and memories
* More fields for memories - multi-media, video, and personal audio upload
* More interactivity in the VR environment
|
## Inspiration
When you think of the word “nostalgia”, what’s the first thing you think of? Maybe it’s the first time you tried ice cream, or the last concert you’ve been to. It could even be the first time you’ve left the country. Although these seem vastly different and constitute unique experiences, all of these events tie back to one key component: memory. Can you imagine losing access to not only your memory, but your experiences and personal history? Currently, more than 55 million people worldwide suffer from dementia, with 10 million new cases every year. Many therapies have been devised to combat dementia, such as reminiscence therapy, which uses various stimuli to help patients recall distant memories. Inspired by this, we created Remi, a tool to help dementia patients remember what’s important to them.
## What it does
We give users the option to sign up as a dementia patient, or as an individual signing up on behalf of a patient. Contributors to a patient's profile (friends and family) can then add descriptions of any memory they have relating to the patient. After this, we use the Cohere API to piece together a personalized narration of the memory; this helps patients remember their most heartening past memories. We then use an old-fashioned-styled answering machine, created with an Arduino, and with the click of a button patients can listen to their past memories read to them by a loved one.
## How we built it
Our back-end was created using Flask, where we utilized the Kintone API to store and retrieve data from our Kintone database. It also processes the prompts from the front-end using the Cohere API in order to generate our personalized memory message, and takes in button inputs from an Arduino UNO to play the text-to-speech. Lastly, our front-end was built using ReactJS and TailwindCSS for seamless user interaction.
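As a hedged sketch of the narration step, assuming Cohere's Python client and its `generate` endpoint (the prompt, parameters, and helper name are illustrative):

```python
# pip install cohere
import cohere

co = cohere.Client("YOUR_API_KEY")

def narrate_memory(patient_name: str, memory_description: str) -> str:
    """Turn a contributor's memory description into a warm narration."""
    prompt = (
        f"Rewrite this memory as a warm story told to {patient_name} "
        f"by a loved one:\n\n{memory_description}"
    )
    response = co.generate(prompt=prompt, max_tokens=250, temperature=0.7)
    return response.generations[0].text
```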
## Challenges we ran into
As it was our first time setting up a Flask back-end and using Kintone, we ran into a couple of issues. With our back-end, we had trouble setting up our endpoints to successfully POST and GET data from our database. We also ran into trouble setting up the API token with Kintone to integrate it into our back-end. There were also several hardware limitations with the Arduino UNO, which was probably not the most suitable board for this project. The inability to receive wireless data or handle large audio files were drawbacks, but we found workarounds that would allow us to demo properly, such as using USB communication.
## Accomplishments that we're proud of
* Utilizing frontend, backend, hardware, and machine learning libraries in our project
* Delegating tasks and working as a team to put our project together
* Learning a lot of new tech and going to many workshops!
## What we learned
We learned a lot about setting up databases and web frameworks!
## What's next for Remi
Since there were many drawbacks with the hardware, future developments in the project would most likely switch to a board with higher bandwidth and WiFi capabilities, such as a Raspberry Pi. We also want to add a feature where contributors to a patient's memory library could record their voices, and our text-to-speech AI would mimic them when narrating a story. Reminiscence therapy involves more than narrating memories, so we want to add more sensory features, such as visuals, to invoke memories and nostalgia for the patient as effectively as possible. For example, contributors could submit a picture along with their memory description, which could be displayed on an LCD attached to the answering machine. On the business end, we hope to collaborate with health professionals to see how we can further help dementia patients.
|
## Inspiration
Countless individuals often search YouTube for various meditation melodies, such as "Ocean Waves," "Wind Ambiance," and "Trees Swaying." With AMPLIFY, an individual can listen to these meditation melodies fused with lo-fi elements of music generated from the environment they are in at a specific moment in time, whether they are at the beach, in the mountains, in a metropolitan area, or walking, running, cycling, or road-tripping.
## Process Of Building
Components
1. Frontend / Webpage (HTML / JavaScript): upload an image file (drag from local), encode it as a base64 payload, and send it to the backend
2. Backend (Google Cloud Platform + Node): process the file, send it to the Google Cloud Platform Vision API, and formulate the results to send to Magenta
3. Machine Learning (Python / Magenta / MusicVAE): train MusicVAE and generate meditation melodies by interpolating between various note sequences
## Challenges
Backend (Google Cloud Platform + Node)
* The Google Cloud Platform Vision API was not able to recognize the images in an accurate manner, so we instead recognized the images through detecting the differing colors in the image. For example, if the detected colors in the image were perhaps either red, orange, or other bright colors, the generated melodies would then be a bit more upbeat and cheerful. However, if the detected colors in the image were blue, purple, or other dark colors, the generated melodies would then be a bit more calming and serene.
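A minimal sketch of that color-based workaround, assuming the `google-cloud-vision` client's image-properties call (the warm/cool heuristic and mood labels are illustrative):

```python
# pip install google-cloud-vision
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def scene_mood(image_path: str) -> str:
    """Label a scene 'upbeat' or 'calming' from its dominant colors."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.image_properties(image=image)
    colors = response.image_properties_annotation.dominant_colors.colors
    # Warm (red-heavy) palettes -> upbeat; cool (blue-heavy) -> calming.
    warmth = sum((c.color.red - c.color.blue) * c.pixel_fraction for c in colors)
    return "upbeat" if warmth > 0 else "calming"
```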
Machine Learning (Python / Magenta / MusicVAE)
* MusicVAE is a hierarchical variational autoencoder that learns a summarized representation of musical qualities as a latent space, encoding a musical sequence into a latent vector that can later be decoded back into a musical sequence. We initially wanted to use the pretrained MusicVAE model, but we ended up needing to personalize it, so we trained our own MusicVAE model to generate the parts of the latent space we needed.
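A minimal sketch of the interpolation step, assuming Magenta's pretrained 2-bar melody checkpoint (`cat-mel_2bar_big`); `seq_a` and `seq_b` stand in for two input NoteSequence melodies:

```python
# pip install magenta note-seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
import note_seq

config = configs.CONFIG_MAP["cat-mel_2bar_big"]
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path="cat-mel_2bar_big.ckpt")

# seq_a / seq_b: e.g. an "upbeat" motif and a "calming" motif
blended = model.interpolate(seq_a, seq_b, num_steps=5, length=32)
note_seq.sequence_proto_to_midi_file(blended[2], "meditation.mid")
```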
|
winning
|
## Inspiration
Meet Doctor DoDo. He is a medical practitioner, and he has a PhD degree in EVERYTHING!
Dr. DoDo is here to take care of you, and make sure you do-do your homework!
Two main problems arise with virtual learning: It's hard to verify if somebody understands content without seeing their facial expressions/body language, and students tend to forget to take care of themselves, mentally and physically. At least we definitely did.
## What it does
Doctor DoDo is a Chrome extension and web-browser "pet" that will teach you about the images or text you highlight on your screen (remember his many PhDs), while keeping watch over your mood (facial expressions) and time spent studying - he'll even remind you to fix your posture (tsk tsk)!
## How we built it
With the use of:
* ChatGPT-4
* Facial Emotion recognition
* Facial Verification recognition
* Voice recognition
* Flask
* Pose recognition with MediaPipe (see the sketch after this list)
and Procreate for the art/animation.
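As noted in the list above, here is a hedged sketch of the pose-recognition piece, assuming MediaPipe's Pose solution (the slouch heuristic and threshold are illustrative, not Dr. DoDo's exact logic):

```python
# pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def is_slouching(frame_bgr, threshold=0.12):
    """Flag a slouch when the nose drops close to shoulder height."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return False
        lm = results.pose_landmarks.landmark
        nose_y = lm[mp_pose.PoseLandmark.NOSE].y
        shoulders_y = (lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y +
                       lm[mp_pose.PoseLandmark.RIGHT_SHOULDER].y) / 2
        return (shoulders_y - nose_y) < threshold  # y grows downward in MediaPipe
```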
## Challenges we ran into
1. The initial design was hard to navigate. We didn't know if we wanted him to mainly be a study buddy, include a pomodoro timer, etc.
2. Animation (do I need to say more)
3. Integration hell, backend and frontend were not connecting
## Accomplishments that we're proud of
We're proud of our creativity! But more proud of Dr. DoDo for graduating with a PhD in every branch of knowledge to ever exist.
## What we learned
1. How to integrate backend and frontend software within a limited time
2. Integrating more
3. Animation (do I need to say more again)
## What's next for Doctor DoDo
Here are some things we're adding to Dr. DoDo's future:
Complete summaries of webpages
|
## Inspiration
As college students, we can all relate to having a teacher who was not engaging enough during lectures or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves, creating better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between the various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection/analysis software. We then store everything in a database and show the data on a dashboard that the professor can access and use to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
I used a premade React foundation with Material UI to create a basic dashboard, deleting and adding certain pages as needed for our specific purpose. Since the foundation came with components pre-built, I looked into how they worked and edited them for our purpose instead of working from scratch, saving time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end, we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single-lecture summary display. This is based on our backend database setup. There is also room for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it lets the developer quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a phpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using Docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech-emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model, augmented the dataset to increase the accuracy of my results, and normalized the data so it could be seamlessly visualized in a pie chart, providing an easy integration with the database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus, including fixtures, sensors, and materials. Our team had to consider how this device would track the teacher's movements and hear the volume while not disturbing the flow of class. Currently, the main sensors utilized in this product are a microphone (to detect volume for recording and data), an NFC sensor (for card tapping), a front camera, and a tilt sensor (for vertical tilting and tracking the professor). The device also has a magnetic connector on the bottom to allow it to change from a stationary position to a mobile one. It's able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is to scan in using their school ID, then either check his lecture data or start the lecture. Overall, the professor is able to see if the device is tracking his movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks, and using Figma for the first time. I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and looked back at my UI/UX notes from the Google Coursera certification course that I'm taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. Some problems couldn't be solved with this method, as the logic was specific to our software; fortunately, those just needed time and a lot of debugging, with some help from peers and existing resources. Since React is JavaScript-based, I was also able to draw on past experience with JS and Django despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run smoothly and interact in the correct manner. I often ended up in dependency hell, and had to rethink the architecture of the whole project to avoid over-engineering it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
We achieved about 60% accuracy in detecting speech-emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis, all while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how to utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but his strengths in that area let him create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman who had his first hackathon experience and was able to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience on the website by adding more features and better style designs for the professor to interact with. In addition, we would add motion-tracking feedback so the professor gets a general idea of how they should be changing their gestures.
We would also like to integrate a student portal, gather data on student performance, and help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms.
|
## Inspiration
As students, we understand the stress that builds up in our lives. Furthermore, we know how important it is to reflect on the day and plan how to improve for tomorrow. It might be daunting to look for help from others, but wouldn't an adorable dino be the perfect creature to talk to your day about? Cuteness has been scientifically proven to increase happiness, and our cute dino will always be there to cheer you up! We want students to have a way to talk about their day, get cheered up by a cute little dino buddy, and have suggestions on what to focus on tomorrow to increase their productivity! DinoMind is your mental health dino companion to improve your mental wellbeing!
## What it does
DinoMind uses the power of ML models, LLMs, and of course, cute dinos (courtesy of DeltaHacks of course <3)!! Begin your evening by opening DinoMind and clicking the **Record** button, and tell your little dino friend all about your day! A speech-to-text model will transcribe your words, and save the journal entry in the "History" tab. We then use an LLM to summarize your day for you in easy to digest bullet points, allowing you to reflect on what you accomplished. The LLM then creates action items for tomorrow, allowing you to plan ahead and have some sweet AI-aided productivity! Finally, your dino friend gives you an encouraging message if they notice you're feeling a bit down thanks to our sentiment analysis model!
## How we built it
Cloudflare was our go-to for AI/ML models. These model types used were:
1. Text generation
2. Speech-to-text
3. Text classification (in our case, it was effectively used for sentiment analysis)
We used their AI REST API, and luckily the free plan allowed for lots of requests!
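As a rough illustration of what these calls look like, here is a minimal Python sketch against the Workers AI REST endpoint. The account ID, token, file name, and exact model slugs are placeholders to check against Cloudflare's model catalog; in the app itself the requests come from the Expo front end, but the request shape is the same:

```python
import requests

ACCOUNT_ID = "your-account-id"   # placeholder
API_TOKEN = "your-api-token"     # placeholder
BASE = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/"

def run_model(model, **kwargs):
    """POST one request to a Workers AI model and return the JSON response."""
    resp = requests.post(BASE + model,
                         headers={"Authorization": f"Bearer {API_TOKEN}"},
                         **kwargs)
    resp.raise_for_status()
    return resp.json()

# 1. Speech-to-text: send the raw audio bytes of the recording
with open("journal_entry.wav", "rb") as f:
    transcript = run_model("@cf/openai/whisper", data=f.read())

# 2. Text generation: summarize the day and draft action items
summary = run_model("@cf/meta/llama-2-7b-chat-int8",
                    json={"prompt": "Summarize this journal entry: ..."})

# 3. Text classification: rough sentiment of the entry
mood = run_model("@cf/huggingface/distilbert-sst-2-int8",
                 json={"text": "today was rough but I got through it"})
```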
Expo was the framework we used for our front-end, since we wanted some extra oomph to our react native application.
## Challenges we ran into
A small challenge was that we really, really wanted to use this year's DeltaHacks dino mascots in our application (they're just so cute!!). There weren't images of each one individually online, so we realized we could take photos of the shirts and crop the dinos out of those!!
As for the biggest challenge, that was integrating our Cloudflare requests with the front-end. We had our Cloudflare models working fully with test cases! But once we used the recording capabilities of React Native and tried sending that to our speech-to-text model, everything broke. We spent far too long adding `console.log` statements everywhere, checking the types of the variables and the data inside, hoping we'd spot the difference between the input from our test cases and the recorded data. That was easily our biggest bottleneck, because once we moved past it, we had the string data of what the user said and were able to send it to all of our Cloudflare models.
## Accomplishments that we're proud of
We are extremely proud of our brainstorming process, as this was easily one of the most enjoyable parts of the hackathon. We were able to bring our ideas from 10+ to 3, and then developed these 3 ideas until we decided that the mental health focused journaling app seemed the most impactful, especially when mental health is so important.
We are also proud of our ability to integrate multiple AI/ML models into our application, giving each one a unique and impactful purpose that leads to the betterment of the user's productivity and mental wellbeing. Furthermore, the majority of the team had never used AI/ML models in an application before, so seeing their capabilities and integrating them into a final product was extremely exciting!
Finally, our perseverance and dedication to the project carried us through all the hard times, debugging, and sleepless night (singular, because luckily for our sleep deprived bodies, this wasn't a 2 night hackathon). We are so proud to present the fruits of our labour and dedication to improving the mental health of students just like us.
## What we learned
We learned that even though every experience we've had shows us how hard integrating the back-end with the front-end can be, nothing ever makes it easier. However, your attitude towards the challenge can make dealing with it infinitely easier, and enables you to create the best product possible.
We also learned a lot about integrating different frameworks and the conflicts that can arise. For example, did you know that using Expo (and by extension, React Native) makes it impossible to use certain packages?? We wanted to use the `fs` package for our file system access, but it was impossible! Instead, we needed to use `FileSystem` from `expo-file-system` :sob:
Finally, we learned about Cloudflare and Expo since we'd never used those technologies before!
## What's next for DinoMind
One of the biggest user-friendly additions to any LLM response is streaming, and DinoMind is no different. Even ChatGPT isn't always that fast at responding, but it looks a lot faster when you see each word as it's produced! Integrating streaming into our responses would make it a more seamless experience for users as they are able to immediately see a response and read along as it is generated.
DinoMind also needs a lot of work in finding mental health resources from professionals in the field that we didn't have access to during the hackathon weekend. With mental wellbeing at the forefront of our design, we need to ensure we have professional advice to deliver the best product possible!
|
winning
|
## Inspiration
Over 50 million people suffer from epilepsy worldwide, making it one of the most common neurological diseases. With the rise in digital adoption, people who suffer from epilepsy are at risk from online videos that may trigger seizure responses. This can have adverse health effects on the lives of epileptic individuals and inhibits their interaction with technology. Often, there are warnings on videos that may trigger seizures, however, not much effort is made to resolve the problem. Our goal is clear, to provide a solution that proactively solves the problem and increases accessibility for this target group.
## What it does
We built a chrome extension that interacts with YouTube videos and provides users with a warning in advance of seizure-inducing content and applies a filter to allow people to continue watching the video.
## How we built it
We started off by building a Flask server with an open endpoint that accepts a YouTube video URL. The video is downloaded to our server using PyTube and passed to OpenCV, which computes luminance values across the video. This dataset is then pre-processed and passed to the Azure Anomaly Detector model, which flags anomalies in the luminance data. These anomalies represent portions of the YouTube video with large swings in light levels that could trigger seizures for photosensitive epileptic users. The anomaly data is then processed to determine the timestamps in the video where these events occur, and returned to a Google Chrome extension that overlays a dark filter over the video during those timestamps, preventing a potential seizure.
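A minimal sketch of the per-frame luminance pass, assuming one mean Rec. 601 luma value per frame (the PyTube download step is omitted; the capture's real frame rate is also needed to turn anomaly indices back into timestamps, which is exactly where the frame-rate bug described below crept in):

```python
import cv2

def frame_luminances(video_path):
    """Return one mean luminance value per frame, plus the video's frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Rec. 601 luma weights; OpenCV frames are ordered B, G, R
        luma = (0.114 * frame[:, :, 0] + 0.587 * frame[:, :, 1]
                + 0.299 * frame[:, :, 2])
        values.append(float(luma.mean()))
    cap.release()
    return values, fps  # the timestamp of frame i is i / fps seconds
```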
## Challenges we ran into
The first major challenge we faced was attempting to use Google Cloud as our anomaly detector for the luminance dataset, since this service required us to package the data and build a custom anomaly detection model. As we were inexperienced in this subject, we were unable to complete this, which ruled out Google Cloud and cost us precious development time. Another challenge was the frame rate of our downloaded videos: in the first iteration of the MVP, we were processing the YouTube videos at 8 frames per second while creating the video timestamps from the anomaly data at 30 frames per second. This caused the extension to overlay the seizure-warning filter at incorrect times in the video. Once we noticed we were downloading the videos at the slower frame rate, we were able to fix the issue swiftly.
## What's next for Epilepsy Safe Viewer
The first thing that we wanted to do as a team was some form of user testing on the product to determine the effectiveness of the product while also determining potential improvement areas. This would allow us to reiterate through our design process and make the product even better at solving our targeted user group’s problem. We also wanted to try applying our extension to users that face startle epilepsy by determining anomalies in the audio decibel levels of Youtube videos and normalizing the audio based on this. Lastly, we wanted to look into potential opportunities of expanding this product to other video platforms such as Netflix and TikTok, thus increasing the inclusivity and accessibility of this user group with technology.
|
## Inspiration
Osu! players often use drawing tablets instead of a normal mouse and keyboard setup because a tablet gives more precision than a mouse can provide. These tablets also provide a better way to input to devices in general: overuse of conventional keyboards and mice can lead to carpal tunnel syndrome, and they can be difficult to use for those with specific disabilities. Tablet pens provide an alternate form of HID with better ergonomics, reducing the risk of carpal tunnel. Digital artists usually draw on these digital input tablets, as mice do not give the control over input that artists need. However, tablets often come at a high cost of entry and are not easy to carry around.
## What it does
Limestone is an alternate form of tablet input, allowing you to write with a normal pen and letting computer vision do the rest. That way, you can use any flat surface as your tablet.
## How we built it
Limestone is built on top of MediaPipe, Google's neural network library. MediaPipe Hands provides a pretrained network that returns the 3D positions of 21 joints for any hands detected in a photo. This gives a lot of useful data, which could in principle be used to find the direction each finger points and derive the endpoint of the pen in the photo. To save myself some work, I created a second neural network that takes the joint data from MediaPipe and derives the 2D endpoint of the pen. This second network is extremely simple, since all the complex image processing has already been done; I used 2 1D convolutional layers and 4 hidden dense layers. After some experimentation with the formatting I was only able to create about 40 entries in a dataset, but I found a way to generate fairly accurate datasets with some work.
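A sketch of both halves, under some assumptions: the MediaPipe Hands call in the first half follows the library's Python API, while the regressor below is written in Keras with illustrative layer widths, since only the layer counts are fixed above.

```python
import cv2
import mediapipe as mp
import numpy as np
from tensorflow import keras

def joint_features(image_bgr):
    """Run MediaPipe Hands on one photo; return the 21 joints as a (21, 3) array."""
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None  # no hand detected in this frame
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32)

# 2 Conv1D layers + 4 hidden dense layers -> 2D pen-tip coordinate
regressor = keras.Sequential([
    keras.layers.Input(shape=(21, 3)),
    keras.layers.Conv1D(32, 3, activation="relu"),
    keras.layers.Conv1D(32, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(2),  # (x, y) endpoint of the pen on the surface
])
regressor.compile(optimizer="adam", loss="mae")
```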
## Dataset Creation
I created a small python script that marks small dots on your screen for accurate spacing. I could then place my pen on the dot, take a photo, and enter the coordinates of the point as the label.
## Challenges we ran into
It took a while to tune the hyperparameters of the network. Fortunately, due to its small size it was not too hard to get it into a configuration that could train and improve. It still doesn't perform as well as I would like, but due to time constraints I couldn't experiment further. The mean absolute error loss of the final model, trained for 1000 epochs, was around 0.0015.
Unfortunately, the model was very overtrained. The dataset was nowhere near large enough. Adding noise probably could have helped reduce overtraining, but I doubt by much. There just wasn't anywhere near enough data, but the framework is there.
## What's Next
If this project is to be continued, the model architecture would have to be tuned much more, and the dataset expanded to at least a few hundred entries. Adding noise would also definitely help with the variance of the dataset. There is still a lot of work to be done on limestone, but the current code at least provides some structure and a proof of concept.
|
## Inspiration
Recently, we came across a BBC article reporting that a teenager having a seizure was saved by an online gamer 5,000 miles away in Texas, who alerted medical services. Surprisingly, the teenager's parents weren't aware of his condition despite being in the same home.
## What it does
Using remote photoplethysmography to measure the user's pulse rate through their webcam, and alerting a friend/guardian via SMS if major irregularities are detected.
## How we built it
We created a React app that uses some OpenCV libraries to control the webcam and detect the user's face, from which we estimate the heart rate. We deployed onto Google Cloud and made use of StdLib to handle sending SMS messages.
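The web app does this in the browser, but the signal-processing idea behind remote photoplethysmography is easy to sketch in Python: average the green channel over the detected face region in each frame, then read the heart rate off the dominant frequency. The ROI here is assumed to come from a face detector such as OpenCV's Haar cascades.

```python
import numpy as np

def estimate_bpm(frames, fps, roi):
    """Very rough rPPG: mean green value of a face ROI per frame, then FFT peak."""
    x, y, w, h = roi  # face box from a detector, e.g. cv2.CascadeClassifier
    signal = np.array([frame[y:y+h, x:x+w, 1].mean() for frame in frames])
    signal = signal - signal.mean()                    # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz per FFT bin
    spectrum = np.abs(np.fft.rfft(signal))
    # keep only plausible heart rates, 0.75 to 4 Hz (45 to 240 bpm)
    band = (freqs >= 0.75) & (freqs <= 4.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0
```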
## Challenges we ran into
Java > Javascript
## Accomplishments that we're proud of
Cloud deployment, making a React app, using Stdlib, using Figma.
## What we learned
## What's next for ThumpTech
|
winning
|
## Inspiration
Oftentimes, students are misguided by information from non-credible sources and are exposed to the risk of getting scammed on subleases and purchases. To solve these underlying problems, we decided to create a platform targeting college students as the only audience.
## What it does
EPPEY is an anonymous community where college students can freely share their opinions and helpful information with other students at their school, sell or buy items on a school-exclusive marketplace, get recommendations for school courses, and use many other functions permitted ONLY within their school community.
[](https://github.com/hwuiwon/eppey-yhack)
## How we built it
We used React Native and Expo for the frontend and used Google Cloud Platform (Identity Platform, API Gateway, Functions, SQL, and Firebase Cloud Messaging) for the backend. Look at the image below for an overall diagram of our architecture.
[](https://github.com/hwuiwon/eppey-yhack)
## Challenges we ran into
We couldn't implement all of the features we wanted our platform to have due to time constraints. Otherwise, all of our development phases went smoothly as expected.
## Accomplishments that we're proud of
In under 36 hours, we succeeded in building a working product that can be released to the public, implementing most of the features we initially planned. By building a serverless architecture with Google Cloud services, our product is easily scalable and cost-efficient, as we get billed only on our actual usage.
## What's next for Eppey
We will keep scaling our platform and make it into a real business.
|
## Inspiration
Students entering University are often faced with many financial burdens that prevent them from reaching their highest levels of success. Many students succumb to the increased costs and avoid signing up for or receiving benefits that hinder them in their educational path. We believe that students should not face these difficulties and should have an easy way to allow people to support their journey. Current solutions such as creating a goFundMe or wishlist of products often fail to have their desired impact since students cannot fully explain their backstory or motivation. Donors often have no way to hold the student accountable to success and are completely in the darkness about how their funds are used. We aim to solve these issues by providing a transparent portal for students to list the products they need for their University and be able to receive support from generous benefactors. Donors have the peace of mind that their funds are directly being used to procure resources that will benefit the student and not have to worry about fund misappropriation. Our end goal is to create a trackable portal for students that will maximize their open opportunities.
## What it does
Our application creates a portal where students can create a custom list of items they require to be able to attend university. Upon entering our site, donors can browse through the list of items and choose how they wish to empower the students. They can select an item to learn more about it. Finally, they are prompted to pay for it, which takes the user's information and passes it to Standard Library, which makes Stripe API calls to charge the payment.
## How I built it
We used Vue.js, an innovative framework that combines multiple aspects of web development, such as templates, scripts, and designs, into single files. Without prior experience in this particular framework, it involved a steep learning curve for our teammates, but was made easier by how the framework allows easy importing of libraries and components. To execute our API calls, we decided to use Standard Library, for several reasons. The primary reason was the ability to encapsulate multiple API calls and functions into a new, simplified API call: we were able to assign redundant variables in advance and only pass in relevant information. The second reason was security. Adding API calls directly to our codebase would have exposed API keys and sensitive data; by wrapping these calls, we secured our information and prevented malicious use of the code even if inspected. Within Standard Library, we used their latest Autocode feature to develop supported API calls that send text messages upon payment for a product. Furthermore, we designed custom actions that create invoice items, bundle them together into an invoice, and finally bill a customer. These were carried out with the Stripe API and allow for quick, easy billing of users. Together, this system lets students receive immediate confirmation for products purchased for them while the donor is charged automatically in a secure manner.
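Underneath the wrapper, the Stripe flow we composed looks roughly like this (sketched here directly with the `stripe` Python library and placeholder keys and IDs; in our setup these calls live behind the wrapped Standard Library endpoint so the secret key is never exposed to the client):

```python
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key, kept server-side

def bill_donor(customer_id, items):
    """Create invoice items, bundle them into a single invoice, then charge it."""
    for description, amount_cents in items:
        stripe.InvoiceItem.create(
            customer=customer_id,
            amount=amount_cents,       # smallest currency unit, e.g. cents
            currency="usd",
            description=description,
        )
    # Invoice.create sweeps the customer's pending invoice items into one invoice
    invoice = stripe.Invoice.create(customer=customer_id)
    return stripe.Invoice.pay(invoice.id)

# e.g. a donor funding a textbook and lab goggles for one student
bill_donor("cus_PLACEHOLDER", [("Chemistry textbook", 8999), ("Lab goggles", 1499)])
```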
## Challenges I ran into
Vue.js was a completely new framework to us, and we were unfamiliar with how to use it. The framework differs from typical web development by combining the various parts of web development into one file. Another challenge we ran into was deploying to Google App Engine; there was not much documentation on how to deploy a Vue project.
## Accomplishments that I'm proud of
We're proud of having learned how to use Standard Library and of the many benefits of using it. We managed to secure our API calls and not expose any vital information in the process.
## What I learned
We learned how to use Vue.js and hope to use it in future projects. We also worked in a team where all the teammates met each other on the day of the hackathon, and learned how to quickly understand each other's skills and become effective at finishing a project.
## What's next for Empower Me
Creating a way to keep track and hold students accountable. Add loading bars and progress trackers as well as dynamic changing of the site as different payments are processed.
|
## Inspiration
I've always been inspired by the notion that even as just **one person** you can make a difference. I really took this to heart at DeltaHacks in my attempt to individually create a product that could help people struggling with their mental health by providing **actionable and well-studied techniques** in a digestible little Android app. My background in neuroscience and research in addiction medicine have shown me the incredible need for more accessible mental health tools, as well as the power of simple but elegant solutions to make mental health more approachable. I chose to employ a technique used in Cognitive Behavioral Therapy (CBT), one of (if not the) most well-studied mental health interventions in psychological and medical research. This technique is called automatic negative thought (ANT) records.
Central to CBT is the principle that psychological problems are based, in part, on faulty/unhelpful thinking and behavior patterns. People suffering from psychological problems can learn better ways of coping with them, thereby relieving their symptoms and becoming more effective in their lives.
CBT treatment often involves efforts to change thinking patterns and challenge distorted thinking, thereby enhancing problem-solving and allowing individuals to feel empowered to improve their mental health. CBT automatic negative thought (ANT) records and CBT thought challenging records are widely used by mental health workers to provide a structured way for patients to keep track of their automatic negative thinking and challenge these thoughts to approach their life with greater objectivity and fairness to their well-being.
See more about the widely studied Cognitive Behavioral Therapy at this American Psycological Association link: [link](https://www.apa.org/ptsd-guideline/patients-and-families/cognitive-behavioral)
Given the app's focus on finding objectivity in a sea of negative thinking, I really wanted the UI to be simple and direct. This led me to take heavy inspiration from a familiar and nostalgic brand recognized for its bold simplicity, objectivity, and elegance: "noname". [link](https://www.noname.ca/)
This is how I arrived at **noANTs** - i.e., no (more) automatic negative thoughts
## What it does
**noANTs** is a *simple and elegant* solution to tracking and challenging automatic negative thoughts (ANTs). It combines worksheets from research and clinical practice into a more modern Android application to encourage accessibility of automatic negative thought tracking.
See the McGill worksheet, one of many resources which informed some of the questions in the app: [link](https://www.mcgill.ca/counselling/files/counselling/thought_record_sheet_0.pdf)
## How I built it
I really wanted to build something that many people would be able to access and an Android application just made the most sense for something where you may need to track your thoughts on the bus, at school, at work or at home.
I challenged myself to utilize the newest technologies Android has to offer, building the app entirely in Jetpack Compose. I had some familiarity using the older Fragment-based navigation in the past but I really wanted to learn how to utilize the Compose Navigation and I can excitedly say I implemented it successfully.
I also used Room, a data persistence library which provided an abstraction layer for the SQLite database I needed to store the thought records which the user generates.
## Challenges I ran into
This is my first ever hackathon, and I wanted to challenge myself to build a project alone to truly test my limits in a time crunch. I surely tested them! Designing this app with strict adherence to noname's branding meant I needed to get creative, making many custom components from scratch to fit the UI style I was going for. This made even ostensibly simple tasks, like creating a slider, incredibly difficult, but rewarding in the end.
I also had far loftier goals for how much I wanted to accomplish, with aspirations of creating a detailed progress screen, an export function to share with a therapist/mental-health support worker, editing and deleting, and more. I am nevertheless incredibly proud to showcase a functional app that I truly believe could make a significant difference in people's lives, and I learned to prioritize creating an MVP, which I would love to continue building upon in the future.
## Accomplishments that I'm proud of
I am so proud of the hours of work I put into something I can truly say I am passionate about. There are few things I think should be valued more than an individual's mental health, and knowing that my contribution could make a difference to someone struggling with unhelpful/negative thinking patterns, which I myself often struggle with, makes the sleep deprivation and hours of banging my head against the keyboard eternally worthwhile.
## What I learned
Being under a significant time crunch for DeltaHacks challenged me to be as frugal as possible with my time and design strategies. What I found most valuable about the time crunch, my inexperience in software development, and working solo was that it forced me to come up with the simplest possible solution to a real problem. I think this mentality should be adopted more often, especially in tech. There is no doubt a place for, and an incredible allure to, deeply complex solutions with tons of engineers and technologies, but being forced to innovate under constraints like mine reminded me of the work even one person can do to drive positive change.
## What's next for noANTs
I have countless ideas on how to improve the app to be more accessible and helpful to everyone. This would start with the lofty goals described in the challenges section, but I would also love to extend this app to iOS users. I'm itching to learn cross-platform tools like KMM and React Native, and I think this would be a welcome challenge to do so.
|
losing
|
Shashank Ojha, Sabrina Button, Abdellah Ghassel, Joshua Gonzales
#

"Reduce Reuse Recoin"
## Theme Covered:
The themes covered in this project include post-pandemic restoration for the environment, small businesses, and personal finance! The app uses an extensively trained AI system to detect trash and sort it into the proper bin from your smartphone. Users are incentivized to use the app and help restore the environment through the opportunity to earn points, redeemable in partnering stores.
## Problem Statement:
As our actions continue to damage the environment, it is important that we invest in solutions that help restore our communities through more sustainable practices. The average person creates over 4 pounds of trash a day, and the EPA has found that over 75% of the waste we create is recyclable. Because garbage-sorting rules vary so much from town to town, students reportedly find it difficult to sort garbage accurately, causing a significant misplacement of waste.
Our passion to make our community globally and locally more sustainable has fueled us to use artificial intelligence to develop an app that not only makes sorting garbage as easy as using Snapchat, but also rewards individuals for sorting their garbage properly.
For this reason, we would like to introduce Recoin. This intuitive app allows a person to scan any product and easily find the bin the trash belongs in, based on their location. Furthermore, if they attempt to sell their product or use our app, they will earn points redeemable in partnering stores that advocate for the environment. The more the user uses the app, the more points they receive, resulting in better items to redeem in stores. With this app we will not only help restore the environment, but also increase sales in small businesses that struggled to recover during the pandemic.
## About the App:
### Incentive Breakdown:

Please note that these figures are estimates for potential benefit packages and are not final.
We are proposing a $1 discount for participating small businesses when 100 coffee/drink cups are returned to participating restaurants. This will be easy for small companies to uphold financially, while providing a motivation for individuals to use our scanner.
Packaging costs Amazon around $0.50 to $2 per package, so we are proposing that Amazon provide a $15 gift card per 100 packages returned to Amazon. As those 100 packages can cost from $50 to $200, this incentive will save Amazon 5 to 100 times the amount in resources, while providing positive public perception for reusing.
As recycling plastic for 3D filament is an up-and-coming technology that can revolutionize environment sustainability, we would like to create a system where providing materials for such causes can give the individuals benefits.
Lastly, as metals become more valuable, we hope to provide recyclable metals to companies to reduce their expenses through our platform.
The next step in this endeavor will be to provide some sort of incentive for individuals who turn in batteries and electronics as well.
## User Interface:

## Technological Specifics and Next Steps:

### Frontend

We used React.js to develop components for the webcam footage and capturing screenshots. It was also utilized to create the rest of the overall UI design.
### Backend
#### Waste Detection AI:

Using PyTorch, we built on open-source trash-detection software and data originally developed by IamAbhinav03. The system uses over 2500 images to train, test, and validate the model. To improve it, we increased the number of epochs (complete passes over the training data) from 5 to 8, which raised accuracy by 4% over the original system. We also set the train/validation/test split to 70%, 10%, and 20% respectively, as more prominent AI studies have found this distribution to yield the best results.
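Assuming `dataset` is the ~2500-image collection loaded as a PyTorch dataset, that split translates to roughly:

```python
import torch
from torch.utils.data import random_split

n = len(dataset)                       # ~2500 labeled trash images
n_train, n_val = int(0.7 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset,
    [n_train, n_val, n - n_train - n_val],        # 70% / 10% / 20%
    generator=torch.Generator().manual_seed(42),  # reproducible shuffling
)
```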
Currently, the system is estimated to have 94% accuracy, but in the future we plan on using reinforcement learning during beta testing to continuously improve our algorithm. Reinforcement learning lets the system learn from user corrections, making the data more accurate, so the AI becomes more precise as the app gains popularity.
A Flask server is used to make contact with the waste detection neural network: an image is sent from the front end as a POST request, the Flask server turns it into a tensor and runs it through the neural net, then sends the algorithm's response back to the front end. This response is the classification of the waste as either cardboard, glass, plastic, metal, paper, or trash.
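A minimal sketch of that endpoint, assuming the screenshot arrives as a base64 string and the trained classifier is saved to disk (the path, input size, and response shape are illustrative):

```python
import base64
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

model = torch.load("recoin_model.pt")  # placeholder path to the trained net
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),     # assumed input size
    transforms.ToTensor(),
])

@app.route("/classify", methods=["POST"])
def classify():
    """Decode the posted screenshot, run the net, return the predicted bin."""
    img_bytes = base64.b64decode(request.json["image"])
    image = Image.open(io.BytesIO(img_bytes)).convert("RGB")
    tensor = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        pred = model(tensor).argmax(dim=1).item()
    return jsonify({"label": CLASSES[pred]})
```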
#### Possible next steps:
By using Matbox API and the Google Suite/API, we will be creating maps to find recycling locations and an extensively thorough Recoin currency system that can easily be transferred to real time money for consumers and businesses (as shown in the user interface above).
## Stakeholders:
After the completion of this project, we intend to continue pursuing the app to improve our communities' sustainability. Looking at the demographics of our own school, we know that students will be interested in this app, not only for its convenience but also for the reward system. Local cafes and Starbucks already have initiatives to improve public perception and support the environment (e.g., using paper straws and cups), so supporting this new endeavor will be of interest to them. As branding is everything in business, a positive public perception will increase sales.

## Amazon:
As Amazon continues to be the leading online marketplace, more packages will continue to be made, which can be detrimental to the world's limited resources. We will be training the AI to recognize packages that are Amazon-based. With such training, we would like to implement a system where packaging can be sent back to Amazon to be reused for credit. This will allow Amazon to form a more environmentally friendly corporate image, while also saving on resources.
## Small Businesses:
As the pandemic caused a significant decline in small business revenue, we intend to mainly partner with small businesses in this project. The software will also help increase small business sales: by supporting the app, stores gain a positive public image that makes students more inclined to visit, and the added discounts will attract more customers. In the future, we wish to train the AI to also detect trash of value (e.g., broken smartphones, precious metals), so that consumers can sell it in a bundle to local companies that can benefit from the material (e.g., 3D-printing companies that convert used plastic to filament).
## Timeline:
The following timeline will be used to ensure that our project will be on the market as soon as possible:

## About the Team:
We are first- and second-year students from Queen's University who are very passionate about sustainability and designing innovative solutions to modern-day problems. We all have the mindset to give any task our all and obtain the best results. We have a diverse skill set on the team, and throughout the hackathon we utilized it to work efficiently. We are first-time hackathoners, so even though we all had respective experience in our own fields, this whole experience was very new and educationally rewarding for us. We would like to thank the organizers and mentors for all their support and for organizing the event.
## Code References
• <https://medium.datadriveninvestor.com/deploy-your-pytorch-model-to-production-f69460192217>
• <https://narainsreehith.medium.com/upload-image-video-to-flask-backend-from-react-native-app-expo-app-1aac5653d344>
• <https://pytorch.org/tutorials/beginner/saving_loading_models.html>
• <https://pytorch.org/tutorials/intermediate/flask_rest_api_tutorial.html>
• <https://pytorch.org/get-started/locally/>
• <https://www.kdnuggets.com/2019/03/deploy-pytorch-model-production.html>
## References for Information
• <https://www.rubicon.com/blog/trash-reason-statistics-facts/>
• <https://www.dosomething.org/us/facts/11-facts-about-recycling>
• <https://www.forbes.com/sites/forbesagencycouncil/2016/10/31/why-brand-image-matters-more-than-you-think/?sh=6a4b462e10b8>
• <https://www.channelreply.com/blog/view/ebay-amazon-packaging-costs>
|
## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw away recyclable items in the trash. Additionally, the sheer amount of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recycling objects with a machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle.
## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid toward one compartment or the other depending on whether the AI model judges the object recyclable. Once the object slides into the compartment, the lid re-aligns itself and prepares for the next item. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less, without them doing anything different.
## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a Servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64, and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and feeds it to a TensorFlow convolutional neural network to identify whether the object is recyclable or not. This result is stored in an SQLite database and returned to the hardware. Based on the AI model's analysis, the Servo motor on the Raspberry Pi flips the lid one way or the other, letting the waste item slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind CSS, and React. This interface gives the user insight into their current recycling statistics and how they compare to nationwide recycling averages.
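The Raspberry Pi side of that round trip can be sketched like this (the server URL, trigger wiring, and response shape are assumptions; the ultrasonic polling and Servo control are omitted):

```python
import base64

import cv2
import requests

SERVER = "http://192.168.1.10:5000/classify"  # placeholder backend address

def classify_item_on_lid():
    """Called when the ultrasonic sensor reports an object resting on the lid."""
    cam = cv2.VideoCapture(0)            # the Logitech USB webcam
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("camera read failed")
    _, jpeg = cv2.imencode(".jpg", frame)
    payload = base64.b64encode(jpeg.tobytes()).decode("ascii")
    resp = requests.post(SERVER, json={"image": payload}, timeout=10)
    return resp.json()["recyclable"]     # True -> tilt the lid toward recycling
```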
## Challenges we ran into
The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to be spun by a single Servo motor and to get the Logitech camera propped up for a top-down view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources, and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.
## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies that were new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help keep our environment clean.
## What we learned
First and foremost, we learned just how big a problem under-recycling is in America and throughout the world, and how important recycling is. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to recycling. The hackathon motivated us to learn a lot more about our respective technologies: whether it was new errors or desired functions, new concepts and ideas had to be picked up to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.
## What's next for RecyclAIble
RecyclAIble has a lot of potential as far as development goes. Its AI can be improved with further generations of images of more varied trash items, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, expanding on the existing information and capabilities. And we're already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware/sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come.
|
## Inspiration
Toronto suffers from a 26% contamination rate in its recycling; that is, nearly a quarter of everything put in recycling does not actually end up being recycled. Contamination happens when non-recyclable materials or garbage end up in the recycling system, from leftover food in containers to non-recyclable plastics to clothing and propane tanks.
To address this problem, we created sort-it.
## What it does
The app helps everyone do their part in working towards a greener planet. Users open the app and take a photo of the trash they are unsure about, and the app tells them where it goes! In addition to this feature, users can enter their phone number and the day of the week they wish to be reminded about garbage days.
## How we built it
Android app built with Java and Firebase ML Kit; webserver with Go and Firebase Firestore
## Challenges we ran into
Successfully implementing ML Kit
## Accomplishments that we're proud of
Having a finished product at the end of our first hackathon!
|
winning
|
## Inspiration
Currently, the insurance claims process is quite labour-intensive. A person has to investigate the car to approve or deny a claim, so we aim to alleviate this cumbersome process and make it smooth and easy for policyholders.
## What it does
Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout.
## How we built it
The frontend is built with just static HTML, CSS, and JavaScript. We used Materialize CSS to realize some of our UI mocks created in Figma. Conveniently, we also created our own "state machine" to make our web app more responsive.
## Challenges we ran into
> I've never done any machine learning before, let alone trying to create a model for a hackathon project. I definitely took quite a bit of time to understand some of the concepts in this field. *-Jerry*
## Accomplishments that we're proud of
> This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry*
## What we learned
> Attempting to do a challenge with very little description of what the challenge is actually asking for is like being a toddler stranded on an island. *-Jerry*
## What's next for Quick Quote
Things that are on our roadmap to improve Quick Quote:
* Apply Google Analytics to track users' movement and collect feedback to enhance our UI.
* Enhance our neural network model to enrich our knowledge base.
* Train our model with more evaluation data to give it more depth
* Include ads (mostly from auto companies).
|
## Inspiration
Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them.
## What it does
CarChart is an eco-focused consumer tool designed to help a consumer make an informed decision when purchasing a car, while also measuring the environmental impact they would incur as a result of the purchase. With this tool, a customer can make an auto purchase that works both for them and for the environment. The tool lets you search by any combination of ranges, including year, price, seats, engine power, CO2 emissions, body type, and fuel type. In addition, it provides a nice visualization so the consumer can compare the pros and cons of two different variables on a graph.
## How we built it
We started out by web scraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python, and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with JavaScript and p5.js (along with CSS) powering the graphics. The Django site is also hosted on Google Cloud.
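The scraping-and-storage loop looked roughly like the following (the page selectors, table schema, and connection details here are illustrative, not the real ones):

```python
import pymysql
import requests
from bs4 import BeautifulSoup

def scrape_listing(url):
    """Pull one car's datapoints out of a listing page (selectors are made up)."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return {
        "year": int(soup.select_one(".spec-year").text),
        "price": float(soup.select_one(".spec-price").text.strip("$").replace(",", "")),
        "co2": float(soup.select_one(".spec-co2").text),
    }

# placeholder Cloud SQL connection details
conn = pymysql.connect(host="CLOUD_SQL_IP", user="app", password="...", db="carchart")
with conn.cursor() as cur:
    car = scrape_listing("https://example.com/listing/123")
    cur.execute(
        "INSERT INTO cars (year, price, co2_emissions) VALUES (%s, %s, %s)",
        (car["year"], car["price"], car["co2"]),
    )
conn.commit()
```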
## Challenges we ran into
Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way.
Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code.
The last challenge that we ran into was getting our front-end to play nicely with our backend code.
## Accomplishments that we're proud of
We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud that we were able to accomplish 90+% of the goal we set out to achieve without the use of any APIs.
## What we learned
Our collaboration on this project necessitated a comprehensive review of Git and the shared pain of integrating many moving parts into the same project. We learned how to utilize Google's App Engine and Google's managed MySQL server.
## What's next for CarChart
We would like to expand the front-end to have even more functionality.
Some of the features that we would like to include would be:
* Letting users pick lists of cars that they are interested in and compare them
* Displaying each datapoint with an image of the car
* Adding even more dimensions that the user is allowed to search by
## Check the Project out here!!
<https://pennapps-xx-252216.appspot.com/>
|
# Why Universitium?
Looking back to when we applied to colleges, we spent a lot of time searching for universities and keeping track of the due dates and essay submissions for each one, often spread across different platforms. It was time-consuming to look over many websites for college information, and especially tiring to apply to colleges on various platforms, not to mention keeping track of which essays needed to be submitted and the application due dates. Our team wanted to simplify this application process and have everything covered for future generations.
Universitium aims to reduce the time spent in the college search and make tracking the application process easier. It is different from other college application platforms as it:
* provides college information that students consider when applying for colleges
* creates a to-do list of the requirements for each university the user is applying to
* recommends colleges to users based on the colleges they have already applied to.
# How We Built it
First, we designed the full stack architecture of Universitium. It is divided into three main components: the frontend, the backend, and the machine learning model. The frontend is connected to the backend via a REST API, and the backend will fetch data from the database and the machine learning model. We also designed the website UI in Figma before implementing the code.
## Frontend
Our UI design was done in Figma, and we implemented the design in React.js, which renders the HTML and CSS written exclusively by our team. We used the axios library to fetch data from our REST API and show it on the website.
## Backend
We used Flask and Python for our web backend. We scraped thousands of rows of university data from US News and stored it in MySQL, alongside user profile information; the desired information is then retrieved through Flask and sent to the frontend to be rendered and shown on the website.
## Machine Learning Model
Our machine learning model applies collaborative filtering, using college information from US News as training data to recommend universities to users, taking into account over 20 different factors from academic strength to student experience.
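One simple way to realize a filtering recommender over those factors is a similarity score between schools, sketched here with scikit-learn (the feature file, scaling, and scoring rule are all illustrative assumptions, not the production model):

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.preprocessing import StandardScaler

# features: one row per university, ~20 scraped factors (placeholder file)
features = np.load("usnews_factors.npy")
X = StandardScaler().fit_transform(features)   # put all factors on one scale
sim = cosine_similarity(X)                     # school-by-school similarity

def recommend(applied_idx, k=5):
    """Score every school by mean similarity to the user's applied schools."""
    scores = sim[applied_idx].mean(axis=0)
    scores[list(applied_idx)] = -np.inf        # don't re-suggest applied schools
    return np.argsort(scores)[::-1][:k]        # indices of the top-k matches
```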
# Challenges We Faced
We faced a lot of issues connecting the backend to the frontend. With our limited experience with axios, it took us a long time to fetch data from the endpoints in our Flask application.
We also ran into challenges retrieving the output of the machine learning model.
# More to Universitium
As of now, Universitium is focused on helping an individual through the process of searching and applying for colleges, which means there are currently no ways for users to communicate on our website. In the future, we will build the user base by introducing user interaction. We plan on adding the role of coaches, where users who are done with their college applications can connect with students currently in the process and provide helpful advice.
Another important feature we will add is information about financial aid and scholarships. We understand that tuition is an important factor that students consider while applying to college, so we wanted our users to have access to this important information as well.
In terms of the structure of Universitium, another feature to improve our website will be to incorporate OAuth in our login authentication process. We will also use the Google API to allow users to edit their essays directly on our website. Other improvements include hosting the database and the website on a server.
# What We Learned
We had a lot of fun building Universitium from scratch. We had ambitious goals for our website but had to cut down many features due to time constraints. It was important for us to have a functioning website before we incorporate our fascinating ideas such as using machine learning to recommend colleges. Nonetheless, after the struggles, we have more confidence working on the full stack implementation and the connection between frontend and backend.
|
winning
|
## Inspiration
Our inspiration comes from many of our own experiences with dealing with mental health and self-care, as well as from those around us. We know what it's like to lose track of self-care, especially in our current environment, and wanted to create a digital companion that could help us in our journey of understanding our thoughts and feelings. We were inspired to create an easily accessible space where users could feel safe in confiding in their mood and check-in to see how they're feeling, but also receive encouraging messages throughout the day.
## What it does
Carepanion allows users an easily accessible space to check-in on their own wellbeing and gently brings awareness to self-care activities using encouraging push notifications. With Carepanion, users are able to check-in with their personal companion and log their wellbeing and self-care for the day, such as their mood, water and medication consumption, amount of exercise and amount of sleep. Users are also able to view their activity for each day and visualize the different states of their wellbeing during different periods of time. Because it is especially easy for people to neglect their own basic needs when going through a difficult time, Carepanion sends periodic notifications to the user with messages of encouragement and assurance as well as gentle reminders for the user to take care of themselves and to check-in.
## How we built it
We built our project through the collective use of Figma, React Native, Expo and Git. We first used Figma to prototype and wireframe our application. We then developed our project in Javascript using React Native and the Expo platform. For version control we used Git and Github.
## Challenges we ran into
Some challenges we ran into included transferring our React knowledge into React Native knowledge, as well as handling package managers with Node.js. With most of our team having working knowledge of React.js but being completely new to React Native, we found that while some features of React carried over easily, others did not, and we had a tricky time figuring out which ones did and didn't. One example of this is passing props; we spent a lot of time researching ways to pass props in React Native. We also had a difficult time resolving the package files in our application using Node.js, as our team members all used different versions of Node. This meant that some packages were not compatible with certain versions of Node, and some members had difficulty installing specific packages in the application. Luckily, we figured out that if we all upgraded our versions, we were able to successfully install everything. Ultimately, we were able to overcome our challenges and learn a lot from the experience.
## Accomplishments that we're proud of
Our team is proud of the fact that we were able to produce an application from the ground up, from the design process to a working prototype. We are excited that we got to learn a new style of development, as most of us were new to mobile development. We are also proud that we were able to pick up a new framework, React Native & Expo, and create an application with it, despite not having previous experience.
## What we learned
Most of our team was new to React Native, mobile development, as well as UI/UX design. We wanted to challenge ourselves by creating a functioning mobile app from beginning to end, starting with the UI/UX design and finishing with a full-fledged application. During this process, we learned a lot about the design and development process, as well as our capabilities in creating an application within a short time frame.
We began by learning how to use Figma to develop design prototypes that would later help us in determining the overall look and feel of our app, as well as the different screens the user would experience and the components that they would have to interact with. We learned about UX, and how to design a flow that would give the user the smoothest experience. Then, we learned how basics of React Native, and integrated our knowledge of React into the learning process. We were able to pick it up quickly, and use the framework in conjunction with Expo (a platform for creating mobile apps) to create a working prototype of our idea.
## What's next for Carepanion
While we were nearing the end of work on this project during the allotted hackathon time, we thought of several ways we could expand and add to Carepanion that we did not have enough time to get to. In the future, we plan on continuing to develop the UI and functionality, ideas include customizable check-in and calendar options, expanding the bank of messages and notifications, personalizing the messages further, and allowing for customization of the colours of the app for a more visually pleasing and calming experience for users.
|
## Inspiration
With many Canadians facing mental health issues, we wanted to create a daily journaling app, which uses machine learning to recommend resources for mental well being.
## What it does
After a user records a 10 second video talking about their day, Menda uses emotion detection and facial recognition to recommend resources. Users can see a daily log of their well being, in which Menda curates personalized suggestions over time.
## How we built it
We used HTML/CSS for the frontend, Firebase and Flask for the backend, and OpenCV/NLTK for machine learning.
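On the NLTK side, one plausible shape for the speech-transcript sentiment step looks like this (the project's exact model isn't shown here; VADER is a common NLTK choice, and the thresholds and labels below are illustrative):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

def daily_mood(transcript):
    """Map the compound VADER score of a day's transcript to a coarse mood label."""
    score = sia.polarity_scores(transcript)["compound"]  # in [-1, 1]
    if score <= -0.3:
        return "struggling"   # thresholds/labels are made up for this sketch
    if score >= 0.3:
        return "doing well"
    return "neutral"
```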
## Challenges we ran into
The biggest challenges that we faced were connecting the ML models to Flask and building the entire application around Flask.
## Accomplishments that we're proud of
We are proud of our emotion detection function and sentiment analysis from speech-to-text, as well as our minimalistic UX design.
## What we learned
We all took advantage of opportunities to improve our technical skills. Although we've all participated in hackathons before, we each still picked up new skills. For example, everyone working on the backend had the opportunity to experiment with Flask and Firebase, while those on the frontend were able to enhance their HTML/CSS/JS skills.
## What's next for Menda
First, we want to create a community page, where others can share resources and discuss among peers. We also want to consider what happens as we increase the number of users since we store videos within our database. So, in order to scale, we would have to upgrade our Firebase database to be able to store more data. Lastly, we want to look for partnerships with related mental health organizations as well as apps that we utilize in Menda such as Spotify and Headspace!
|
## Inspiration
(<http://televisedrevolution.com/wp-content/uploads/2015/08/mr_robot.jpg>)
If you watch Mr. Robot, then you know that the main character, Elliot, deals with some pretty serious mental health issues. One of his therapeutic techniques is to write his thoughts in a private journal. Journals are great... they get your feelings out, and act as a point of reference to look back on in the future.
We took the best parts of what makes a diary/journal great, and made it just a little bit better - with Indico. In short, we help track your mental health similar to how FitBit tracks your physical health. By writing journal entries on our app, we automagically parse through the journal entries, record your emotional state at that point in time, and keep an archive of the post to aggregate a clear mental profile.
## What it does
This is a FitBit for your brain. As you record entries about your life in the private journal, the app anonymously sends the data to Indico and parses for personality, emotional state, keywords, and overall sentiment. It requires zero effort on the user's part, and over time, we can generate an accurate picture of your overall mental state.
Each post automatically embeds the strongest emotional state detected in it, so you can easily find and read posts that evoke a certain feeling (joy, sadness, anger, fear, surprise). We also have an analytics dashboard that further analyzes the person's long-term emotional state.
We believe being cognizant of one's own mental health is much harder than, and just as important as, staying on top of one's physical health. A long-term view of their emotional state can help the user detect sudden changes in the baseline, or seek out help & support long before the situation becomes dire.
## How we built it
The backend is built on a simple Express server on top of Node.js. We chose React and Redux for the client, due to its strong unidirectional data flow capabilities, as well as the component based architecture (we're big fans of css-modules). Additionally, the strong suite of redux middlewares such as sagas (for side-effects), ImmutableJS, and reselect, helped us scaffold out a solid, stable application in just one day.
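As a rough illustration of the emotion-tagging described above (the production stack is Node/Express; Python is used here for brevity), `fetch_emotions` below is a stand-in for the Indico call, whose per-emotion scores the app reduces to a single strongest feeling per post. The response shape is assumed from the write-up.

```python
def fetch_emotions(text: str) -> dict[str, float]:
    # Placeholder for the real Indico API call; returns a score per emotion.
    return {"joy": 0.7, "sadness": 0.1, "anger": 0.05,
            "fear": 0.05, "surprise": 0.1}

def tag_post(entry: str) -> dict:
    scores = fetch_emotions(entry)
    strongest = max(scores, key=scores.get)  # emotion with the top score
    # Each post embeds its strongest emotion so entries can be browsed by
    # feeling, and the raw scores are archived for the analytics dashboard.
    return {"text": entry, "emotion": strongest, "scores": scores}
```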
## Challenges we ran into
Functional programming is hard. It doesn't have any of the magic that two-way data-binding frameworks like MeteorJS or AngularJS come with. Of course, we made the decision to use React/Redux aware of this. When you're hacking away, code can become messy, and functional programming at least prevents some common mistakes that often make a hackathon project completely unusable post-hackathon.
Another challenge was the persistence layer for our application. Originally, we wanted to use MongoDB, due to our familiarity with its setup process. However, to speed things up, we decided to use Firebase. In hindsight, it may have caused us more trouble, since none of us had ever used Firebase before. However, learning is always part of the process and we're very glad to have learned even the prototyping basics of Firebase.
## Accomplishments that we're proud of
* Fully persistent data with Firebase
* A REAL, WORKING app (not a mockup, or just the UI build): we had CRUD fully working, as well as the logic for processing the various data charts in analytics.
* A sweet UI with some snazzy animations
* Being able to do all this while having a TON of fun.
## What we learned
* Indico is actually really cool and easy to use (not just trying to win points here). It's not always 100% accurate, but building something like this without Indico would be extremely difficult, and the similar APIs we've tried are nowhere near as easy to integrate.
* React, Redux, Node. A few members of the team learned this expansive stack in just a few days. They're not experts by any means, but they were able to grasp concepts very fast because we never stopped pushing code to GitHub.
## What's next for Reflect: Journal + Indico to track your Mental Health
Our goal is to make the backend algorithms a bit more rigorous, add simple authentication, and launch this app, consumer facing. We think there's a lot of potential in this app, and there's very little competition in this space (actually, none that we could find).
|
partial
|
## Inspiration
Everyone can relate to the scene of staring at messages on your phone and wondering, "Was what I said toxic?", or "Did I seem offensive?". While we originally intended to create an app to help neurodivergent people better understand both others and themselves, we quickly realized that emotional intelligence support is a universally applicable concept.
After some research, we learned that neurodivergent individuals find it most helpful to have plain positive/negative annotations on sentences in a conversation. We also think this format leaves the most room for all users to reflect and interpret based on the context and their experiences. This way, we hope that our app provides both guidance and gentle mentorship for developing the users' social skills. Playing around with Co:here's sentiment classification demo, we immediately saw that it was the perfect tool for implementing our vision.
## What it does
IntelliVerse offers insight into the emotions of whomever you're texting. Users can enter their conversations either manually or by taking a screenshot. Our app automatically extracts the text from the image, allowing fast and easy access. Then, IntelliVerse presents the type of connotation that the messages convey. Currently, it shows either a positive, negative or neutral connotation to the messages. The interface is organized similarly to a texting app, ensuring that the user effortlessly understands the sentiment.
## How we built it
We used a microservice architecture to implement this idea.
The technology stack includes React Native, while users' information is stored in MongoDB and queried using GraphQL. Apollo Server and Apollo Client connect the frontend and the backend.
The sentiment estimates are powered by custom Co:here finetunes, trained on a public chatbot dataset found on Kaggle.
Text extraction from images is done using npm's text-from-image package.
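To make the finetune step concrete, here is a rough sketch of calling a Co:here classification finetune, written against the classic cohere Python SDK; the API key, finetune ID, and exact response handling are assumptions (the app itself calls Co:here from its Node backend).

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def connotation(messages: list[str]) -> list[str]:
    # A finetuned classify model returns one label per input message.
    response = co.classify(
        model="your-finetune-id",  # hypothetical finetune identifier
        inputs=messages,
    )
    return [c.prediction for c in response.classifications]

labels = connotation(["see you tomorrow!", "whatever, forget it."])
# e.g. ["positive", "negative"]
```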
## Challenges we ran into
We were unfamiliar with many of the APIs and dependencies that we used, and it took a long time to understand how to get the different components to work together.
When working with images in the backend, we had to do a lot of parsing to convert between image files and strings.
When training the sentiment model, finding a good dataset to represent everyday conversations was difficult. We tried numerous options and eventually settled with a chatbot dataset.
## Accomplishments that we're proud of
We are very proud that we managed to build all the features that we wanted within the 36-hour time frame, given that many of the technologies that we used were completely new to us.
## What we learned
We learned a lot about working with React Native and how to connect it to a MongoDB backend. When assembling everyone's components together, we solved many problems regarding dependency conflicts and converting between data types/structures.
## What's next for IntelliVerse
In the short term, we would like to expand our app's accessibility by adding more interactive interfaces, such as audio input. We also believe the technology behind IntelliVerse has far-reaching possibilities in mental health, whether by helping users introspect on their own thoughts or by supporting clinical diagnoses.
|
## Inspiration
University keeps students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist for a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations can be recorded if the user chooses, they can track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion!
## What it does
Companion is a full-stack web application that allows users to record their mood and describe their day and how they feel, to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With real-time speech-to-text functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry.
## How we built it
We leveraged technologies such as React.js, Python, Flask, Node.js, Express.js, MongoDB, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot's machine learning models were built and trained using OpenAI.
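A minimal sketch of what the Flask chatbot endpoint might look like, assuming the completions-era OpenAI Python SDK and a stock model; the route, prompt, and parameters are illustrative, not the team's exact code (their models were trained through OpenAI).

```python
import openai
from flask import Flask, request, jsonify

openai.api_key = "YOUR_API_KEY"  # placeholder
app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json["message"]
    completion = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=f"You are a supportive companion.\nUser: {user_message}\nCompanion:",
        max_tokens=150,
        temperature=0.7,
        stop=["User:"],  # keep the bot from role-playing both sides
    )
    return jsonify({"reply": completion.choices[0].text.strip()})
```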
## Challenges we ran into
Some of the challenges we ran into were being able to connect between the front-end, back-end and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the end-points to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it.
## Accomplishments that we're proud of
We're proud of the ideas we've brought to the table, as well the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app.
## What we learned
We learned how to work with different APIs and create various API end-points. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really help with speeding up our coding time when everything is nice and set up with everyone understanding everything.
## What's next for Companion
The next steps for Companion are:
* Ability to book appointments with a live therapist if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee.
* A machine learning model that adapts to what the user has written in their diary that day, gives people sounder advice, and is trained on individual users rather than on one dataset for all users.
## Sample account
If you can't register your own account for some reason, here is a sample one to log into:
Email: [demo@example.com](mailto:demo@example.com)
Password: password
|
## Inspiration
The Mirum challenge got us very interested in the idea of making computers understand human feelings. We applied this idea to call centers, where customer support can't see customers' faces behind phone calls or messages. Analyzing the emotional tone of customers can help support staff understand their needs and solve problems more efficiently. Businesses can immediately see the detailed emotional state of their customers from voice or text messages.
## What it does
The text from customers is colored based on its tone: red stands for anger, white stands for joy.
## How I built it
We built this chatbot on the iOS chat application from the Watson Developer Cloud Swift SDK, and used the IBM Watson Tone Analyzer to examine emotional tones, language tones, and social tones.
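The app itself is Swift, but the same Tone Analyzer request can be sketched with IBM's Python SDK for clarity; the API key is a placeholder and the service URL depends on your instance's region.

```python
from ibm_watson import ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=authenticator)
tone_analyzer.set_service_url(
    "https://api.us-south.tone-analyzer.watson.cloud.ibm.com")  # region URL varies

result = tone_analyzer.tone(
    tone_input={"text": "I have been waiting forty minutes and nobody answers!"},
    content_type="application/json",
).get_result()

# Map the strongest tone to a text colour, e.g. anger -> red, joy -> white.
for tone in result["document_tone"]["tones"]:
    print(tone["tone_id"], tone["score"])
```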
## Challenges I ran into
At the beginning, we had trouble running the app on iPhone. We spent a lot of time on debugging and testing. We also spent a lot of time on designing the graph of the analysis results.
## Accomplishments that I'm proud of
We are proud to show that our chat bot supports tone analysis and basic chatting.
## What I learned
We have learned and explored a few IBM Watson APIs. We also learned a lot while troubleshooting and fixing bugs.
## What's next for **Chattitude**
Our future plan for Chattitude is to color the text sentence by sentence and make the interface more engaging. For the tone analysis results, we want to present them as a real-time animated histogram.
|
partial
|
Problem: Have you ever been at a party and questioned who chose the music? Or debated who would be on aux?
Shuffle is an app designed to sync multiple users' most played songs to create a combined playlist that everyone loves. The app requires you to have a Spotify account, so when you download the Shuffle app, your profile is loaded into the app. Shuffle then allows you to choose the people you want to create a playlist with, from 2 to 5 or more friends, and uses an algorithm to create a tailored playlist containing music that everyone will love. It does this using underlying user data from Spotify that captures each user's favorite music and listening trends in the short, medium, and long term.
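One way the blending just described could work, sketched with the spotipy client: pull each member's top tracks across Spotify's three time ranges, then rank tracks by how many members they overlap. The scoring scheme and playlist size are assumptions.

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth
from collections import Counter

def top_track_ids(sp: spotipy.Spotify) -> set[str]:
    ids = set()
    for time_range in ("short_term", "medium_term", "long_term"):
        results = sp.current_user_top_tracks(limit=50, time_range=time_range)
        ids.update(item["id"] for item in results["items"])
    return ids

def blend(clients: list[spotipy.Spotify], size: int = 30) -> list[str]:
    counts = Counter()
    for sp in clients:
        counts.update(top_track_ids(sp))  # +1 per member who loves the track
    return [track_id for track_id, _ in counts.most_common(size)]

# Each member authorizes separately with the "user-top-read" scope:
# sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-top-read"))
```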
The app also gives users an easy way to add or remove songs onto the playlist after it has been created so the perfect songs are always playing.
Shuffle ensures that you and your friends will always have music that everyone can listen to together.
|
## Inspiration
As students who listen to music to help with our productivity, we wanted to create not only a music sharing application but also a website that lets others discover new music through where they are located. We were inspired by Pokémon Go but wanted a similar implementation with music, for any user to listen to. Anywhere. Anytime.
## What it does
Meet Your Beat implements a live map where users are able to drop "beats" (a.k.a Spotify beacons). These beacons store a song on the map, allowing other users to click on the beacon and listen to the song. Using location data, users will be able to see other beacons posted around them that were created by others and have the ability to "tune into" the beacon by listening to the song stationed there. Multiple users can listen to the same beacon to simulate a "silent disco" as well.
## How I built it
We first customized the Google Maps API to be hosted on our website, as well as fetch the Spotify data for a beacon when a user places their beat. We then designed the website and began implementing the SQL database to hold the user data.
## Challenges I ran into
* Having limited experience with JavaScript and API usage
* Hosting our domain through Google Cloud, which we were unaccustomed to
## Accomplishments that I'm proud of
Our team is very proud of our ability to merge the various elements of our website, such as the SQL database hosting the Spotify data for other users to access. We are also proud that we learned so many new skills and languages to implement the APIs and the database.
## What I learned
We learned a variety of new skills and languages to help us gather the data to implement the website. Despite numerous challenges, all of us took away something new, such as web development, database querying, and API implementation.
## What's next for Meet Your Beat
* static beacons to have permanent stations at more notable landmarks. These static beacons could have songs with the highest ratings.
* share beacons with friends
* AR implementation
* mobile app implementation
|
## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more so than ever, we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.
## What it does
Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps so that you can follow along one line at a time.
## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server.
On the backend, we used the Python libraries scikit-image and Matplotlib to create visualizations of the lines that make up the image. We broke the process into frames, adjusting the features of the image to progressively create a more detailed sketch.
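A minimal sketch of that frame-by-frame idea, using scikit-image and Matplotlib as described above: sweeping the Canny edge detector's sigma from coarse to fine reveals more lines at each of the 15 steps. The sweep range is an illustrative choice, not the project's tuned values.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, feature

image = color.rgb2gray(io.imread("photo.jpg"))  # any uploaded RGB photo

# Higher sigma -> smoother image -> fewer, bolder edges; step down for detail.
sigmas = np.linspace(6.0, 1.0, 15)
for step, sigma in enumerate(sigmas, start=1):
    edges = feature.canny(image, sigma=sigma)
    plt.imsave(f"step_{step:02d}.png", edges, cmap="gray_r")
```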
## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon fixed the import errors and were able to utilize it effectively.
## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we hadn't used before, and grinding through the project until the end! 😎
## What we learned
We learned a lot about personal working styles, the integration of different components across the frontend and backend, as well as some new possible projects we would want to try out in the future!
## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience!
|
partial
|
## Hack the 6ix 2021: AquaTrack
An app designed for people who want to be more water conscious.
## Inspiration
For Hack the 6ix 2021, as environmental issues become ever more present in our society, we are all growing more conscious of our own effects on the environment. We must all have wondered at least once whether we are creating a positive impact on the environment or damaging it. As such, we were inspired to create an application that helps us better track and understand our effect on the environment, so we can reflect on our habits and change them.
## What it does
This application is designed specifically for people who want to be more conscious with their household water use. This app will allow the users to gain insight to their water usage and also compare their progress throughout the week. The app has three main features:
* Tracking amount of water used: To track the amount of water used, users can choose to either add data based on a timer or a numerical value. The former is useful for running water, while the latter is more useful for containers that store water. The user is then asked to input the rate of flow or the volume of the container they used. The program then calculates the total amount of water used for that activity and records it (see the sketch after this list).
* Displaying the amount of water used today: Our program will also continuously update the total amount of water used that day and display it on the app.
* Compiling the daily data into a pie chart: The data for each day is then compiled and displayed in a chart so users can view their progress. We have made a weekly chart, monthly chart, yearly chart, and all-time chart.
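The calculation in the first bullet above, sketched in Python for clarity (the app itself is Android); the flow rates and volumes here are illustrative values.

```python
def water_from_timer(flow_rate_l_per_min: float, minutes: float) -> float:
    """Running water: volume = flow rate x duration."""
    return flow_rate_l_per_min * minutes

def water_from_container(volume_l: float, count: int = 1) -> float:
    """Stored water: the container's volume, times how many were used."""
    return volume_l * count

daily_total = water_from_timer(9.5, 8) + water_from_container(1.5, 3)
print(f"{daily_total:.1f} L used today")  # 76.0 + 4.5 = 80.5 L
```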
## How we built it
We used Android Studio to code the entire program and decided on the JSON file format to store our data. We believe the JSON file format is enough to store our data since there is no need for any relationships between records. However, if at some point we further develop this application, we will migrate the data into an SQLite database.
## Challenges we ran into
As a group full of first-time hackers, the largest challenge we faced was getting the project off the ground, especially figuring out how to use different tools and technologies and merge them into a single mobile app. We also encountered questions users might raise, such as how accurate the app's measurements can be; given the uncertainties and human error that might occur in the measurement process, this is something we could further develop. On the technical side, it is quite difficult to manage a system that logs errors without impeding the user experience.
## Accomplishments that we're proud of
We are proud that we were able to come up with a feasible solution to the crucial issue of water scarcity within this hackathon. Moreover, we completed our first hackathon project through extensive collaboration, meticulous research, and perseverance throughout the project timeframe. We learned what it is like to work in an environment where tasks are carried out simultaneously to achieve a goal. Always being learners and always asking what we can do better is something our team is proud of.
## What we learned
We learned about the practical use of programming in the real world. Coding for an actual environmental issue helped us better understand the struggles most programmers face in producing the software and products we use on a daily basis. Furthermore, we learned how GitHub is used in a team, and how efficient it makes collaboration.
## What's next for AquaTrack
Software:
* Account creation feature: Users will be able to create accounts for themselves. They will also get the option to calibrate their own water discharge from taps and showers. Otherwise, they can decide to opt out from calibrating, but we will warn them that the results will not be as accurate.
* Check out friends’ activities: Users can choose to make their account private or public and view friends’ activities.
* To do so, the application might migrate to an online database.
Hardware:
* Waterproof Fitbit-like watch that is simple to use.
|
## Inspiration
Many people, including me, spend way too much time and energy stressing about what to wear the next day. I looked everywhere for a solution but was surprised to find none. So, I decided to build my own.
## What it does
Users can upload pictures they take of their articles of clothing. They can then create outfits using combinations of these images, and plan their outfits for each day of the week. The dashboard then displays the current day's outfit.
## How we built it
I used web technologies for the frontend, and Google Firebase for the database. The uploaded images are drawn to an HTML canvas and then converted into a final image. The database consists of three collections: one that stores the day with the corresponding outfit, one that stores an aggregate of all of the individual images uploaded by the user, and one that stores the final outfit the user created.
## Challenges we ran into
Creating the final outfit image from the user's uploaded images proved to be difficult. I tried many APIs to handle the combining of the images, but none of them could efficiently stack the images vertically. I also dealt with a lot of bugs and issues with Firebase, as it was my first time using it in a project.
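The app does the stacking on an HTML canvas, but the logic itself is simple; here is the equivalent sketch in Python with Pillow, purely for illustration.

```python
from PIL import Image

def stack_vertically(paths: list[str]) -> Image.Image:
    images = [Image.open(p).convert("RGBA") for p in paths]
    width = max(img.width for img in images)
    height = sum(img.height for img in images)
    outfit = Image.new("RGBA", (width, height), (255, 255, 255, 0))
    y = 0
    for img in images:
        outfit.paste(img, ((width - img.width) // 2, y))  # centre each item
        y += img.height
    return outfit

# stack_vertically(["shirt.png", "pants.png", "shoes.png"]).save("outfit.png")
```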
## Accomplishments that we're proud of
I am proud of creating a tool that is genuinely useful for me. I know that I will be using this project to help me in my day-to-day life, well after this competition.
## What we learned
I upgraded my skills in front-end and back-end development. I also learned a lot about DOM manipulation and deploying apps with Firebase.
## What's next for Smart Fit
With the addition of a subscription service that provides additional features such as mobile app integration and more advanced planning features, I believe SmartFit will be a viable business.
|
## Inspiration
Fashion has always seemed a world away from tech. We want to bridge this gap with "StyleList", which understands your fashion within a few swipes and makes personalized suggestions for your daily outfits. When you and I visit the Nordstrom website, we see the exact same product page, yet we could have completely different styles and preferences. With machine intelligence, StyleList makes it convenient for people to figure out what they want to wear (you simply swipe!) and it also allows people to discover trends that they favor!
## What it does
With StyleList, you don’t have to scroll through hundreds of images and filters and search on so many different websites to compare the clothes. Rather, you can enjoy a personalized shopping experience with a simple movement from your fingertip (a swipe!). StyleList shows you a few clothing items at a time. Like it? Swipe left. No? Swipe right! StyleList will learn your style and show you similar clothes to the ones you favored so you won't need to waste your time filtering clothes. If you find something you love and want to own, just click “Buy” and you’ll have access to the purchase page.
## How I built it
We use a web scraper to get clothing item information from Nordstrom.ca and then feed this data into our backend. Our backend is a machine learning model trained on a bank of keywords: after a swipe, it picks the next items based on their cosine similarity to the items the user has liked. The interaction with the clothing items and the swipes happens on our React frontend.
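A sketch of that ranking step: represent each item by its keyword vector, average the liked items into a taste profile, and surface the most similar unseen items. The tiny catalog and TF-IDF vectorization here are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "A123": "black slim denim jeans casual",
    "B456": "floral summer midi dress",
    "C789": "dark wash straight leg jeans",
}
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(catalog.values()))
ids = list(catalog)

def next_items(liked_ids: list[str], k: int = 2) -> list[str]:
    liked_rows = [ids.index(i) for i in liked_ids]
    profile = matrix[liked_rows].mean(axis=0)          # the user's taste vector
    scores = cosine_similarity(np.asarray(profile), matrix).ravel()
    ranked = np.argsort(scores)[::-1]                  # most similar first
    return [ids[r] for r in ranked if ids[r] not in liked_ids][:k]

print(next_items(["A123"]))  # a jeans lover sees more jeans first, e.g. ['C789', ...]
```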
## Accomplishments that I'm proud of
Good teamwork! Connecting the backend, frontend and database took us more time than we expected, but now we have a full-stack project completed (starting from scratch 36 hours ago!).
## What's next for StyleList
In the next steps, we want to help people who wonder "what should I wear today?" in the morning with a simple one-click page, where they fill in the weather and their plan for the day, and StyleList provides a suggested outfit from head to toe!
|
losing
|
## Inspiration
The whiteboard or chalkboard is an essential tool in instructional settings - to learn better, students need a way to directly transport code from a non-text medium to a more workable environment.
## What it does
Enables someone to take a picture of handwritten or printed code and converts it directly to text in your favorite editor on your computer.
## How we built it
On the front end, we built an app using Ionic/Cordova so the user could take a picture of their code. Behind the scenes, using JavaScript, our software harnesses the power of the Google Cloud Vision API to perform intelligent character recognition (ICR) on handwritten words. Following that, we apply our own formatting algorithms to prettify the code. Finally, our server sends the formatted code to the desired computer, which opens it with the appropriate file extension in your favorite IDE. In addition, the client handles all the scripting for minimization and file OS operations.
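The write-up doesn't publish the spacing algorithm itself, but its core idea can be sketched: bucket each line's left x-offset (taken from the Vision API's bounding boxes) into discrete indentation levels while tolerating shaky handwriting. The tolerance value below is an assumed parameter, and this is one way to do it rather than the team's exact recursion.

```python
def indentation_levels(line_offsets: list[float], tolerance: float = 25.0) -> list[int]:
    """Map each line's left x-offset (pixels) to an indent level (0, 1, 2, ...)."""
    # Pass 1: cluster distinct offsets into level anchors.
    anchors: list[float] = []
    for x in sorted(line_offsets):
        if not anchors or x - anchors[-1] > tolerance:
            anchors.append(x)
    # Pass 2: map every line to the nearest anchor's index.
    return [min(range(len(anchors)), key=lambda i: abs(x - anchors[i]))
            for x in line_offsets]

# Offsets for five handwritten lines of a nested block:
print(indentation_levels([40, 102, 160, 98, 43]))  # -> [0, 1, 2, 1, 0]
```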
## Challenges we ran into
The vision API is trained on text with correct grammar and punctuation. This makes recognition of code quite difficult, especially indentation and camel case. We were able to overcome this issue with some clever algorithms. Also, despite a general lack of JavaScript knowledge, we were able to make good use of documentation to solve our issues.
## Accomplishments that we're proud of
* A beautiful spacing algorithm that recursively categorizes lines into indentation levels.
* Getting the app to talk to the main server, which in turn talks to the target computer.
* Scripting the client to display the final result in a matter of seconds.
## What we learned
* How to integrate and use the Google Cloud Vision API.
* How to build and communicate across servers in JavaScript.
* How to interact with native functions of a phone.
## What's next for Codify
It's feasible to increase accuracy by using the Levenshtein distance between recognized and expected words. In addition, we can improve our algorithms to work better with code. Finally, we can add image preprocessing (heightening image contrast, rotating accordingly) to make pictures more readable to the Vision API.
|
## Inspiration
**Introducing Ghostwriter: Your silent partner in progress.** Ever been in a class where resources are so hard to come by, you find yourself practically living at office hours? As teaching assistants on **increasingly short-handed course staffs**, it can be **difficult to keep up with student demands while making long-lasting improvements** to your favorite courses.
Imagine effortlessly improving your course materials as you interact with students during office hours. **Ghostwriter listens intelligently to these conversations**, capturing valuable insights and automatically updating your notes and class documentation. No more tedious post-session revisions or forgotten improvement ideas. Instead, you can really **focus on helping your students in the moment**.
Ghostwriter is your silent partner in educational excellence, turning every interaction into an opportunity for long-term improvement. It's the invisible presence that delivers visible results, making continuous refinement effortless and impactful. With Ghostwriter, you're not just tutoring or bug-bashing - **you're evolving your content with every conversation**.
## What it does
Ghostwriter hosts your class resources, and supports searching across them in many ways (by metadata, semantically by content). It allows adding, deleting, and rendering markdown notes. However, Ghostwriter's core feature is in its recording capabilities.
The record button starts a writing session. As you speak, Ghostwriter transcribes and digests your speech, decides whether it's worth adding to your notes, and if so, navigates to the appropriate document and inserts the new content at line-by-line granularity, integrating seamlessly with your current formatting.
## How we built it
We used Reflex to build the app full-stack in Python and to support the various note-management features, including adding, deleting, selecting, and rendering notes. As notes are added to the application database, they are summarized and then embedded by Gemini 1.5 Flash-8B before being added to ChromaDB under a shared key. Our semantic search is likewise powered by Gemini embeddings and ChromaDB.
The recording feature is powered by Deepgram's threaded live-audio transcription API. The text is processed live by Gemini, and chunks are sent to ChromaDB for queries. Distance thresholds decide whether to discard a chunk, add it to an existing note, or create a new note. In the latter two cases, llama3-70b-8192 is run through Groq to write into our existing documents. It does this through RAG over our docs, plus some prompt engineering. To make insertion granular, we add unique tokens identifying candidate insertion points throughout the original text. We then structurally generate the desired markdown along with the desired point of insertion, and render the changes live to the user.
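A condensed sketch of that distance-threshold routing, using the chromadb client; this is one plausible reading of the three-way split, and both cutoff values are invented for illustration rather than the project's tuned numbers.

```python
import chromadb

client = chromadb.Client()
notes = client.get_or_create_collection("course_notes")

SKIP_THRESHOLD = 1.2     # assumed: too far from any course content
APPEND_THRESHOLD = 0.45  # assumed: close enough to extend an existing note

def route_chunk(chunk: str) -> str:
    """Decide what to do with one transcribed chunk of office-hours speech."""
    if notes.count() == 0:
        notes.add(documents=[chunk], ids=["note-0"])
        return "created"
    result = notes.query(query_texts=[chunk], n_results=1)
    distance = result["distances"][0][0]
    if distance > SKIP_THRESHOLD:
        return "skipped"  # chatter, not note-worthy material
    if distance <= APPEND_THRESHOLD:
        # Hand the chunk plus the matched note to the LLM writer (Groq/llama3).
        return f"append to {result['ids'][0][0]}"
    notes.add(documents=[chunk], ids=[f"note-{notes.count()}"])
    return "created"
```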
## Challenges we ran into
Using Deepgram and live generation required a lot of tasks to run concurrently without blocking UI interactivity. We had some trouble reconciling the requirements Deepgram and Reflex each posed on how these were handled, which required us to redesign the backend a few times.
Generation was also rather difficult, as text would come out with irrelevant vestiges and explanations. It took a lot of trial and error through prompting and other tweaks to the generation calls and structure to get our required outputs.
## Accomplishments that we're proud of
* Our whole live note-generation pipeline!
* From the audio transcription process to the granular retrieval-augmented structured generation process.
* Spinning up a full-stack application using Reflex (especially the frontend, as two backend engineers)
* We were also able to set up a few tools to push dummy data into various points of our process, which made debugging much, much easier.
## What's next for GhostWriter
Ghostwriter can work on the student-side as well, allowing a voice-interface to improving your own class notes, perhaps as a companion during lecture. We find Ghostwriter's note identification and improvement process very useful ourselves.
On the teaching end, we hope GhostWriter will continue to grow into a well-rounded platform for educators on all ends. We envision that office hour questions and engagement going through our platform can be aggregated to improve course planning to better fit students' needs.
Ghostwriter's potential doesn't stop at education. In the software world, where companies like AWS and Databricks struggle with complex documentation and enormous solutions teams, Ghostwriter shines. It transforms customer support calls into documentation gold, organizing and structuring information seamlessly. This means fewer repetitive calls and more self-sufficient users!
|
## 💡Inspiration
Sometimes it's hard to sit in front of a screen all day and write code. It definitely takes a **toll on your health**. So why not go *old-school* and write the code with pen and paper, so that it doesn't affect your health?
This can also be helpful for students who are getting started with coding and *lack resources* (like laptops), so they have to use their mobile phones to write and compile their code, **which takes a lot of time and effort**.
## 💻What it does
**Codifier** is a web-based code converter which can translate your *handwritten code* into *actual typed code* using **OCR** in just a few seconds. Don't worry about typing all the code, just scan and upload and *Et-voilà!* It also compiles the code in various languages like **C, C++, Java, and Python**.
## 🔷Steps to Use
1. Click on get Started.
2. Select the option to either click a photo or choose from gallery.
3. Upload your Scanned code and click on **Convert**.
* This process will take you to the compiler.
4. Select your programming language.
5. After getting your code on the screen, you can edit the code if you want.
6. Click the Compile button to compile and run it into the output screen.
## 🔨How we built it
* React Js: For Frontend
* Tailwind CSS: For styling
* Figma: For Designing. (You can see it [here](https://www.figma.com/proto/geIJeadw8u9XAQ4BrdAMp2/Pictocode?node-id=2%3A2&starting-point-node-id=2%3A2))
* Node Js: For Backend Server
* APIs: Google Cloud Vision and the [JDoodle API](https://docs.jdoodle.com/compiler-api/compiler-api)
## 💻AI Code Recognition Challenges
Codifier uses the Google Cloud Vision API for recognizing handwritten code and converting it into typed code. It also uses the JDoodle API to compile and run the code in different programming languages like **C, C++, Java, and Python**.
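The conversion step can be sketched with Google's Cloud Vision Python client (the site itself calls the API from Node); `document_text_detection` is the endpoint suited to dense handwritten text.

```python
from google.cloud import vision

def handwriting_to_text(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

code = handwriting_to_text("scanned_code.jpg")
# The recovered text is then sent to JDoodle with a language id for execution.
```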
## 🧠Use of Google Cloud Vision API
* Handwritten code recognition.
* Conversion into typed code.
## 🏅Challenges We ran into
* Communication gap due to Online Hackathons.
* The biggest challenge we ran into was configuring the API to recognize and convert the code.
* Compiling and running the code in different languages.
## 📖What we learnt
1. Team Building
2. Working with:
   * React Js
   * Figma
   * Multer
   * Git and GitHub
   * APIs
   * PWA (Progressive Web Applications)
## 🚀What's Next for Codifier
* Adding a built-in camera feature for recognizing code.
* Authenticating users and storing their code in the database.
* Visualizing the code.
* Adding an input option for the user during code compilation.
|
winning
|
## Inspiration
Tax Simulator 2019 takes inspiration from a variety of different games, such as the multitude of simulator games that have gained popularity in recent years, the board game Life, and the video game Pokemon.
## What it does
Tax Simulator is a video game designed to introduce students to taxes and benefits.
## How we built it
Tax Simulator was built in Unity using C#.
## Challenges we ran into
The biggest challenge that we had to overcome was time. Creating the tax calculation system, designing and building the game's levels, implementing the narrative text elements, and debugging every single area of the program were all tedious and demanding tasks, and as a result, there are several features of the game that have not yet been fully implemented, such as the home-purchasing system.
Learning how to use the Unity game engine also proved to be a challenge as not all of us had past experience with the software, so picking up the skills to implement our ideas into our creation and develop a fleshed-out product was an essential yet difficult task.
## Accomplishments that we're proud of
Although simple, Tax Simulator incorporates concepts such as common tax deductions and two savings vehicles in a fun and interactive game. The game makes use of a charming visual aesthetic, simple mechanics, and an engaging narrative that makes it fun to play through, and we're very proud of our ability to portray learning and education in an appealing way.
## What we learned
We learned that although it is tempting to try and incorporate as many features as possible in our project, a simple game that is easy to understand and fun to play will keep players engaged better than a game with many complex features and options that ultimately contribute to confusion and clutter.
## What's next for Tax Simulator 2019
Although it is a great start for learning about taxes, Tax Simulator could benefit from incorporating more life events, such as purchasing a house with the First-Time Home Buyer Incentive, having kids, and saving for college with RESPs. The game could also suggest ways for players to improve their gameplay based on the decisions they made regarding their taxes.
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but lack the resources to get started. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!). The biggest reward from this hackathon is the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
## Inspiration
We were inspired by Duolingo, which makes learning languages fun! Physics is often a difficult subject for many students, so we wanted to revisit concepts from the physics curriculum in a fun way using simulation 🤓
## What it does
Our web app has a dashboard of user-interactive simulations that act as physics lessons. We wanted to make online physics simulations more interesting, so we implemented a working fruit-themed projectile motion game that allows you to change the angle and speed of a tomato which splatters when it falls.
## How we built it
We built Galileo using Matter.js (a JavaScript physics library), HTML, CSS, and frameworks such as Astro and Tailwind CSS. Figma was used for wireframing, planning, and prototyping. We used an Astro template to get our landing page started.
## Challenges we ran into
Implementing Matter.js and figuring out how to make physics interesting to learn. We also struggled with combining logic and the Astro templates.
## Accomplishments that we're proud of
Our newest member was a first-year engineering student and a hackathon beginner 🤫 🧏♂️ 🧏♂️ We also all tried something outside of our comfort zones and learned a lot from each other!
## What we learned
SO proud of our tomatoes 🍅 ❤️
Using JavaScript libraries, and applying physics rules and logic in code
## What's next for Galileo (Dynasim)
Potentially creating more games and trying more physics libraries. Adding a review feature.
|
partial
|
## Inspiration
In a grocery store, you are presented with thousands of options for food. To adequately research each of these to make the most environmentally-friendly decision would be a full-time job. Nobody has that kind of time.
## What it does
Scan the barcode (CV food scanning is planned) of a product you find in a store to receive a list of ingredients it contains that have a high carbon footprint.
## How we built it
We built the front-end in ReactJS, with Verbwire IPFS file storage holding the backend information. The food information comes from the OpenFoodFacts API, and the barcode scanning is done by QuaggaJS. Barcode scans come in from QuaggaJS and are fed to OpenFoodFacts; the returned ingredient information is then searched for keywords that overlap with the list of high-impact foods retrieved from Verbwire IPFS storage.
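A sketch of that lookup-and-match flow, done in Python for clarity (the site itself is ReactJS); the high-impact list here is a hard-coded stand-in for the one fetched from Verbwire IPFS storage.

```python
import requests

HIGH_IMPACT = {"beef", "palm oil", "cheese", "chocolate", "coffee"}  # example entries

def flag_ingredients(barcode: str) -> list[str]:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    data = requests.get(url, timeout=10).json()
    ingredients = data.get("product", {}).get("ingredients_text", "").lower()
    return sorted(term for term in HIGH_IMPACT if term in ingredients)

# Prints any flagged high-impact ingredients for the scanned product.
print(flag_ingredients("3017620422003"))
```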
## Challenges we ran into
While building this project, we started with the idea of searching by manufacturer's name, to find the carbon footprint of specific companies. This proved impossible to generalize to many different products, so we found another approach: searching by ingredients is much more general and allows the program to deliver information on products we may never have heard of.
## Accomplishments that we're proud of
We're very proud that this application is decentralized. There is no backend server required for the system to run. If someone has a copy of the website, they will be able to access our database of high-impact ingredients, as it is stored in decentralized IPFS storage.
## What we learned
We learned a lot about linking multiple APIs together, with all the different requirements of each, to build something new. We also built a program with TensorFlow computer vision to recognize foods without barcodes, but were ultimately unable to integrate this feature into the website before the deadline. Barcodes already exist on most foods and are a reliable enough method for an MVP.
## What's next for Pi
Given more time (just a few more hours!), our team would have been able to integrate our TensorFlow Python application into our web server to recognize foods without barcode stickers. We would also look into hosting more of our application on decentralized storage to reduce the reliance on a server to present the web client.
|
## Inspiration
Food is a basic human need. As someone who often finds themselves wandering the aisles of Target, I know firsthand how easy it is to get lost among the countless products and displays. The experience can quickly become overwhelming, leading to forgotten items and a less-than-efficient shopping trip. This project was born from the desire to transform that chaos into a seamless shopping experience. We aim to create a tool that not only helps users stay organized with their grocery lists but also guides them through the store in a way that makes shopping enjoyable and stress-free.
## What it does
**TAShopping** is a smart grocery list app that records your grocery list in an intuitive user interface and generates a personalized route in **(almost)** any Target location across the United States. Users can easily add items to their lists, and the app will optimize their shopping journey by mapping out the most efficient path through the store.
## How we built it
* **Data Aggregation:** We utilized `Selenium` for web scraping, gathering product information and store layouts from Target's website.
* **Object Storage:** `Amazon S3` was used for storing images and other static files related to the products.
* **User Data Storage:** User preferences and grocery lists are securely stored using `Google Firebase`.
* **Backend Compute:** The backend is powered by `AWS Lambda`, allowing for serverless computing that scales with demand.
* **Data Categorization:** User items are classified with `Google Gemini` (see the sketch after this list)
* **API:** `AWS API Endpoint` provides a reliable way to interact with the backend services and handle requests from the front end.
* **Webapp:** The web application is developed using `Reflex`, providing a responsive and modern interface for users.
* **iPhone App:** The iPhone application is built with `Swift`, ensuring a seamless experience for iOS users.
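A sketch of the Gemini categorization step referenced above, using the google-generativeai client; the aisle taxonomy, prompt, model choice, and fallback are assumptions for illustration.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model

AISLES = ["produce", "dairy", "bakery", "frozen", "pantry", "household"]

def categorize(item: str) -> str:
    prompt = (
        f"Classify the grocery item '{item}' into exactly one of these "
        f"categories: {', '.join(AISLES)}. Answer with the category only."
    )
    answer = model.generate_content(prompt).text.strip().lower()
    return answer if answer in AISLES else "pantry"  # fall back on a default

# Categorized items are then ordered along the store's scraped aisle layout.
print(categorize("greek yogurt"))  # -> "dairy"
```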
## Challenges we ran into
* **Data Aggregation:** Encountered challenges with the rigidity of `Selenium` for scraping dynamic content and navigating web page structures.
* **Object Storage:** N/A (No significant issues reported)
* **User Data Storage:** N/A (No significant issues reported)
* **Backend Compute:** Faced long compute times; resolved this by breaking the Lambda function into smaller, more manageable pieces for quicker processing.
* **Backend Compute:** Dockerized various builds to ensure compatibility with the AWS Linux environment and streamline deployment.
* **API:** Managed the complexities of dealing with and securing credentials to ensure safe API access.
* **Webapp:** Struggled with a lack of documentation for `Reflex`, along with complicated Python dependencies that slowed development.
* **iPhone App:** N/A (No significant issues reported)
## Accomplishments that we're proud of
* Successfully delivered a finished product with a relatively good user experience that has received positive feedback.
* Achieved support for hundreds of Target stores across the United States, enabling a wide range of users to benefit from the app.
## What we learned
> We learned a lot about:
>
> * **Gemini:** Gained insights into effective data aggregation and user interface design.
> * **AWS:** Improved our understanding of cloud computing and serverless architecture with AWS Lambda.
> * **Docker:** Mastered the process of containerization for development and deployment, ensuring consistency across environments.
> * **Reflex:** Overcame challenges related to the framework, gaining hands-on experience with Python web development.
> * **Firebase:** Understood user authentication and real-time database capabilities through Google Firebase.
> * **User Experience (UX) Design:** Emphasized the importance of intuitive navigation and clear presentation of information in app design.
> * **Version Control:** Enhanced our collaboration skills and code management practices using Git.
## What's next for TAShopping
> There are many exciting features on the horizon, including:
>
> * **Google SSO for web app user data:** Implementing Single Sign-On functionality to simplify user authentication.
> * **Better UX for grocery list manipulation:** Improving the user interface for adding, removing, and organizing items on grocery lists.
> * **More stores:** Expanding support to additional retailers, including Walmart and Home Depot, to broaden our user base and shopping capabilities.
|
## Inspiration
You're at a restaurant and you want to quickly split the bill. It's frustrating to have everyone pull out their cards and cash to pay, or even simply to work out the amounts each person owes. We aim to simplify that with a mobile app.
## What it does
Facture Fracture leverages the power of OCR through the Microsoft Computer Vision API to process an image of your bill. It lets people join in on your bill, decide how to pay, and additionally uses Interac to send payment requests.
## How we built it
The app itself is built with React Native. It communicates with our Flask backend built with Python on a Microsoft Azure WebApp, which itself communicates with the Microsoft Computer Vision API.
## Challenges we ran into
Handling communications between the app and the backend was hard, as we had to understand how HTTP requests are sent and received, and how to make sure the file sent by the app (in this case the image) was properly handled by the server.
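A minimal sketch of the server-side half of that challenge: a Flask route that accepts the multipart image upload from the app before forwarding it to the Computer Vision API. The route and field names are assumptions.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload_bill():
    if "image" not in request.files:  # the app sends a multipart form
        return jsonify({"error": "no image attached"}), 400
    image_bytes = request.files["image"].read()
    # image_bytes would then be posted to the Microsoft Computer Vision
    # OCR endpoint and the recognized line items returned to the app.
    return jsonify({"received_bytes": len(image_bytes)})
```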
We also ran into issues developing the app, being new to mobile app development.
## Accomplishments that we're proud of
* Being able to upload a picture from our phones to the cloud (Microsoft Azure)
* Being able to analyze a picture of a bill
* Being able to communicate between our phones, the backend, and Microsoft services
## What we learned
We learned to look at multiple tutorials before settling on a solution, since the first answer isn't always applicable to our problem.
We also learned to seek help when stuck, because even if another person doesn't have the answer to our problem, they can provide insight on how to solve the issue.
We also learned more about interacting with different services using HTTP requests.
## What's next for Facture Fracture
We truly believe this app is useful, as the idea came from our own frustrating experiences eating out in groups. It also makes reimbursements and cashflow much easier, since sending an Interac request tells the participants exactly how much they owe, and enables them to quickly repay the host!
|
partial
|
## Inspiration
We all like remote desktop services, but what if you want to do remote desktop in the terminal? Well now you can!
## What it does
Allows you to access a remote computer by printing a graphical representation of the remote desktop to an ANSI-compliant terminal. Also accepts keyboard and mouse input.
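The project is written in Kotlin, but the kind of ANSI escape sequences involved can be illustrated in a few lines of Python: position the cursor, then paint a "pixel" as a coloured space character.

```python
import sys

def draw_pixel(row: int, col: int, r: int, g: int, b: int) -> None:
    # CSI row;colH moves the cursor; CSI 48;2;r;g;bm sets a 24-bit background.
    sys.stdout.write(f"\x1b[{row};{col}H\x1b[48;2;{r};{g};{b}m \x1b[0m")

sys.stdout.write("\x1b[2J")      # clear the screen
draw_pixel(5, 10, 255, 0, 0)     # a red cell at row 5, column 10
sys.stdout.write("\x1b[10;1H")   # park the cursor below the drawing
sys.stdout.flush()
```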
## How I built it
Kotlin and reading lots of very old documentation about shells.
## Challenges I ran into
Parsing and outputting ANSI codes, and converting keyboard input into keystrokes (much less straightforward than it seems). Also, getting the remote server to have a window system.
## Accomplishments that I'm proud of
See Challenges section
## What I learned
Shells are extremely particular about input and output.
|
## Inspiration
We wanted to make something that's functional and can find its place in real life. Initially we decided to use Arduino to make an encrypted texting device, but eventually we found that it's fun to make a toy - something we want to play with as well!
## What it does
We made a remote-controlled car with a webcam attached. It is able to launch the webcam capture application on startup and send the video stream over either the local network or the internet to a compatible browser (tested with Firefox). It can also be controlled remotely through a browser. While the top speed isn't very fast, it's definitely good for watching your house when you're away!
## How we built it
We wanted to challenge ourselves, and what better way to do it than using hardware and learning software that we've never used before! Most of our team has prior Arduino experience, but we opted for a Raspberry Pi, believing it would offer more flexibility and power. We started with a rough outline of the chassis and the components to include, then went about programming the magic that makes it all happen. We looked into different control ideas, such as streaming to an Android phone or using a web server, and in the end decided that making our robot available on the web is much more versatile. Then it was many hours spent on troubleshooting and fine-tuning, wrapped up with this morning's final assembly.
## Challenges we ran into
We had a really difficult catch-22 problem that seemed simple at first: we required two separate terminal applications to be launched at startup, but referencing the Python files in the startup file meant we could only run one application at a time, and the two programs could not run in the same terminal. We troubleshot for two hours and in the end created a chain of shell scripts that call each other in order to launch the two separate terminal applications.
## Accomplishments that we're proud of
We're definitely proud of the fact that we have (mostly) finished what we set out to do! Integrating software and hardware like this is fun and rewarding, and we all jumped for joy when the robot first moved! We're also very proud to have solved the one-application launch issue above, and are very glad to have taken part in this Hackathon!
## What we learned
We learned a lot: all of us started as Pi beginners, but we gained a much better understanding of the board by using it for 24 hours. We also learned to integrate multiple programming languages together, something we had never done before. It's definitely extremely useful and rewarding!
## What's next for The Roaming Watchman
Well... because all of the parts were sponsored, we have to let the Watchman go. However, the next time we're at MakeUofT, we are looking forward to making something even better!
|
## Inspiration
Since the advent of cloud-based computing, personal computers have become less and less powerful, to the point where they are little more than web viewers. While this has lowered costs, so that more people have access to computers, people have less freedom to run the programs they want and are limited to applications that large companies, usually very disconnected from their users, decide they can run.
This is where we come in.
## What it does
Our project allows people to connect to a wifi network, but instead of getting access to just the internet, they also get access to a portal where they can run code on powerful computers.
For example, a student can come to campus and connect to the network, and they instantly have a way to run their projects, or train their neural networks with much more power than their laptop can provide.
## How we built it
We used Django and JavaScript for the interface that the end user accesses. We used Python and lots of bash scripts to get things working on our servers, on both the low-cost Raspberry Pis and the remote computer that does most of the processing.
## Challenges we ran into
We had trouble sandboxing code and setting limits on how much compute time one person has access to. We also had issues with lossy compression.
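One way to cap a submission's compute time, sketched in the spirit of that challenge: a hard CPU limit via POSIX resource limits plus a wall-clock timeout. (Real sandboxing needs far more, e.g. filesystem and network isolation; the limits here are illustrative.)

```python
import resource
import subprocess

def run_limited(path: str, cpu_seconds: int = 5) -> str:
    def limit():
        # Applied in the child just before exec: kill at cpu_seconds of CPU.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    result = subprocess.run(
        ["python3", path],
        preexec_fn=limit,          # POSIX only
        capture_output=True,
        text=True,
        timeout=cpu_seconds * 2,   # wall-clock backstop for sleeping processes
    )
    return result.stdout

print(run_limited("submission.py"))
```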
## Accomplishments that we're proud of
Establishing asynchronous connections between 3 or more different computers at once.
Managed to gain access to our server after I disabled passwords but forgot to copy over my ssh keys.
## What we learned
How not to mess up permissions, and how to manage our very limited time even when we're burnt out.
## What's next for Untitled Compute Power Sharing Thing
We intend to fix a few small security issues and add support for more programming languages.
|
losing
|
**check out the project demo during the closing ceremony!**
<https://youtu.be/TnKxk-GelXg>
## Inspiration
On average, half of patients with chronic illnesses like heart disease or asthma don't take their medication. Reports estimate that poor medication adherence could be costing the country $300 billion in increased medical costs.
So why is taking medication so tough? People get confused and people forget.
When the pharmacy hands over your medication, it usually comes with a stack of papers and stickers on the pill bottles, and then on top of that the pharmacist tells you a bunch of mumbo jumbo that you won't remember.
<http://www.nbcnews.com/id/20039597/ns/health-health_care/t/millions-skip-meds-dont-take-pills-correctly/#.XE3r2M9KjOQ>
## What it does
The solution:
How are we going to solve this? With a small scrap of paper.
NekoTap helps patients access important drug instructions quickly, when they need them.
On the pharmacist’s end, he only needs to go through 4 simple steps to relay the most important information to the patients.
1. Scan the product label to get the drug information.
2. Tap the cap to register the NFC tag. Now the product and pill bottle are connected.
3. Speak into the app to make an audio recording of the important dosage and usage instructions, as well as any other important notes.
4. Set a refill reminder for the patients. This will automatically alert the patient once they need refills, a service that most pharmacies don’t currently provide as it’s usually the patient’s responsibility.
On the patient’s end, after they open the app, they will come across 3 simple screens.
1. First, they can listen to the audio recording containing important information from the pharmacist.
2. If they swipe, they can see a copy of the text transcription. Notice how there are easy-to-access zoom buttons to enlarge the text size.
3. Next, there’s a YouTube instructional video on how to use the drug in case the patient needs visuals.
Lastly, the menu options here allow the patient to call the pharmacy if he has any questions, and also set a reminder for himself to take medication.
## How I built it
* Android
* Microsoft Azure mobile services
* Lottie
## Challenges I ran into
* Getting the backend to communicate with the clinician and the patient mobile apps.
## Accomplishments that I'm proud of
Translations to make it accessible for everyone! Developing a great UI/UX.
## What I learned
* UI/UX design
* Android development
|
## Inspiration
40 million people in the world are blind, including 20% of all people aged 85 or older. Half a million people suffer paralyzing spinal cord injuries every year. 8.5 million people are affected by Parkinson’s disease, with the vast majority of these being senior citizens. The pervasive difficulty for these individuals to interact with objects in their environment, including identifying or physically taking the medications vital to their health, is unacceptable given the capabilities of today’s technology.
First, we asked ourselves the question, what if there was a vision-powered robotic appliance that could serve as a helping hand to the physically impaired? Then we began brainstorming: Could a language AI model make the interface between these individuals’ desired actions and their robot helper’s operations even more seamless? We ended up creating Baymax—a robot arm that understands everyday speech to generate its own instructions for meeting exactly what its loved one wants. Much more than its brilliant design, Baymax is intelligent, accurate, and eternally diligent.
We know that if Baymax was implemented first in high-priority nursing homes, then later in household bedsides and on wheelchairs, it would create a lasting improvement in the quality of life for millions. Baymax currently helps its patients take their medicine, but it is easily extensible to do much more—assisting these same groups of people with tasks like eating, dressing, or doing their household chores.
## What it does
Baymax listens to a user’s requests on which medicine to pick up, then picks up the appropriate pill and feeds it to the user. Note that this could be generalized to any object, ranging from food, to clothes, to common household trinkets, to more. Baymax responds accurately to conversational, even meandering, natural language requests for which medicine to take—making it perfect for older members of society who may not want to memorize specific commands. It interprets these requests to generate its own pseudocode, later translated to robot arm instructions, for following the tasks outlined by its loved one. Subsequently, Baymax delivers the medicine to the user by employing a powerful computer vision model to identify and locate a user’s mouth and make real-time adjustments.
## How we built it
The robot arm by Reazon Labs, a 3D-printed arm with 8 servos as pivot points, is the heart of our project. We wrote custom inverse kinematics software from scratch to control these 8 degrees of freedom and navigate the end-effector to a point in three-dimensional space, along with building our own animation methods for the arm to follow a given path. Our animation methods interpolate the arm’s movements through keyframes, or defined positions, similar to how film editors dictate animations. This allowed us to facilitate smooth, yet precise, motion which is safe for the end user.
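As a rough illustration of the keyframe idea (the real arm SDK calls are omitted, and the poses and step counts below are made up), interpolating between captured joint poses looks something like this:

```python
import numpy as np

def interpolate_keyframes(keyframes, steps_per_segment=30):
    """Smoothly interpolate 8-servo joint poses between keyframes.

    keyframes: list of length-8 angle arrays, each a captured arm pose.
    Returns a dense sequence of poses to stream to the servos.
    """
    path = []
    for start, end in zip(keyframes[:-1], keyframes[1:]):
        start, end = np.asarray(start, float), np.asarray(end, float)
        for t in np.linspace(0.0, 1.0, steps_per_segment, endpoint=False):
            s = 3 * t**2 - 2 * t**3  # smoothstep easing keeps motion gentle
            path.append((1 - s) * start + s * end)
    path.append(np.asarray(keyframes[-1], float))
    return path

# Example: two made-up poses for an 8-DOF arm, in degrees
rest = [0, 0, 0, 0, 0, 0, 0, 0]
reach = [30, -15, 45, 10, 0, 20, -5, 0]
trajectory = interpolate_keyframes([rest, reach])
```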
We built a pipeline to take in speech input from the user and process their request. We wanted users to speak with the robot in natural language, so we used OpenAI’s Whisper system to convert the user commands to text, then used OpenAI’s GPT-4 API to figure out which medicine(s) they were requesting assistance with.
We focused on computer vision to recognize the user’s face and mouth. We used OpenCV to get the webcam live stream and used 3 different Convolutional Neural Networks for facial detection, masking, and feature recognition. We extracted coordinates from the model output to extrapolate facial landmarks and identify the location of the center of the mouth, simultaneously detecting if the user’s mouth is open or closed.
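Our actual pipeline uses three CNNs, but a minimal stand-in for the mouth-targeting step, using an OpenCV Haar cascade plus a geometric estimate of the mouth's position, looks roughly like this:

```python
import cv2

# Haar cascade stands in for the CNN face detector in this sketch
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam live stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # The mouth sits roughly centered, ~80% down the face box
        mouth = (x + w // 2, y + int(0.8 * h))
        cv2.circle(frame, mouth, 5, (0, 255, 0), -1)
    cv2.imshow("mouth target", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```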
When we put everything together, our result was a functional system where a user can request medicines or pills, and the arm will pick up the appropriate medicines one by one, feeding them to the user while making real time adjustments as it approaches the user’s mouth.
## Challenges we ran into
We quickly learned that working with hardware introduced a lot of room for complications. The robot arm we used was a prototype, entirely 3D-printed yet equipped with high-torque motors, and parts were subject to wear and tear very quickly, which sacrificed the accuracy of its movements. To solve this, we implemented torque and current limiting software and wrote Python code to smoothen movements and preserve the integrity of instruction.
Controlling the arm was another challenge because it has 8 motors that need to be manipulated finely enough in tandem to reach a specific point in 3D space. We had to not only learn how to work with the robot arm SDK and libraries but also comprehend the math and intuition behind its movement. We did this by utilizing forward kinematics and restricted the servo motors’ degrees of freedom to simplify the math. Realizing it would be tricky to write all the movement code from scratch, we created an animation library for the arm in which we captured certain arm positions as keyframes and then interpolated between them to create fluid motion.
Another critical issue was the high latency between the video stream and robot arm’s movement, and we spent much time optimizing our computer vision pipeline to create a near instantaneous experience for our users.
## Accomplishments that we're proud of
As first-time Hackathon participants, we are incredibly proud of the progress we were able to make in a very short amount of time, proving to ourselves that with hard work, passion, and a clear vision, anything is possible. Our team did a fantastic job embracing the challenge of using technology unfamiliar to us, and stepped out of our comfort zones to bring our idea to life. Whether it was building the computer vision model, or learning how to interface the robot arm’s movements with voice controls, we ended up building a robust prototype which far surpassed our initial expectations. One of our greatest successes was coordinating our work so that each function could be pieced together and emerge as a functional robot. Let’s not overlook the success of not eating the Hi-Chews we were using for testing!
## What we learned
We developed our skills in frameworks we were initially unfamiliar with such as how to apply Machine Learning algorithms in a real-time context. We also learned how to successfully interface software with hardware - crafting complex functions which we could see work in 3-dimensional space. Through developing this project, we also realized just how much social impact a robot arm can have for disabled or elderly populations.
## What's next for Baymax
Envision a world where Baymax, a vigilant companion, eases medication management for those with mobility challenges. First, Baymax can be implemented in nursing homes, then can become a part of households and mobility aids. Baymax is a helping hand, restoring independence to a large disadvantaged group.
This innovation marks an improvement in increasing quality of life for millions of older people, and is truly a human-centric solution in robotic form.
|
## Inspiration
Our members enjoy many forms of trading card games and collectables, and have found ourselves troubled with sorting our collections in an organized fashion, yet as quickly as possible. We then came up with a solution for collectors of all kinds, the CSISx100!
## What it does
The CSISx100 is a device into which you slot up to 100 trading cards. The device then feeds them, one at a time, into a chamber where a camera takes an image of the front of the card, recognizes key features of the card to identify it among the collection, then stores those features in a database readable by the user.
## How we built it
Using available materials, we created a motorized feeder into a camera 'slot' where a camera captures the image of the card. The motor is powered and controlled via Arduino, and the rest is controlled via a Python program on the host computer.
## Challenges we ran into
The main challenge our team faced was the image detection of the card name. For our test cards, we used Magic: The Gathering cards, which can be identified via Title and Set ID. The libraries we used in the beginning were unreliable and spotty with results, and led to us eventually upgrading and using Google Cloud services for Image to Text detection. Another challenge was detecting the Set symbol among the rest of the features on the card, as well as utilizing image processing to separate the symbol to compare it versus other pre-stored data images.
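For reference, a minimal sketch of the kind of Cloud Vision call involved in title detection is below; the assumption that the title is the first detected line (which holds for Magic's top-of-card layout) is ours, and the file name is a placeholder:

```python
from google.cloud import vision

def read_card_title(image_path):
    """Send a card photo to Cloud Vision and return the likely title."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    annotations = response.text_annotations
    if not annotations:
        return None
    # The first annotation holds the full text block; take its first line
    return annotations[0].description.splitlines()[0]

print(read_card_title("card_0001.jpg"))
```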
## Accomplishments that we're proud of
Our largest accomplishment is creating a working, compact initial prototype from scratch using the limited materials available to us at HackHarvard. Our team also had to face many steep learning curves, as we are new to image processing and using hardware in conjunction with software to create a working product.
## What we learned
We learned many things, some of which include image processing via python, team coordination, and design and execution of a new product. These things will allow us to expand our knowledge in the future and increase our competitive nature at future hackathons.
## What's next for Card counter 100
Our team plans to take our product to the next level, upgrading from scavenged materials found at HackHarvard to a 3D printed model that is a much better representation of our product. We also plan on improving our code to use for our personal collections, as well as to expand to other card games and collections, ending with a patented product that will be sold to consumers world-wide.
|
winning
|
# Highlights
A product of [YHack '16](http://www.yhack.org/). Built by Aaron Vontell, Ali Benlalah & Cooper Pellaton.
## Table of Contents
* [Overview](#overview)
* [Machine Learning and More](#machine-learning-and-more)
* [Our Infrastructure](#our-infrastructure)
* [API](#api)
## Overview
The first thing you're probably thinking is what this ambiguously named application is, and secondly, you're likely wondering why it has any significance. Firstly, Highlights is the missing component of your YouTube life, and secondly, it's important because we leverage Machine Learning to find out what content is most important in a particular piece of media in a way that has never been done before.
Imagine this scenario: you subscribe to 25+ YouTube channels but over the past 3 weeks you simply haven't had the time to watch videos because of work. Today, you decide that you want to watch one of your favorite vloggers, but realize you might lack the context to understand what has happened in her/his life since you last watched which led her/him to this current place. Here enters Highlights. Simply download the Android application, log in with your Google credentials and you will be able to watch the so-called *highlights* of your subscriptions for all of the videos which you haven't seen. Rather than investing hours in watching your favorite vlogger's past weeks' worth of videos, you can get caught up in 30 seconds - 1 minute by simply being presented with all of the most important content in those videos in one place, seamlessly.
## Machine Learning and More
Now that you understand the place and significance of Highlights, a platform that can distill any media into bite-sized chunks that can be consumed quickly in the order of their importance, it is important to explain the technical details of how we achieve such a gargantuan feat.
Let's break down the pipeline.
1. We start by accessing your Google account within the YouTube scope and get a list of your current subscriptions, 'activities' such as watched videos, comments, etc., your recommended videos and your home feed.
2. We take this data and extract the key features from it. Some of these include:
* The number of videos watched on a particular channel.
* The number of likes/dislikes you have and the categories on which they center.
* The number of views a particular video has/how often you watch videos after they have been posted.
* Number of days after publication. This is most important in determining the significance of a recommended video to a particular user.
We go about this process for every video that the user has watched, or which exists in his or her feed to build a comprehensive feature set of the videos that are in their own unique setting.
3. We proceed by feeding the data and probabilities from the aforementioned investigation into a new machine learning model, which we use to determine the likelihood of a user watching any particular recommended video, etc.
4. For each video in the set we are about to iterate over, the video is either a recommended watch, or a video in the user's feed which she/he has not seen. The key to this process is a system we like to call 'video quantization'. In this system we break each video down into its components. We look at the differences between images and end up analyzing something near to every other 2, 3, or 4 frames in a video. This reduces the size of the video that we need to analyze while ensuring that we don't miss anything important. As you will note here, a lot of the processes we undertake have bases in very comprehensive and confusing mathematics. We've done our best to keep math out of this, but know that one of the most important tools in our toolset is the exponential moving average.
5. This is the most important part of our entire process: the scene detection. To distill this down to its most basic principles, we use features like lighting, edge/shape detection and more to determine how similar or different every frame is from the next. Using this methodology of trying to find the frames that are different, we coin this change in setting a 'scene'. Now, 'scenes' by themselves are not exciting, but coupled with our knowledge of the context of the video we are analyzing, we can come up with very apt scenes. For instance, in a horror movie we know that we would be looking for something like 5-10 seconds of difference between the first frame of that series and the last frame; this is what is referred to as a 'jump' or 'scare' cut. So, using our exponential moving average and background subtraction, we are able to figure out the changes in between and validate scenes (a stripped-down sketch of this appears after the pipeline).
6. We pass this now deconstructed video into the next part of our pipeline where we will generate unique vectors for each of them that will be used in the next stage. What we are looking for here is the key features that define a frame. We are trying to understand, for example, what makes a 'jump' cut a 'jump' cut. Features that we are most commonly looking for include:
* Intensity of an analyzed area.
+ EX: The intensity of a background coloring vs edges, etc.
* The length of each scene.
* Background.
* Speed.
* Average Brightness
* Average background speed.
* Position
* etc.
Armed with this information we are able to derive a unique column vector for each scene which we will then feed into our neural net.
7. The meat and bones of our operation: the **neural net**! What we do here is not terribly complicated. At its most basic principles, we take each of the above column vectors and feed it into this specialized machine learning model. What we are looking for is to derive a sort order for these features. Our initial training set, a group of 600 YouTube videos which @Ali spent a significant amount of time training, is used to help to advance this net. The gist of what we are trying to do is this: given a certain vector, we want to determine its significance in the context of the YouTube universe in which each of our users lives. To do this we abide by a semi-supervised learning model in which we are looking over the shoulder of the model to check the output. As time goes on, this model begins to tweak its own parameters and produce the best possible output given any input vector.
8. Lastly, now having a sorted order of every scene in a user's YouTube universe, we go about reconstructing the top 'highlights' for each user. That is, in part 7 of our pipeline we figured out which vectors carried the greatest weight. Now we want to turn these back into videos that the user can watch quickly and derive the greatest meaning from. Using a litany of Google's APIs, we turn the videoIds, categories, etc. into parameterized links which the viewer is then shown within our application.
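As referenced in step 5, here is a stripped-down sketch of an EMA-based cut detector; the sampling rate, smoothing factor, and spike threshold are illustrative stand-ins for our tuned values:

```python
import cv2
import numpy as np

def detect_scene_cuts(video_path, sample_every=3, alpha=0.1, spike=3.0):
    """Flag sampled frames whose difference spikes far above the
    exponential moving average of recent frame differences."""
    cap = cv2.VideoCapture(video_path)
    prev, ema, cuts, idx = None, None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if idx % sample_every:  # analyze only every Nth frame
            continue
        gray = cv2.cvtColor(cv2.resize(frame, (160, 90)),
                            cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = float(np.mean(cv2.absdiff(gray, prev)))
            if ema is not None and diff > spike * ema:
                cuts.append(idx)  # likely scene boundary
            ema = diff if ema is None else alpha * diff + (1 - alpha) * ema
        prev = gray
    cap.release()
    return cuts
```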
## Our Infrastructure
Our service is currently broken down into the following core components:
* Highlights Android Application
+ Built and tested on Android 7.0 Nougat, and uses the YouTube Android API Sample Project
+ Also uses various open source libraries (OkHTTP, Picasso, ParallaxEverywhere, etc...)
* Highlights Web Service (Backs the Pipeline)
* The 'Highlighter' or rather our ML component
## API
### POST
* `/api/get_subscriptions`
This requires the client to `POST` a body of the nature below. This will then trigger the endpoint to go and query the YouTube API for the user's subscriptions, and then build a list of the most recent videos which he/she has not seen yet.
```
{
"user":"Cooper Pellaton"
}
```
* `/api/get_videos`
*DEPRECATED*. This endpoint requires the client to `POST` a body similar to that below and then will fetch the user's most recent activity in list form from the YouTube API.
```
{
"user":"Cooper Pellaton"
}
```
### GET
* `/api/fetch_oauth`
So optimally, what should happen when you call this method is that the user should be prompted to enter her/his Google credentials to authorize the application to then be able to access her/his YouTube account.
- The way that this is currently architected, the user's entrance into our platform will immediately trigger learning to occur on their videos. We have since *DEPRECATED* our ML training endpoint in favor of one `GET` endpoint to retrieve this info.
* `/api/fetch_subscriptions`
To get the subscriptions for a current user in list form simply place a `GET` to this endpoint. Additionally, a call here will trigger the ML pipeline to begin based on the output of the subscriptions and user data.
* `/api/get_ml_data`
For each user there is a queue of their Highlights. When you query this endpoint the response will be the return of a dequeue operation on said queue. Hence, you are guaranteed to never have overlap or miss a video.
- To note: in testing we have a means to bypass the dequeue and instead append, constantly, directly to the queue so that you can ensure you are retrieving the appropriate response.
|
## Inspiration
The amount of data in the world today is mind-boggling. We are generating 2.5 quintillion bytes of data every day at our current pace, but the pace is only accelerating with the growth of IoT.
We felt that the world was missing a smart find-feature for videos. To unlock heaps of important data from videos, we decided on implementing an innovative and accessible solution to give everyone the ability to access important and relevant data from videos.
## What it does
CTRL-F is a web application implementing computer vision and natural-language-processing to determine the most relevant parts of a video based on keyword search and automatically produce accurate transcripts with punctuation.
## How we built it
We leveraged the MEVN stack (MongoDB, Express.js, Vue.js, and Node.js) as our development framework, and integrated multiple machine learning/artificial intelligence techniques provided by industry leaders, shaped by our own neural networks and algorithms, to provide the most efficient and accurate solutions.
We perform key-word matching and search result ranking with results from both speech-to-text and computer vision analysis. To produce accurate and realistic transcripts, we used natural-language-processing to generate phrases with correct punctuation.
We used Vue to create our front-end and MongoDB to host our database. We implemented both IBM Watson's speech-to-text API and Google's Computer Vision API along with our own algorithms to perform solid key-word matching.
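To give a flavor of the rank-matching, here is a simplified sketch; the segment schema and weights are hypothetical stand-ins for our real pipeline, which combines more signals:

```python
def rank_segments(segments, query):
    """Rank video segments by keyword overlap with a search query.

    segments: dicts like {"start": 12.5, "text": "...", "labels": [...]}
    pairing speech-to-text output with vision labels per time window.
    """
    terms = set(query.lower().split())
    scored = []
    for seg in segments:
        words = set(seg["text"].lower().split())
        labels = set(label.lower() for label in seg.get("labels", []))
        # Weight spoken matches above visual matches (illustrative)
        score = 2 * len(terms & words) + len(terms & labels)
        if score:
            scored.append((score, seg["start"], seg["text"]))
    return sorted(scored, reverse=True)

hits = rank_segments(
    [{"start": 30.0, "text": "the dog runs in the park",
      "labels": ["dog", "grass"]}],
    "dog park")
print(hits)
```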
## Challenges we ran into
Trying to implement both Watson's API and Google's Computer Vision API proved to have many challenges. We originally wanted to host our project on Google Cloud's platform, but with many barriers that we ran into, we decided to create a RESTful API instead.
The number of new technologies we were figuring out led to some sleep deprivation. As it turns out, staying up for way longer than you're supposed to is the best way to increase your rate of errors and bugs.
## Accomplishments that we're proud of
* Implementation of natural-language-processing to automatically determine punctuation between words.
* Utilizing both computer vision and speech-to-text technologies along with our own rank-matching system to determine the most relevant parts of the video.
## What we learned
* Learning a new development framework a few hours before a submission deadline is not the best decision to make.
* Having a set scope and specification early-on in the project was beneficial to our team.
## What's next for CTRL-F
* Expansion of the product into many other uses (professional education, automate information extraction, cooking videos, and implementations are endless)
* The launch of a new mobile application
* Implementation of a Machine Learning model to let CTRL-F learn from its correct/incorrect predictions
|
## Inspiration
We spend a noticeable amount of our weekly time watching YouTube videos, be it for entertainment, education, or exploring our interests. In most cases, the overall intent is to obtain some form of information from the video. We were seeking a solution to increase the efficiency of this "information extraction" process as YouTube's speed adjustment option is the only relevant tool. And so we decided to develop YouTube Summarizer!
## What it does
The summarizer is a Chrome extension that works with YouTube to extract the key points of a video and make them accessible to the user. The summary is customizable per user's request, allowing varying extents of summarization. Key points from the summarization process, together with corresponding time-stamps, are then presented to the user through a small UI next to the video feed. This allows the user to navigate to more important sections of the video, to get to the key points more efficiently.
## How I built it
We first set up a GitHub repo for project management and made a readme to keep track of dependencies and environment information. Our project utilized a Google Chrome extension for the frontend and a Django server for the backend, so we initially developed both parts of the project separately and integrated each service towards the end of the hackathon. We also had to choose which online APIs and services to use as our project progressed, and decided to use Punctuator, Resoomer, and YouTube closed captions as the key technologies involved in the timestamped summary generation.
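A simplified sketch of how summary sentences can be matched back to caption timestamps is below; the caption schema and word-overlap heuristic are illustrative, not our exact implementation:

```python
def timestamped_summary(captions, summary_sentences):
    """Attach caption timestamps to summarizer output.

    captions: list of {"start": seconds, "text": "..."} entries from
    the video's closed captions; summary_sentences: key sentences as
    returned by a summarization service.
    """
    points = []
    for sentence in summary_sentences:
        key_words = set(sentence.lower().split())
        # Pick the caption sharing the most words with the sentence
        best = max(captions, key=lambda c:
                   len(key_words & set(c["text"].lower().split())))
        points.append((best["start"], sentence))
    return sorted(points)

print(timestamped_summary(
    [{"start": 0.0, "text": "welcome to the channel"},
     {"start": 42.0, "text": "the key idea is attention"}],
    ["The key idea is attention."]))
```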
## Challenges I ran into
Out of the numerous challenges we encountered throughout this hackathon, the most significant was an overestimation of the technologies available to us. Overestimating the ability of transcription services forced us to make compromises, while our attempts to get the YouTube player to control playback pressed us to create a hacky solution. Overcoming these challenges and bridging the capabilities of these technologies was an integral part of the project.
## Accomplishments and What I learned
What we learned during this project was the importance of understanding connections between different technologies and how important it is to account for possible bugs when incorporating different software packages into the corpus of a final software product. We understood the importance of time management, of fully grasping the difficulties which may occur, and of knowing when to change the course of the project to use our time more efficiently.
Not only did this experience help us create a stronger bond as a team and a group of friends, we also learned how to set up web services using Django, create Google Chrome extensions, and implement cloud computing. In addition, we used JavaScript, HTML, and CSS to develop web apps and wrote software packages in Python.
## What's next for YouTube Summarizer
Even though our current version of YouTube Summarizer is able to provide users with valuable information regarding the videos they watch and can provide a compact summary of the video, we believe that this tool can be further developed to meet the needs of YouTubers.
While we used advanced online services to provide a summary of the description, we believe that we can use a variety of text, audio and video analysis tools to provide valuable information and more accurate summaries for videos. As an example, we attempted to use the online Google Cloud AI services to provide transcripts for videos with no captions, and AWS services to analyze audio files and figure out important sections of the video based on its correlated audio file. Unfortunately, many of these services required more time in order to fully analyze the information and were not incorporated into the final product. In addition, we believe that using YouTube’s video API, we will be able to provide more control for our users and give them more freedom when using this tool.
Overall, we believe that video analysis and summarization is not only important to the individual, but proves to be even more important when considering big data analysis. Hence, we believe YouTube Summarizer can be the start of a novel technology in informatics.
|
winning
|
## About
We are team #27 on discord, team members are: anna.m#8841, PawpatrollN#9367, FrozenTea#9601, bromainelettuce#8008.
Domain.com challenge: [beatparty.tech]
## Inspiration
Due to the current pandemic, we decided to create a way for people to exercise in the safety of their home, in a fun and competitive manner.
## What it does
BeatParty is an augmented reality mobile app designed to display targets for the user to hit in a specific pattern to the beat of a song. The app has a leaderboard to promote healthy competition.
## How we built it
We built BeatParty in Unity, using plug-ins from OpenPose, and echoAR's API for some models.
## Challenges we ran into
Without native support for the front camera from Apple ARKit and Google ARCore, we instead had to use OpenPose, a plug-in that was not able to take full advantage of the phone's processor, resulting in a lower quality image.
## What we learned
**Unity:**
* We learned how to import libraries into Unity and how to manipulate elements within the project folder.
* We learned how to do the basics in Unity, such as creating hitboxes.
* We learned how to use music and how to create and destroy GameObjects.
**UI:**
* We learned how to implement various UI components such as making an animated logo alongside simpler things such as using buttons in Unity.
## What's next for BeatParty
The tracking software can be further developed to be more accurate and respond faster to user movements. We plan to add an online multiplayer mode through our website ([beatparty.tech]). We also plan to use echoAR to make better objects for the user to interact with (e.g. the hitboxes or cosmetics). BeatParty is currently an Android application and we intend to expand BeatParty to both iOS and Windows in the near future.
|
# Inspiration 💡
Every day, doctors across the globe use advanced expertise and decades of medical breakthroughs to diagnose patients and craft unique prescriptions. The inspiration for this mobile application stems from the irony that the result of such precision is without fail, the chicken scratch found on a doctor’s note. The physician not only entrusts that the patient will keep track of the crumpled paper but they require that the pharmacist on-call will understand their professional scribble. The plan is to create a platform that leverages technology to streamline the prescription filling process, making it easier for doctors to authenticate their work and for patients and pharmacists alike to be confident in their prescriptions.
# What it does 🚀
The mobile application is designed to streamline prescription filling for patients, physicians and pharmacists. It starts with the written doctor’s note, which is scanned via the mobile app and transcribed in real-time, allowing the physician to confirm the prescription, directly edit it or retake the scan. This lets physicians authenticate the interpretation of their prescription before shipping it off to the pharmacy via a shared patient database. During registration, the patient volunteers personal information which populates the database and ensures that general questions such as age, address and insurance only have to be answered once. In conjunction with the transcribed prescription, this information will be used to fill any necessary pharmaceutical forms that are scanned via the app. With completed paperwork, a transcribed prescription and verified patient information, pharmacists are significantly less likely to make errors fulfilling patient orders.
# How we built it 🏗️
To produce this mobile application, we utilized a diverse technology stack to integrate various components and create an uninterrupted product experience. Using our defined user types (patient, physician and pharmacist), we derived the necessary functions for each, prioritizing and placing them to create an intuitive UI. This paved the way for early design wireframes and an eventual high-fidelity Figma prototype which directed our front-end development in React Native. On the back-end, the lightweight Python framework Flask was used to handle registration, data transferring and transcription. Our application required keeping track of a large amount of data, which we stored/accessed within a Redis Cloud database. In order to accurately interpret text from forms and notes provided by doctors/pharmacies, we utilized the Google Cloud Vision OCR API as well as Gemini Pro, providing users with accurate transcriptions of images that were processed with the Pillow API. Then, in order to autocomplete forms with efficiency and accuracy, we deployed the OpenAI API as an LLM, generating prompts from information found within the Redis Cloud database to fill out forms with the correct answers. The back-end of our project was developed in parallel with our front-end, which was entirely built using React Native and is capable of supporting both Android and iOS devices. By using the navigation library, the various components of the application are split into their own pages with styling and functions unique to each one.
## Form filling process:
* Read form
	+ Extract text and location using Google Cloud OCR
	+ Group text into coherent groups (vertical + horizontal coordinate comparison and LLMs)
	+ Detect fillable fields and clean punctuation (LLM and Python)
* Answer fields (LLM)
* Write to form (Python Pillow library; see the sketch below)
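A minimal sketch of that final write-to-form step with Pillow (field coordinates, file names, and answers below are placeholders):

```python
from PIL import Image, ImageDraw, ImageFont

def write_answers(form_path, out_path, filled_fields):
    """Draw generated answers onto a scanned form at the field
    coordinates produced by the OCR and grouping steps above.

    filled_fields: list of (x, y, answer) tuples in pixel coordinates.
    """
    form = Image.open(form_path).convert("RGB")
    draw = ImageDraw.Draw(form)
    font = ImageFont.load_default()
    for x, y, answer in filled_fields:
        draw.text((x, y), answer, fill=(0, 0, 128), font=font)
    form.save(out_path)

write_answers("pharmacy_form.png", "pharmacy_form_filled.png",
              [(220, 145, "Jane Doe"), (220, 190, "1998-04-02")])
```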
# Challenges we ran into 🧩
Throughout our project, we faced several obstacles which we ultimately overcame with perseverance and targeted learning. Before writing a single line of code, we discussed our interests, skillsets and project ambitions, finding clear differences which would necessitate a degree of compromise. Moreover, we decided to implement a tech stack with Flask on the backend and React Native on the frontend, which proved to be frustrating as they could have been much more complementary. As a result, we had a handful of complications that would have been avoidable with a better-composed tech stack. Given the shorter nature of this particular hackathon, we were very much forced to stick with the decisions we made early on without much room to pivot, greatly improving our critical thinking and debugging skills. Another issue we ran into was the inconsistency of practices in the medical industry, as doctors’ notes and pharmaceutical forms often differed widely from one person/company to another, meaning we had to build scripts that would satisfy a wide variety of possibilities.
# Accomplishments that we're proud of 🏆
Ultimately, we are extremely proud that we were able to successfully build a full-stack mobile application within a 24-hour window, especially given that 3 of our members were not particularly experienced in app development before this hackathon. However, we were all able to find ways to contribute to the project whether it be through design elements, programming, or pitching. We are also extremely happy with the number of different libraries/APIs that we were able to put to use in this project. Having the experience of working with these APIs, many of which were for the first time, was extremely exciting for us and allowed us to build a product that we found to be extremely interesting.
# What we learned 🧠
Through this experience, all members of our group came away with an enhanced skillset. We each improved our developing expertise as, despite our lack of experience with the chosen technologies and tools, we ultimately built a strong project that applied those tools well. Beyond just coding, we also improved our product thinking, prioritizing the needs of each end-user to design a well-rounded and truly valuable mobile application. Above all, we learned the importance of programming as a collaborative process, finding success by sharing responsibility between members and optimizing workloads with everyone's unique skill set. We are beyond excited to apply our updated mindset to participate in future hackathons.
# What's next for PharmFill 🚀
We are interested in continuing our work with PharmFill, with a focus on adding more features and polishing those already implemented. Our belief in PharmFill's potential to significantly impact the medical industry fuels our enthusiasm. A key area of development is the creation of more complex form-filling algorithms, capable of handling all types of forms. This advancement will not only enhance the app's functionality but also solidify its position as a transformative tool in healthcare.
|
## Inspiration
One pressing issue amongst the elderly population that modern medicine has not been able to properly diagnose and solve is dementia and Alzheimer's disease. These two crippling diseases affect the mind, and their symptoms are similar: a loss of memory.
We took it upon ourselves to challenge the status quo and invent new ways to help elders who are struggling with these diseases. We recognized that memories are intangible - you cannot physically touch them in any way, shape, or form - but it is generally acknowledged that everyone has them. Not only that, but these memories tend to evoke feelings of happiness, sadness, anger, and many more. Memories are crucial to one's identity and we wanted to preserve these memories in a way that could be possible: generative AI.
## What it does
In essence, we want *you* to tell us the type of memory you're trying to describe. Are you thinking of a loved one, a lover, or something else? Is it an object?
Then we ask: how does it make you feel? Sad? Happy? You don't know? Any of these options are valid - describing memories and how you felt about it is difficult.
After that, we prompt you to expand on this memory. What else do you have to say about it? Did this take place at the beach? Was it about food? The possibilities are endless.
Once you've finished trying to recollect your memory, we use generative AI to recreate a scene that is *similar* to what you've described. We hope that by generating scenes similar to what you've experienced, any time you come back to revisit the website you can see the image and the caption under it, and remember what led you there in the first place.
## How we built it
The app was built using Next.js, a React framework. In terms of storing data, we used Supabase, a Postgres database. Much of the heavy lifting came from utilizing the OpenAI API and Replicate API to converse and generate images.
## Challenges we ran into
The main bulk of our issues came from actually trying to use the OpenAI API and Replicate API. For some reason unknown to us, these APIs were rather difficult to implement in conjunction with the scope of what we were trying to accomplish.
The second part of our issues was implementing Supabase into Next.js. This also took a long time because we weren't too familiar with setting up Supabase in a React framework like Next.js. It took a lot of learning and trial and error to get it up and running.
## Accomplishments that we're proud of
We're very proud of the fact that the three of us were able to create an app from scratch with the majority of us being unfamiliar with the tech stack involved.
In addition to that, we're very proud of the fact that we were successfully able to integrate some of the state-of-the-art technologies like ChatGPT and Stable Diffusion in our project.
Lastly, we're proud of ourselves because this was the first time any of us had done something like this, and being able to create something usable within this short amount of time is amazing; I am proud of our team.
## What we learned
Overall, as a team, we learned each other's strengths and incorporated it into our own skillsets.
For some of us, we learned how to use the Next.js framework and Supabase.
For others, it was learning how to leverage APIs like OpenAI and Replicate into Next.js.
## What's next for Rememory
What's next?
Well... I forgot...
Jokes aside, we see Rememory having massive potential to keep Alzheimer's and dementia at bay. We're constantly reminding people of the memories and experiences that they have.
We see Rememory having many more features than what it already has. We had an idea of incorporating a digital avatar chatbot to make the experience more interactive and enjoyable for elders. As a standalone project, you can view this as a journal as well, documenting memories to access them later on.
We keep track of the prompts that are put in by the users to remind them of what led them to this memory. We can leverage these prompts for patients who have Alzheimer's and model a trend of where patients started to forget these memories, as sad as that does sound.
|
partial
|
## Inspiration
✨ In a world that can sometimes feel overwhelmingly negative, we wanted to create an oasis of positivity. The inspiration for **harbor.ed** comes from the calming effect that the ocean and its inhabitants have on many people. As students, we often crave being cared for and, of course, harbored. We envisioned a digital sanctuary where individuals could find comfort and companionship in the form of sea creatures, each with a unique personality designed to uplift and support. Especially for international students who may not have a chance to see their family for months at a time, it is easy to feel lonely or sad with no one there to watch out for your feelings - harbor.ed is your safe space.
## What it does
🌊 Harbor.ed is an interactive online space where individuals feeling down or distressed can find solace and encouragement. Users visit our website and choose from a variety of sea creatures to converse with. These friendly fishes engage in supportive dialogue, offering words of encouragement. Utilizing advanced emotion detection technology, our platform identifies when a user is particularly sad and prompts the sea creatures to provide extra comfort, ensuring a personalized and empathetic experience.
## How we built it
✨ Our project harnesses a diverse tech stack, as illustrated in the architecture diagram.
The client-side is supported by technologies like **React**, Sass, and JavaScript, ensuring a seamless and engaging user interface.
The server-side is bolstered by **Google Cloud** and **GoDaddy**, providing robust and scalable hosting solutions. We've leveraged **MongoDB Atlas** for our database needs, ensuring efficient data management.
The heart of harbor.ed's empathetic interaction comes from the **OpenCV** and **AWS-powered** **emotion detection** and the innovative use of the Cohere APIs, which allow our sea creatures to respond intelligently to users' emotional states.
## Challenges we ran into
🌊 Integrating the emotion detection technology with real-time chat functionality posed a significant challenge. Ensuring user privacy while processing emotional data required careful planning and execution. Moreover, creating a diverse range of sea creature personalities that could appeal to different users was a complex task that demanded creativity and an understanding of psychological support principles. There were many things we ran into during both development and deployment, and being able to ship this project on time is a big accomplishment for us.
## Accomplishments that we're proud of
✨ We are particularly proud of creating an environment that not only recognizes emotions but responds in a comforting and supportive manner. Our success in integrating various technologies to create a seamless user experience is a testament to our team's dedication and technical prowess.
## What we learned
🌊 Throughout the development of harbor.ed, we learned the importance of interdisciplinary collaboration, combining elements of psychology, technology, and design. We also gained valuable insights into the technical aspects of real-time emotion detection and chatbot development. Many technologies like Langchain and OpenCV were also first time uses for some of our members, and seeing everything come together is extremely rewarding.
## What's next
✨ The future of harbor.ed is bright and bustling with potential. We plan to expand the range of sea creatures and personalities, improve our emotion detection algorithms for greater accuracy, and explore partnerships with mental health professionals to refine the support our digital creatures can provide. Our ultimate goal is to create a global community where everyone has access to a virtual sea of support.
|
## A bit about our thought process...
If you're like us, you might spend over 4 hours a day watching *Tiktok* or just browsing *Instagram*. After such a bender you generally feel pretty useless or even pretty sad as you can see everyone having so much fun while you have just been on your own.
That's why we came up with a healthy social media network where you directly interact with other people who are going through similar problems, so you can work together. The network itself also comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to!
## What does it even do
It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options:
**1)** You can join private discussions based around the mood that you're currently in; here you can interact completely as yourself as it is anonymous. As well, if you don't like the person, they don't have any way of contacting you and you can just refresh away!
**2)** You can join group discussions about hobbies that you might have and meet interesting people that you can then send private messages to! All the discussions are also supervised using our machine learning algorithms to make sure that no one is being picked on.
## The Fun Part
Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node** and passed into **Socket.io**. Through over 700 lines of **JavaScript** code, we were able to create multiple chat rooms and lots of different analytics.
One thing that was really annoying was storing data both in **Firebase** and locally on **Node.js**, so that we could do analytics while also sending messages at a fast rate!
There are tons of other things that we did, but as you can tell my **handwriting sucks....** So please instead watch the youtube video that we created!
## What we learned
We learned how important and powerful social communication can be. We realized that being able to talk to others, especially during a tough time in a pandemic, can make a huge positive social impact on both ourselves and others. Even when checking in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
|
## ✨ Inspiration
Quarantining is hard, and during the pandemic, symptoms of anxiety and depression are shown to be at their peak 😔[[source]](https://www.kff.org/coronavirus-covid-19/issue-brief/the-implications-of-covid-19-for-mental-health-and-substance-use/). To combat the negative effects of isolation and social anxiety [[source]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7306546/), we wanted to provide a platform for people to seek out others with similar interests. To reduce any friction between new users (who may experience anxiety or just be shy!), we developed an AI recommendation system that can suggest virtual, quarantine-safe activities, such as Spotify listening parties🎵, food delivery suggestions 🍔, or movie streaming 🎥 at the comfort of one’s own home.
## 🧐 What is Friendle?
Quarantining alone is hard😥. Choosing fun things to do together is even harder 😰.
After signing up for Friendle, users can create a deck showing their interests in food, games, movies, and music. Friendle matches similar users together and puts together some hangout ideas for those matched. 🤝💖
## 🧑💻 How we built Friendle?
To start off, our designer created a low-fidelity mockup in Figma to get a good sense of what the app would look like. We wanted it to have a friendly and inviting look to it, with simple actions as well. Our designer also created all the vector illustrations to give the app a cohesive appearance. Later on, our designer created a high-fidelity mockup for the front-end developer to follow.
The frontend was built using React Native.

We split our backend tasks into two main parts: 1) API development for DB accesses and 3rd-party API support and 2) similarity computation, storage, and matchmaking. Both the APIs and the batch computation app use Firestore to persist data.
### ☁️ Google Cloud
For the API development, we used Google Cloud Platform Cloud Functions with the API Gateway to manage our APIs. The serverless architecture allows our service to automatically scale up to handle high load and scale down when there is little load to save costs. Our Cloud Functions run on Python 3, and access the Spotify, Yelp, and TMDB APIs for recommendation queries. We also have a NoSQL schema to store our users' data in Firebase.
### 🖥 Distributed Computer
The similarity computation and matching algorithm is powered by a Node.js app which leverages the Distributed Computer for parallel computing. We encode the user's preferences and Myers-Briggs type into a feature vector, then compare similarity using cosine similarity. The cosine similarity algorithm is a good candidate for parallelizing since each computation is independent of the results of others.
We experimented with different strategies to batch up our data prior to slicing & job creation to balance the trade-off between individual job compute speed and scheduling delays. By selecting a proper batch size, we were able to reduce our overall computation time by around 70% (varies based on the status of the DC network, distribution scheduling, etc).
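For reference, the per-batch similarity computation itself is small; below is a sketch with illustrative feature vectors (the DCP job-slicing code is omitted):

```python
import numpy as np

def cosine_similarity_batch(user_vec, batch):
    """Cosine similarity between one user's feature vector and a batch
    of other users' vectors (interests + MBTI encoding). Rows are
    independent, which is what makes this easy to parallelize."""
    user_vec = np.asarray(user_vec, float)
    batch = np.asarray(batch, float)
    dots = batch @ user_vec
    norms = np.linalg.norm(batch, axis=1) * np.linalg.norm(user_vec)
    return dots / np.maximum(norms, 1e-12)

# Tiny made-up example: me vs. two other users
me = [1, 0, 1, 0.5]
others = [[1, 0, 1, 0.4], [0, 1, 0, 0.9]]
print(cosine_similarity_batch(me, others))
```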
## 😢 Challenges we ran into
* We had to be flexible with modifying our API contracts as we discovered more about 3rd-party APIs and our front-end designs became more fleshed out.
* We spent a lot of time designing for features and scalability problems that we would not necessarily face in a Hackathon setting. We also faced some challenges with deploying our service to the cloud.
* Parallelizing load with DCP
## 🏆 Accomplishments that we're proud of
* Creating a platform where people can connect with one another, alleviating the stress of quarantine and social isolation
* Smooth and fluid UI with slick transitions
* Learning about and implementing a serverless back-end allowed for quick setup and iterating changes.
* Designing and Creating a functional REST API from scratch - You can make a POST request to our test endpoint (with your own interests) to get recommended quarantine activities anywhere, anytime 😊
e.g.
`curl -d '{"username":"turbo","location":"toronto,ca","mbti":"entp","music":["kpop"],"movies":["action"],"food":["sushi"]}' -H 'Content-Type: application/json' ' https://recgate-1g9rdgr6.uc.gateway.dev/rec'`
## 🚀 What we learned
* Balancing the trade-off between computational cost and scheduling delay for parallel computing can be a fun problem :)
* Moving server-based architecture (Flask) to Serverless in the cloud ☁
* How to design and deploy APIs and structure good schema for our developers and users
## ⏩ What's next for Friendle
* Make a web-app for desktop users 😎
* Improve matching algorithms and architecture
* Adding a messaging component to the app
|
partial
|
## Inspiration
We noticed two big problems in the medical field. The first is the annoying part of being a doctor - inputting data into Electronic Health Records (EHRs). Studies show that physicians spend two-thirds of their time on the job not interacting with patients, but just staring at computer screens. This can lead to physician demoralization and burnout. We wanted to change that.
We also looked at the data on following checklists. We learned that sticking to a pre-made order of tasks during a checkup leads to far fewer mistakes on the part of the physician and dramatically helps patients around the world. We wanted to increase physician accountability to these kinds of checklists.
Enter CheckHealth.
## What it does
CheckHealth acts as a digital assistant for doctors during patient visits. Our program is in the background of your general checkup, running unobtrusively on your physician's computer. It listens for key commands that correspond to observations the doctor is making - e.g. pulse and blood pressure. If a doctor misses a step, CheckHealth asks whether he/she would like to cover the missing steps. It then takes all of the relevant information it's collecting and compiles it into a format easily integrated into all of the most common EHR systems. No more time wasted staring at computer screens for doctors! And no more wondering if patients are receiving comprehensive care! CheckHealth handles it all.
## How we built it
We used Houndify API to handle the speech-to-text and a lot of the command parsing, which forms the core of our functionality. We also used a Python backend to record audio, take in relevant patient information, and output a .csv file to be used by any primary healthcare provider EHRs. The end deliverable is a terminal-level Python program that assists physicians during general checkups.
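A minimal sketch of the .csv output step is below; the field names are hypothetical, chosen to map onto common EHR imports, and the values stand in for what the voice pipeline extracts:

```python
import csv

# Illustrative vitals record assembled from parsed voice commands
visit = {
    "patient_id": "12345",
    "pulse_bpm": 72,
    "bp_systolic": 118,
    "bp_diastolic": 76,
    "notes": "general checkup, no concerns",
}

with open("visit_record.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=visit.keys())
    writer.writeheader()
    writer.writerow(visit)
```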
## Challenges we ran into
Houndify API definitely had a learning curve, and we struggled with sifting through the documentation and figuring out how the specifications would fit with our vision. We also considered writing to an open-source EHR, but because of the technical complexity, along with the ultimate lack of interoperability, we decided against it.
## Accomplishments that we're proud of
We're really, really happy that we got the Houndify API working and our core speech-to-text functionality up and running. We also love that we were able to create a .csv file that basically acts as an active temporary patient record, which allowed our system to have long-term data persistence.
## What we learned
Of course, we acquired lots of technical skills; half of our team has never taken a formal CS class! We learned key skills in project management and delegation. But, most importantly, we learned that we're much stronger together than we are alone.
## What's next for CheckHealth?
We want to integrate with Redox, a web application that shuttles patient information between EHR systems. Becoming a Redox Node means integrability with the vast number of healthcare databases in the Redox system. We also want to see if we can activate our speech-to-text commands sonically rather than manually, so we can make CheckHealth even more frictionless than it already is. We're also considering building out more functionality and improving UX.
|
## Inspiration
Our inspiration for this project was the technological and communication gap between healthcare professionals and patients: restricted access to both one's own health data and physicians, misdiagnosis due to a lack of historical information, and rising demand for distance healthcare given the lack of physicians in rural areas and the growth of patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy and money and to empower self-care for both healthcare professionals and patients by automating standard vitals measurement and providing simple data visualization and a communication channel.
## What it does
eVital gets up-to-date daily data about our vitals from wearable technology and mobile health, and sends that data to our family doctors, practitioners or caregivers so that they can monitor our health. eVital also enables seamless communication by letting doctors assign tasks and prescriptions and monitor these through the app.
## How we built it
We built the app on iOS using data from the HealthKit API, which leverages data from the Apple Watch and the Health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, TensorFlow, and Python (for a bit of machine learning).
## Challenges we ran into
The challenges we ran into are the following:
1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws, and the vast possibilities of the healthcare field.
2) Deploying using Azure
3) Having to use Vanilla React Native installation
## Accomplishments that we're proud of
We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we have. We are also proud that our idea is not only cool but it actually solves a real-life problem that we can work on in the long-term.
## What we learned
We learned how to manage time (or how to do it better next time). We learned a lot about the health care industry and what are the missing gaps in terms of pain points and possible technological intervention. We learned how to improve our cross-functional teamwork, since we are a team of 1 Designer, 1 Product Manager, 1 Back-End developer, 1 Front-End developer, and 1 Machine Learning Specialist.
## What's next for eVital
Our next steps are the following:
1) We want to be able to implement real-time updates for both doctors and patients.
2) We want to be able to integrate machine learning into the app for automated medical alerts.
3) Add more data visualization and data analytics.
4) Adding a functional log-in
5) Adding functionality for different user types aside from doctors and patients. (caregivers, parents etc)
6) We want to put push notifications for patients' tasks for better monitoring.
|
### 💡 Inspiration 💡
We call them heroes, **but the support we give them is equal to that of a slave.**
Because of the COVID-19 pandemic, a lot of medics have to keep track of their patients' histories, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking the patients: **Their software is either clunky and bad for productivity, or too expensive to use on a bigger scale**. Most of the time, there is a lot of unnecessary management that needs to be done to get a patient on the record.
Moreover, the software can get the clinician so tired that they **risk burnout, which makes their disease predictions worse the longer they work**. With the average computer-assisted interview lasting more than 20 minutes and a medic seeing more than 30 patients a day on average, the risk is even greater. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** This platform is easy to use and focused on giving medics the **ultimate productivity tool for patient tracking.**
### ❓ What it does ❓
My MedicAid gets rid of all of the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help for their disease predictions **using an AI-assisted engine to guide them towards the urgency of the symptoms and the probable dangers that the patient is exposed to.** With all of the enhancements and our platform being easy to use, we give the user (medic) a 50-75% productivity enhancement compared to the older, expensive, and clunky patient tracking software.
### 🏗️ How we built it 🏗️
The patient's symptoms get tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return their danger level and any probable diseases, helping the medic make a decision quickly without having to elicit the symptoms themselves. This completely removes the process of asking the patient how they feel and speeds up the medic's disease prediction, since they already have some possible diseases returned by the API. We used Tailwind CSS and Next JS for the Frontend, MongoDB for the patient tracking database, and Express JS for the Backend.
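As a rough illustration of the symptom-check call, here is a minimal Python sketch assuming APIMedic's diagnosis endpoint; the token handling, parameters, and response fields are simplified from their docs and may differ.

```python
import json
import requests

# Hypothetical simplified call to APIMedic's diagnosis endpoint; a real
# integration also needs their auth service to mint the token first.
HEALTH_SERVICE = "https://healthservice.priaid.ch/diagnosis"

def check_symptoms(symptom_ids, gender, birth_year, token):
    resp = requests.get(HEALTH_SERVICE, params={
        "symptoms": json.dumps(symptom_ids),
        "gender": gender,
        "year_of_birth": birth_year,
        "token": token,
        "format": "json",
        "language": "en-gb",
    })
    resp.raise_for_status()
    return resp.json()  # list of {"Issue": {...}, "Specialisation": [...]}

for result in check_symptoms([10, 104], "male", 1990, token="YOUR_TOKEN"):
    print(result["Issue"]["Name"], result["Issue"]["Accuracy"])
```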
### 🚧 Challenges we ran into 🚧
We had never used APIMedic before, so going through their documentation and getting to implement it was one of the biggest challenges. However, we're happy that we now have experience with more 3rd party APIs, and this API is of great use, especially with this project. Integrating the backend and frontend was another one of the challenges.
### ✅ Accomplishments that we're proud of ✅
The accomplishment that we're the proudest of would probably be the fact that we got the management system and the 3rd party API working correctly. This opens the door to work further on this project in the future and get to fully deploy it to tackle its main objective, especially since this is of great importance in the pandemic, where a lot of patient management needs to be done.
### 🙋♂️ What we learned 🙋♂️
We learned a lot about CRUD APIs and the usage of 3rd party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have way more experience than us. We hope that this tool helps their productivity and reduces their burnout, which is critical, especially in this pandemic.
### 💭 What's next for My MedicAid 💭
We plan on implementing an NLP-based service to make it easier for the medics to just type what the patient is feeling like a text prompt, and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic to resolve any complaints that the patient might have, and for the medic to use if they need more info from the patient.
|
partial
|
## Inspiration
As University of Waterloo students who are constantly moving in and out of many locations, as well as constantly changing roommates, there are many times when we discovered friction or difficulty in communicating with each other to get stuff done around the house.
## What it does
Our platform allows roommates to quickly schedule and assign chores, as well as providing a message board for common household matters.
## How we built it
Our solution is built on Ruby on Rails, meant to be a quick, simple solution.
## Challenges we ran into
The time constraint made it hard to develop all the features we wanted, so we had to reduce scope on many sections and provide a limited feature-set.
## Accomplishments that we're proud of
We thought that we did a great job on the design, delivering a modern and clean look.
## What we learned
Prioritize features beforehand, and stick to features that would be useful to as many people as possible. So, instead of overloading features that may not be that useful, we should focus on delivering the core features and make them as easy as possible.
## What's next for LiveTogether
Finish the features we set out to accomplish, and finish theming the pages that we did not have time to concentrate on. We will be using LiveTogether with our roommates, and are hoping to get some real use out of it!
|
## Inspiration
College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.
Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.
## What it does
Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline.
The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity.
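As a rough sketch of this greedy pass, here is a minimal Python version with a simplified task model; the field names and the no-task-splitting assumption are ours, not the real algorithm's.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours: float
    due_day: int  # days from now

def schedule(tasks: list[Task], free_hours: list[float]) -> dict[int, list[str]]:
    """Greedily fill each day's availability with the earliest-due tasks.
    Simplification: tasks are never split across days."""
    plan: dict[int, list[str]] = {d: [] for d in range(len(free_hours))}
    queue = sorted(tasks, key=lambda t: t.due_day)
    for day, hours_left in enumerate(free_hours):
        while queue and queue[0].hours <= hours_left:
            task = queue.pop(0)
            plan[day].append(task.name)
            hours_left -= task.hours
    return plan

tasks = [Task("pset 3", 3, due_day=2), Task("essay draft", 2, due_day=1)]
print(schedule(tasks, free_hours=[4.0, 4.0, 4.0]))
# {0: ['essay draft'], 1: ['pset 3'], 2: []}
```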
The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.
## How we built it
The scheduling algorithm uses a greedy approach and is implemented with Python, object-oriented programming, and Matplotlib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.
## Challenges we ran into
The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug the various bugs that arose. We initially struggled with setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple of times before reaching the greedy solution presented here.
## Accomplishments that we're proud of
We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
|
## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and can also give students a few warm-up exercises with built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was capturing and processing live audio and delivering a real-time transcription of it to all students enrolled in the class. We were able to solve this issue with a Python script that bridges the gap between opening an audio stream and doing operations on it, while still serving the student a live version of the rest of the site.
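A minimal sketch of such a bridging script, assuming PyAudio for capture; `send_to_transcriber` is a hypothetical stand-in for the actual speech-to-text call, which in our case went through the Google Speech-to-Text API.

```python
import queue
import threading
import pyaudio

CHUNK, RATE = 1024, 16000
audio_chunks: "queue.Queue[bytes]" = queue.Queue()

def send_to_transcriber(chunk: bytes) -> None:
    """Stub: a real version streams the chunk to speech-to-text and
    broadcasts the transcript to enrolled students."""
    print(f"got {len(chunk)} bytes of audio")

def capture() -> None:
    """Read microphone audio continuously so the web server is never
    blocked by the open audio stream."""
    stream = pyaudio.PyAudio().open(
        format=pyaudio.paInt16, channels=1, rate=RATE,
        input=True, frames_per_buffer=CHUNK,
    )
    while True:
        audio_chunks.put(stream.read(CHUNK))

threading.Thread(target=capture, daemon=True).start()
while True:  # transcription loop runs on the main thread
    send_to_transcriber(audio_chunks.get())
```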
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the live transcription pipeline we got working end to end.
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone is on the same page about what is going on and all that needs to be done is made very evident. We used some APIs such as the Google Speech-to-Text API and a summary API, and we were able to work around their constraints to create a working product. We also learned more about the other technologies that we used: Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that automatically integrates with their native grading platform, so that clicker data and other quiz material can be instantly graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well, so that people never miss a beat thanks to live transcription.
|
winning
|
## Inspiration
I enjoy Choose Your Own Adventure games and I thought it would be a fun memory to re-create key moments of the hackathon! This is all done for fun. It's very simple.
## What it does
You choose your adventure at TreeHacks 2023 by making a set of choices and getting outcomes based on your answers.
## How we built it
Figma
## Challenges we ran into
Ran out of time. Lack of sleep. I wasn't sure what project I should do so I spent a lot of time over the weekend thinking of ideas. Ultimately, I decided on this project. While it is quite short - I do enjoy the message of the game and I hope people enjoy it. I went to many workshops and fun activities over the weekend!
## Accomplishments that we're proud of
Submitting a project as a beginner!
## What we learned
Figma Prototyping!
## What's next for TreeHacks 2023: Choose Your Own Adventure Game - Figma
Perhaps this adventure game will come to you at another hackathon!
|
## 🤯 Inspiration
As busy and broke college students, we're usually missing semi-essential items. Most of us suffer a little and just go without, but what if there was an alternative? Say you need a vacuum. More often than not, someone living in your hall has one they aren't opposed to sharing! Building upon this principle, our app aims to **connect** "haves" with "have-nots" and create a closer community along the way.
## 🧐 What it does
Our app provides an easy-to-use platform for students to share favors with each other; two clear use cases are borrowing items and running convenience store errands. In addition, this application encourages tighter communities and helps reduce consumerist waste (not everyone in a dorm hall needs their own of everything!).
## 🥸 How we built it
* **Frontend**: built in React Native with Expo, run on Xcode simulator
* **Backend** : authentication with Firebase, Typescript, TypeORM, GraphQL used to power Node server with Apollo editor to communicate with CockroachDB.
* **Design and UI**: Figma and Google Slides
* **Pitching** : Loom and Adobe Premiere
## 😅 Challenges we ran into
* We were unable to find a UI/UX designer for our team and initially struggled with getting the project off the ground. Heather dedicated most of her time filling that role by learning how to operate Figma and tried her very best to make an aesthetically pretty mock-up and final pitch.
* It was also difficult to work across many time zones and keep track of all members; we lost a backend person at the last minute, so Hung stepped up to the challenge and learned GraphQL, CockroachDB, and TypeORM in a really short time.
* And, of course, scope.
## 😊 Accomplishments that we're proud of
* Heather is super proud of surviving her first hackathon and having her idea finally somewhat come to life! She also now realizes how much there is left to learn and is excited to explore more into UI/UX design and what goes into developing a mobile app.
* Hung somehow managed to implement a React Native app with Expo and a GraphQL/Node server in less than 24 hours
## 🤔 What we learned
* We learned that having a reliable designer is super important, and how time moves super fast when you are having fun!
* Having a high bar is good but also terrifying :^(
## 😤 What's next for Favor App
We built a relatively functional, minimally featured project over the past two days; however, we would like to implement GPS reliability and optimization algorithms in order to increase the number of favors completed and make fulfilling favors easier. The ultimate goal is to tailor favor requests so fulfilling them doesn't deviate from the helpers' normal daily routines. We would also like to include more game-like features and other incentives. We could see ourselves using and relying on something like this a lot, so this hackathon will hopefully not be the end!
|
## Inspiration
Covid-19 has turned every aspect of the world upside down. Unwanted things happen and situations change; the resulting communication breakdowns and economic crises cannot be prevented. Thus, we developed an application that helps people survive this pandemic by providing **a shift-taker job platform which creates a win-win solution for both parties.**
## What it does
This application connects companies/managers who need employees to cover a shift for an absent employee during a certain period of time, without any contract. As a result, they will be able to cover their needs and survive this pandemic. Beyond its main goal, the app can be used more generally to help people **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their ability to get a job with Job-Dash.
## How we built it
For the design, Figma is the application that we used to lay out everything and give smooth transitions between frames. While the UI was underway, the developers started coding the functionality to make the application work.
The front end was made using React; we used React Bootstrap and some custom styling to build the pages according to the UI. State management was done using the Context API to keep it simple. We used Node.js on the backend for easy context switching between frontend and backend, with Express and an SQLite database for development. Authentication was done using JWT, allowing us to avoid storing session cookies.
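As a rough illustration of the stateless JWT flow (sketched here in Python with PyJWT, although the actual Job-Dash backend is Node/Express):

```python
import time
import jwt  # PyJWT

SECRET = "change-me"  # placeholder signing key

def issue_token(user_id: int) -> str:
    """Sign a short-lived token; no session state is kept server-side."""
    payload = {"sub": str(user_id), "exp": int(time.time()) + 3600}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Return the user id; raises jwt.InvalidTokenError on tampering
    or expiry."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]

print(verify_token(issue_token(42)))  # -> "42"
```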
## Challenges we ran into
In terms of UI/UX, handling user-information ethics was a challenge for us, as was providing complete details for both parties. On the developer side, using Bootstrap components ended up slowing us down, as our design was custom and required us to override most of the styles. It would have been better to use Tailwind, as it would've given us more flexibility while also cutting down time versus writing CSS from scratch. Due to the online nature of the hackathon, some tasks took longer.
## Accomplishments that we're proud of
Some of us picked up new technologies while working on the project, and creating a smooth UI/UX in Figma, complete with every feature we wanted, was very satisfying.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future hackathons, so it would be easier to focus on one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make some more improvements to the layout so it better serves its purpose of helping people find additional income through Job-Dash effectively. On the developer side, we would like to continue developing the features. We spent a long time thinking about different features that would be helpful to people, but due to the short nature of the hackathon, implementation was only a small part, as we underestimated the time it would take. On the bright side, we have the design ready and exciting features to work on.
|
losing
|
## Inspiration
One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track.
## What it does
Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests if there are local small business alternatives so you can help support your community!
## How we built it
React front-end, MongoDB, Express REST server
## Challenges we ran into
Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed for our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics.
## Completion
In its current state, IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics.
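As a rough sketch of how the recommendation step can combine these metrics, with illustrative field names and thresholds (the actual algorithm may weigh more signals):

```python
def recommend(price: float, monthly_spent: float, monthly_limit: float,
              blacklist: set[str], category: str) -> str:
    """Combine user settings with the scraped product metrics."""
    if category in blacklist:
        return "This category is on your blacklist. You don't need it!"
    if monthly_spent + price > monthly_limit:
        over = monthly_spent + price - monthly_limit
        return f"This puts you ${over:.2f} over budget. You don't need it!"
    return "Within budget, but consider a local small-business alternative."

print(recommend(49.99, monthly_spent=180.0, monthly_limit=200.0,
                blacklist={"gadgets"}, category="kitchen"))
```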
## What we learned
Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Along with building our extension's UI using React.js, this was a new experience for everyone. A few members of the team also spent the weekend learning how to create an Express.js API with a MongoDB database, all from scratch!
## What's next for IDNI - I Don't Need It!
We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually with one final pass of these various decision metrics to output our final verdict. Then finally, publish to the Chrome Web Store!
|
## Inspiration
After learning about the current shortcomings of disaster response platforms, we wanted to build a modernized emergency services system to assist relief organizations and local governments in responding faster and appropriately.
## What it does
safeFront is a cross between next-generation 911 and disaster response management. Our primary users are local governments and relief organizations. The safeFront platform provides organizations and governments with the crucial information that is required for response, relief, and recovery by organizing and leveraging incoming disaster related data.
## How we built it
safeFront was built using React for the web dashboard and a Flask service housing the image classification and natural language processing models to process the incoming mobile data.
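A minimal sketch of such a Flask service, with hypothetical `classify_image` and `score_text` stubs standing in for the trained models:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_image(data: bytes) -> str:
    """Stub for the image classification model."""
    return "flood"

def score_text(text: str) -> float:
    """Stub for the NLP/sentiment urgency model."""
    return 0.9 if "help" in text.lower() else 0.3

@app.route("/report", methods=["POST"])
def report():
    image = request.files["photo"].read()
    text = request.form.get("description", "")
    return jsonify({
        "disaster_type": classify_image(image),
        "urgency": score_text(text),
    })

if __name__ == "__main__":
    app.run(port=5000)
```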
## Challenges we ran into
Ranking the urgency and severity of natural disasters by reconciling image recognition, language processing, and sentiment analysis on mobile data, and reporting it through a web dashboard. Most of the team didn't have a firm grasp on React components, so building the site was how we learned React.
## Accomplishments that we're proud of
Built a full stack web application and a functioning prototype from scratch.
## What we learned
Stepping outside of our comfort zone is, by nature, uncomfortable. However, we learned that we grow the most when we cross that line.
## What's next for SafeFront
We'd like to expand our platform for medical data, local transportation delays, local river level changes, and many more ideas. We were able to build a fraction of our ideas this weekend, but we hope to build additional features in the future.
|
## Inspiration
As university students, we are often buying textbooks and other items online. However, we found that the site where we bought the majority of our items, Amazon, didn't always have the lowest prices. So we wanted to create a Chrome extension to help users find great deals and save money.
## What it does
Bandit is a Chrome extension that allows Amazon customers to easily compare the price retailers are selling a product for against what users are reselling it for on eBay. It does this by displaying a dropdown in the extension showing the price of the item. For more information, the user can click the extension and it will bring them to the eBay page.
## How we built it
We used React for all the front-end components (displaying items), and Selenium, Node.js, Express, and RapidAPI for the back end and scraping.
## Challenges we ran into
At first we had trouble figuring out how to scrape contents from Amazon and integrate them with the eBay API, but we later solved it.
Our main remaining issue is the runtime of loading the eBay API results into our client to display to users. It is currently not loading on the page as fast as we would like, but we are actively working towards a solution.
## Accomplishments that we're proud of
We were given a task with a short deadline and were able to learn and implement new tools to create a functioning Chrome extension that met users' needs.
## What we learned
Throughout the project, we learned how to work collaboratively as a team. This was essential in the success of the project as we helped each other in our tasks to ensure they would be completed before the short deadline. Furthermore, we developed our project management skills as we worked to complete the project. We were able to do this by managing the tasks to be completed and divided up the roles in order to be efficient given our timeframe.
## What's next for Bandit.
We have a few features that we would like to add to Bandit in the future to improve the user experience. These features would be primarily based around accessibility, to be inclusive of all users. To highlight, we have chosen three that we believe are most important. The first feature would be a help button in our extension that provides the user with information about how to use Bandit. Next, we will add a font-size slider to allow the user to adjust the text size based on their needs; this addition would help those who may have difficulty reading. Finally, we will add a toggle for light and dark mode, which allows the user to switch the visuals of the extension based on their preferences.
|
winning
|
# CouchCampaign: AI-Driven D&D-like Couch Multiplayer 🛋️🎲
## Inspiration 💡
We wanted to bring the magic of tabletop RPGs into the digital age, making it accessible to anyone with a smartphone and infusing it with cutting-edge AI technology.
## What it does 🌟
CouchCampaign is an AI-driven multiplayer game that combines the depth of Dungeons & Dragons with the convenience of mobile gaming. Players create characters and interact with an AI Dungeon Master that dynamically generates the story, NPCs, and world. The game features AI-generated maps, real-time character stat updates, and emotion analysis that influences NPC interactions and the narrative.
## How we built it 🛠️
* Frontend: React (mobile-responsive design)
* Backend: Python with FastAPI
* Game Engine: Unity
* AI Dungeon Master: OpenAI's GPT-4 with access to a D&D database
* Map Generation: Custom diffusion model + classification system
* APIs: Hume (emotion analysis during NPC interactions), Meshy (text-to-3D models)
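As a rough sketch of how GPT function calling can drive game-state changes, assuming the `openai` Python client; the `update_character_stat` tool and its fields are our own illustration, not CouchCampaign's actual schema.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "update_character_stat",  # hypothetical tool name
        "description": "Apply a stat change to a player character.",
        "parameters": {
            "type": "object",
            "properties": {
                "character": {"type": "string"},
                "stat": {"type": "string", "enum": ["hp", "gold", "xp"]},
                "delta": {"type": "integer"},
            },
            "required": ["character", "stat", "delta"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a D&D dungeon master."},
        {"role": "user", "content": "The goblin hits Aria for 6 damage."},
    ],
    tools=tools,
)

# The model returns structured calls the game engine can apply directly.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```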
## Challenges we ran into 🚧
* Integrating multiple AI services and connecting them with Unity
* Getting Unity to understand the maps we created
* Debugging because there are three systems that are all dependent on each other
## Accomplishments that we're proud of 🏆
* Creating an AI Dungeon Master with access to comprehensive D&D knowledge
* Implementing a sophisticated map generation system using diffusion models
* Integrating real-time emotion analysis to influence NPC interactions
* Combining three different stacks all together
## What we learned 📚
* Advanced AI integration in gaming applications
* Diffusion models for image generation
* GPT function calling
## What's next for CouchCampaign 🚀
* Implementing quests, dungeons, and towns to expand gameplay
* Enhancing the AI's utilization of the D&D database for more complex scenarios
* Expanding multiplayer features for collaborative adventures
* Further optimizing performance and user experience on mobile devices
|
## LoreKraft: The Future of MMORPGs
---
### **Motivation**:
What happens when broke grad students, armed with a love for AI and late-night RPG marathons, dream big? You get **LoreKraft**, an AI-driven MMORPG engine with a twist—AI Dungeon Masters orchestrating vast and dynamic worlds. We were inspired by the idea of replacing the conventional Dungeon Master with an AI expert who could generate epic adventures on the fly. With the rise of **Generative AI**, transformers, and multi-agent systems, we saw an opportunity to revolutionize RPG gaming into something more immersive, smarter, and more unpredictable—just like the real world of adventuring!
We didn’t just want an RPG; we wanted an engine where **multiple AI agents** collaborate, much like a council of wise wizards, to create infinite storylines. The idea of multi-agent systems intrigued us—AI as a Dungeon Master that knows the lore, tracks player stats, and even conjures up epic narratives in real-time!
---
### **What We Learned**:
Berkeley’s hackathon culture taught us one thing: why spend your weekend snacking when you could be hacking? We plunged into the depths of multi-agent systems and learned the true magic of AI-driven collaboration. It’s one thing to have a chatbot, but getting **multiple AI agents** to work together harmoniously? That’s an entirely different game!
We learned how cutting-edge models like **Gemini** can be leveraged for creative text generation, while **GPT-4 function calls** take care of table queries and stats tracking. Beyond the coding, we dove into the intricacies of game mechanics, narrative pacing, and how to maintain an engaging multiplayer experience, all orchestrated through intelligent agents.
---
### **How We Built It**:
LoreKraft’s foundation lies in a **multi-agent system** where each agent plays a distinct role in the game’s ecosystem. Here's the technical breakdown:
* **Creative Text Generation**: We utilized **Gemini AI** to generate dynamic, immersive narratives, giving life to the AI Dungeon Master that never gets tired of spinning epic tales.
* **GPT Function Calls**: For database queries and knowledge retrieval, we relied on **GPT’s function calling** capabilities to fetch player stats and interact with the game world seamlessly.
* **Retrieval-Augmented Generation (RAG)**: We incorporated **RAG models** to retrieve knowledge from the database, ensuring that player attributes, inventory, and past actions were always at the AI's fingertips (see the sketch after this list).
* **Union of Experts (Agent-Based Collaboration)**: Each AI agent had a specific task—whether it was map generation, combat event creation, or managing delayed trigger events. These agents operated like a team of expert Dungeon Masters, constantly collaborating to build a robust game engine that responds dynamically to player input.
* **Frontend with Reflex AI**: On the frontend, we implemented **Reflex AI** to create a seamless, interactive interface. The dynamic game board was rendered based on the AI’s decisions in real-time, providing instant feedback to the players.
* **Node.js for Session Management**: We utilized **Node.js** to handle player sessions, allowing for multiplayer interaction and saving the state of each player’s game.
* **Backend with Flask**: For the backend, **Flask** was our framework of choice, ensuring smooth communication between our AI agents and the player interface.
* **Database**: We employed a hybrid system—**SingleStoreDB** for fast retrieval and analytics of game data, and **MongoDB** to manage dynamic, unstructured data like character traits and lore information.
* **Snap Spectacles for Immersive Experience**: To take things up a notch, we tried integrating **Snap Spectacles** to allow players to experience the game world in augmented reality, where AI could dynamically alter the environment around them, blending the virtual with the real.
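A minimal sketch of that retrieval step, using a toy embedding and an in-memory list standing in for SingleStoreDB/MongoDB; a real system would call an embedding model instead.

```python
import numpy as np

# Toy in-memory fact store; the real system queries the databases.
facts = [
    "Aria carries a silver dagger and 12 gold pieces.",
    "Borin is a dwarf cleric with 18 HP remaining.",
    "The party owes the innkeeper a favor in Dunmore.",
]

def embed(text: str) -> np.ndarray:
    """Toy bag-of-characters embedding, normalized to unit length."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

fact_vecs = np.stack([embed(f) for f in facts])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k facts most similar to the query."""
    scores = fact_vecs @ embed(query)
    return [facts[i] for i in np.argsort(scores)[::-1][:k]]

context = retrieve("How much health does Borin have?")
print("Known facts:\n" + "\n".join(context) + "\n\nNarrate the next scene.")
```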
---
### **Challenges We Faced**:
What’s a hackathon without some technical dragons to slay? Here are a few:
* **Multi-Agent Orchestration**: Managing multiple AI agents to work in harmony presented synchronization issues. Making sure all agents were on the same page without overwhelming the system took some delicate balancing.
* **Data Optimization**: With so much data being passed between AI agents and the database, we faced challenges with **optimizing data retrieval** and storage. We worked hard to ensure fast queries using hybrid database solutions.
* **Unstable Beta Products**: We tried pushing the limits with beta AI tools and platforms, but sometimes they weren’t quite ready for production-level use. While we planned some groundbreaking features, a few had to be scaled back due to **instability in beta models**.
* **Session Handling at Scale**: Handling multiple players while maintaining persistent sessions and ensuring smooth transitions between game states required some significant optimization work on the **Node.js** side.
---
### **Pitch Idea**:
For the presentation, we want to **generate the entire pitch live** using the same AI-driven game engine we've built! Our **Dungeon Master AI** will craft the narrative of the project as we demo, bringing the technical elements to life through creative storytelling. The agents will work together to present how they built LoreKraft, while seamlessly transitioning between technical explanations, player interactions, and visual frames—giving the judges a real sense of the power of **AI collaboration**.
---
**Final Thoughts**:
LoreKraft is more than just a game engine—it's a platform that could revolutionize MMORPGs by utilizing **multi-agent systems**. Imagine a world where multiple AI agents act like experts, building, managing, and constantly evolving a game world tailored to each player's decisions. This kind of intelligent orchestration can bring depth and immersion to games, unlike anything seen before. We’re not just building a game; we’re building **a future where AI and human creativity unite to craft limitless adventures**.
|
## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations, and getting the blocks to move and generate took some trouble, as did cutting back on how many details we added to the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldly gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to how Unity is used to render 3D environments, how assets and textures are edited in Blender, and what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g., as you're switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibilities are endless
|
losing
|
**<https://docs.google.com/presentation/d/1aO6ONwEJVDaSg9cy-ssolsEGUhKRGXzqOEb2qsDdBlE/edit?usp=sharing>**
Our project is the **Smart Parking Management System (SPMS)**, an app designed to improve communication between **Parking Authorities** (such as local governments and traffic police) and **parkers**. The app enables authorities to log their smart parking meters, allowing parkers to track real-time parking spot availability. Key features include **reporting bad parking spots**, **dynamic pricing** based on demand, **data collection and analytics**, **valet security**, and **car mapping**. **SPMS** aims to enhance parking efficiency and provide a better experience for both parkers and authorities.
**In short, it's somewhat like Airbnb, but for private parking spaces. We provide a platform where users can see and book available private parking spots, and we earn a commission as a middleman.**
## **Inspiration**
Urban parking remains a persistent challenge in many cities, contributing to traffic congestion, increased emissions, and driver frustration. We were inspired by the idea of harnessing smart technologies to create a more efficient and user-friendly parking experience. **The Smart Parking Management System (SPMS)** was developed with the goal of bridging the communication gap between parking authorities and parkers, while utilizing data-driven insights to optimize parking infrastructure.
## **What It Does**
**SPMS** is designed as a comprehensive app that improves the interaction between **Parking Authorities**, such as local governments, traffic police, and parking enforcement, and **parkers**, who are drivers searching for available spots. The app features **real-time tracking** of parking availability through logged smart parking meters, allowing drivers to quickly find and occupy open spots. Key features include **dynamic pricing** based on real-time demand, a **reporting function** for drivers to flag blocked or misused spots, **valet security** for vehicle safety, and **data collection and analytics** for parking authorities to monitor usage patterns and make informed decisions. By enabling a **two-way communication system**, **SPMS** not only enhances parking efficiency but also aims to reduce traffic congestion and provide a better overall experience for both parkers and authorities.
## **How We Built It**
Our team used a **multi-layered tech stack** to build **SPMS**. For backend processes, we implemented **Flask**, utilizing Python’s **Streamlit module** for handling requests and managing data. On the frontend, we employed **HTML** and **CSS** to create a clean and user-friendly interface. Real-time updates are a core feature of **SPMS**, making the seamless integration of front-end and back-end processes essential. The app has been developed with **scalability** in mind, allowing for deployment on **cloud platforms** such as Heroku or Vercel. This architecture not only enhances the app’s stability and responsiveness but also provides the flexibility to expand features in the future.
## **Challenges We Ran Into**
Building **SPMS** presented multiple challenges, primarily around balancing the needs of different user groups. Developing an intuitive yet feature-rich interface was critical to ensuring that **parkers** could easily find parking while authorities could effectively manage parking spaces. Implementing **dynamic pricing models** required real-time data processing and sophisticated algorithms, which had to be both accurate and resource-efficient. Another challenge was enabling **two-way communication** between drivers and authorities while ensuring that reports and responses were handled promptly.
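As a rough sketch of a demand-based rate function, assuming a live occupancy feed; the base rate, bounds, and curve below are illustrative, not SPMS's production formula.

```python
BASE_RATE = 2.00            # $/hour
MIN_RATE, MAX_RATE = 1.00, 6.00

def dynamic_rate(occupied: int, capacity: int) -> float:
    """Scale the hourly rate with occupancy so busy lots cost more,
    clamped to keep prices predictable for drivers."""
    occupancy = occupied / capacity if capacity else 0.0
    rate = BASE_RATE * (1.0 + 1.5 * occupancy ** 2)
    return round(min(max(rate, MIN_RATE), MAX_RATE), 2)

print(dynamic_rate(18, 20))  # near-full lot -> higher rate
print(dynamic_rate(3, 20))   # mostly empty -> near the base rate
```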
## **Accomplishments That We’re Proud Of**
We’re particularly proud of successfully creating an **integrated platform** that connects both sides of the parking management equation. The implementation of **dynamic pricing** allows parking authorities to optimize space utilization based on real-time demand, while the **reporting feature** provides immediate feedback to address parking issues. By utilizing **real-time data and analytics**, **SPMS** empowers cities to make smarter decisions and offers drivers a streamlined parking experience. Additionally, the **scalability** of the platform allows for future integrations and enhancements.
## **What We Learned**
Throughout the development of **SPMS**, we gained valuable insights into the complexities of urban infrastructure and the challenges of optimizing limited resources. We deepened our understanding of **data-driven decision-making** and learned how to efficiently process and present real-time information to multiple user groups. Working with **cloud-based deployment** and **two-way communication systems** was both a challenge and an opportunity for growth, enhancing our skills in building scalable and adaptable applications.
## **What’s Next for SPMS**
Looking ahead, we plan to expand **SPMS** with additional features, including **machine learning algorithms** for predictive analytics to further optimize parking management. We aim to develop **native mobile applications** for an enhanced user experience and integrate **secure payment gateways** to streamline transactions. Future plans also include adding support for **ride-sharing services**, **electric vehicle charging stations**, and **accessibility features** to broaden the impact of **SPMS**. Our ultimate vision is to create a **comprehensive smart parking solution** that contributes to more efficient urban mobility and a better quality of life for city residents.
|
## Inspiration
Finding parking spaces during rush hour can be frustrating. You're often just driving from lane to lane looking for an empty space. This inspired us to build ParkinGrid - a solution to this everyday problem!
## What it does
ParkinGrid shows available parking spaces and where they are located so we don't have to wander endlessly! Our product also supports buying parking space tickets for lots that require it!
## How we built it
We used the Django framework for Python to create the REST API endpoints, and OpenCV for image processing of the parking lots.
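As a rough sketch of the OpenCV step, assuming a fixed camera and hand-labeled spot rectangles (both our assumptions; the actual pipeline may differ):

```python
import cv2
import numpy as np

SPOTS = [(40, 60, 90, 180), (150, 60, 90, 180)]  # (x, y, w, h) per spot

def occupied_spots(frame: np.ndarray, threshold: float = 12.0) -> list[bool]:
    """Flag a spot as occupied when its region has high edge density,
    which usually indicates a car rather than empty asphalt."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    flags = []
    for x, y, w, h in SPOTS:
        region = edges[y:y + h, x:x + w]
        flags.append(float(region.mean()) > threshold)
    return flags

frame = cv2.imread("parking_lot.jpg")  # placeholder input image
print(occupied_spots(frame))
```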
## Challenges we ran into
We ran into issues merging the image processing sections with the other REST API endpoints.
## Accomplishments that we're proud of
Our team learned a lot of new technologies this hackathon, and we're proud to have built a product with unfamiliar technologies!
## What we learned
We all learned some new technology, whether that is working with Django or OpenCV, it was a great learning experience for all of us!
## What's next for ParkinGrid
We hope to make our endpoints more integrated with the image processing part of the code, and to add new features to make it a one-stop platform for parking spaces. Potential ideas for the future include: showing available street parking, and renting out your own parking spaces.
|
## Inspiration
Since the outbreak of the pandemic, we saw a surge in people's need for an affordable, convenient, and environmentally friendly way of transportation. The main pain points in this area include the risk of taking public transportation during the pandemic, the strain of riding a bike for long-distance commuting, and increasing traffic congestion.
In the post-COVID era, private and renewable-energy transportation will be a huge market. Compared with the cutthroat competition in the EV industry, the eBike market has been somewhat ignored, so the competition is not as overwhelming and the market opportunity and potential are extremely promising.
At the moment, 95% of the bikes are exported from China, and the exporters cannot provide prompt aftersales service. The next step of our idea is to integrate resources to build an efficient service system for the current Chinese exporters.
We also see great progress and a promising future for carbon credit projects and decarbonization. This is what we are trying to integrate into our APP to track people’s carbon footprint and translate it into carbon credit to encourage people to make contributions to decarbonization.
## What it does
We are building an aftersales service system to integrate the existing resources such as manufacturers in China and more than 7000 brick and mortar shops in the US.
Unique value proposition: We have strong supply-chain management ability because most of the suppliers are from China and we have a close relationship with them. In the meantime, we are about to build an assembly line in the US to provide better service to customers. Moreover, we are working on a system that connects cyclists and carbon emissions; this unique model can make rides more meaningful and intriguing.
## How we built it
The ecosystem will be built for various platforms and devices. The platform will include both Android and iOS apps because both operating systems have nearly equal percentages of users in the United States.
Google Cloud Maps API:
We'll be using the Google Cloud Maps API to continuously receive map location requests and plot a path map accordingly. Every API request carries metadata including direction, compass degrees, acceleration, speed, and height above sea level. These data features will be used to calculate reward points.
Detecting Mock Locations:
The above features can also be used to check for irregularities in the data received.
For instance, if a customer tries to trick the system to gain undue favors, these data features can be used to determine whether the location request data was sent by a mock-location app or a real one.
For example, a mock-location app won't produce naturally varying directions. Moreover, the acceleration calculated from map requests can be verified against the accelerometer sensor's values.
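As a rough sketch of that cross-check, with an assumed sampling interval and an illustrative tolerance:

```python
def gps_acceleration(speeds_mps: list[float], dt: float) -> list[float]:
    """Derive acceleration from consecutive GPS speed readings."""
    return [(b - a) / dt for a, b in zip(speeds_mps, speeds_mps[1:])]

def looks_mocked(gps_speeds: list[float], accel_samples: list[float],
                 dt: float = 1.0, tolerance: float = 1.5) -> bool:
    """Flag the trace when GPS-derived acceleration disagrees with the
    accelerometer by more than `tolerance` m/s^2 on average."""
    derived = gps_acceleration(gps_speeds, dt)
    error = sum(abs(d - a) for d, a in zip(derived, accel_samples))
    return error / max(len(derived), 1) > tolerance

# A consistent trace passes; a fabricated one would not.
print(looks_mocked([5.0, 5.2, 5.1, 5.4], [0.2, -0.1, 0.3]))  # False
```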
Fraud Prevention using Machine Learning:
Our app will be able to prevent various levels of fraud by cross-referencing different users and by using machine learning models of usage patterns. Patterns that deviate from normal usage behavior will be flagged.
Trusted Platform Execution:
The app will be inherently secure, as we will leverage the SDK APIs of phone platforms to check the integrity level of devices. It'll be at the security level of banking apps, using advanced program-isolation techniques and cryptography to protect our app from other escalated processes. Our app won't work on rooted Android phones or jailbroken iPhones.
## Challenges we ran into
One challenge is precisely calculating the conversion from mileage to carbon credits. Currently we use our own way to convert these numbers, but in the future, when we have a large enough customer base and want to work on individual carbon-credit trading, this conversion calculation will need to be meticulous.
During this week, another challenge was the time difference among teammates. Our IT brain is in China, so it was quite challenging to communicate properly and fully and to make sure information flowed well within the team in such a short time.
## Accomplishments that we're proud of
We are the only company that combines micromobility with climate action, and we use this approach to help protect forests.
## What we learned
We have talked to many existing and potential customers and learned a lot about their behavior patterns, preferences, social media exposure and comments on the eBike products.
We have learned a lot regarding APP design, product design, business development, and business model innovation through a lot of trial and error.
We have also learned how important partnership and relationships are and we have learned to invest a lot of time and resources into cultivating this.
Above all, we learned how fun hackathons can be!
## What's next for Meego Inc
Right now we have already built up the supply chain for eBikes and the next step of our idea is to integrate resources to build an efficient service system for the current Chinese exporters.
|
losing
|
## Inspiration
An article reported that about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.
## What it does
The smart bin is able, using object detection, to sort plastic, glass, metal, and paper.
All around Canada we see trash bins split by different types of trash. Sorting sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown away.
The Waste Wizard takes any kind of trash you want to throw away, uses machine learning to detect which bin it should be disposed of in, and drops it into the proper disposal bin.
## How we built it
Using recyclable cardboard, used DC motors, and 3D-printed parts.
## Challenges we ran into
We had to train our model from the ground up, even gathering all the data ourselves.
## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.
## What we learned
How to create and train a model, 3D print gears, and use sensors.
## What's next for Waste Wizard
A Smart bin able to sort the 7 types of plastic
|
## 💡 Inspiration 💯
Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that took all of the tedious decision-making out of your hands. Wouldn't it be nice to be able to mindlessly throw your trash in one place, and let AI handle the sorting for you?
## ♻️ What it does 🌱
IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website.
## 🛠️ How we built it 💬
Arduino/C++ Portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the Servo library gave us access to functions that control the motor and set the LED colours. We also used a serial library in Python to take input from the main program and send it to the Arduino, which then drove the servo to push garbage items into the correct category's bin.
Website Portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords.
Google Vision API: In tandem with computer vision, we take the camera input and feed it through the Vision API to interpret what is in front of us. Using this output, we tell the servo motor which direction to turn based on whether the item is recyclable or not, sorting the object into the bin it should be pushed into.
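A minimal sketch of the classify-then-actuate loop, assuming the Google Cloud Vision label-detection client and pySerial; the label allowlist and the 'R'/'N' serial protocol are our own illustration.

```python
import serial
from google.cloud import vision

RECYCLABLE = {"plastic", "glass", "metal", "paper", "cardboard", "bottle"}

client = vision.ImageAnnotatorClient()          # needs GCP credentials
arduino = serial.Serial("/dev/ttyUSB0", 9600)   # placeholder port

def sort_item(image_path: str) -> None:
    """Label the captured frame and steer the servo accordingly."""
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    names = {label.description.lower() for label in labels}
    if names & RECYCLABLE:
        arduino.write(b"R")  # servo pushes toward the recycling bin
    else:
        arduino.write(b"N")  # servo pushes toward the landfill bin

sort_item("capture.jpg")
```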
## 🚧 Challenges we ran into ⛔
* Connecting the Arduino to the arms
* Determining the optimal way to manipulate the Servo arm, as it could not rotate 360 degrees
* Using global variables on our website
* Configuring MongoDB to store user data
* Figuring out how and when to detect the type of trash on the screen
## 🎉 Accomplishments that we're proud of 🏆
In a short span of 24 hours, we are proud to:
* Successfully engineer and program a servo arm to sort trash into two separate bins
* Connect and program LED lights that change colors varying on recyclable or non-recyclable trash
* Utilize Google Cloud Vision API to identify and detect different types of trash and decide if it is recyclable or not
* Develop an intuitive website with React.js that includes login, user profile, and informative capabilities
* Drink a total of 9 cans of Monsters combined (the cans were recycled)
## 🧠 What we learned 🤓
* How to program in C++
* How to control servo arms at certain degrees with an Arduino
* How to parse and understand Google Cloud Vision API outputs
* How to connect a MongoDB database to create user authentication
* How to use global state variables in Node.js and React.js
* What types of items are recyclable
## 🌳 Importance of Recycling 🍀
* Conserves natural resources by reusing materials
* Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions
* Reduces the amount of waste sent to landfills
* Decreases disruption to ecosystems and habitats
## 👍How Intellibin helps 👌
**Efficient Sorting:** Intellibin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams.
**Increased Recycling Rates:** With Intellibin making recycling more user-friendly and efficient, it has the potential to increase recycling rates.
**User Convenience:** By automating the sorting process, Intellibin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts.
**In summary:** Recycling is crucial for environmental sustainability, and Intellibin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology.
## 🔮 What's next for Intellibin⏭️
The next steps for Intellibin include refining the current functionalities of our hack, along with exploring new features. First, we wish to expand the trash detection database, improving capabilities to accurately identify various items being tossed out. Next, we want to add more features such as detecting and warning the user of "unrecyclable" objects. For instance, Intellibin could notice whether the cap is still on a recyclable bottle and remind the user to remove the cap. In addition, the sensors could notice when there is still liquid or food in a recyclable item, and send a warning. Lastly, we would like to deploy our website so more users can use Intellibin and track their recycling statistics!
|
## 💡Inspiration
In an emergency when a patient is unconscious, how can we be sure that first aid responders know what painkillers they can give, what condition the patient might have fallen ill to, and what their medical history is? Many years ago, there was an incident where a patient was allergic to the narcotics that were given to them. We want to change that and create a new standard for where health records live.
Patients are also entitled to more privacy, and blockchain is as secure and safe as possible. With newer and more secure technologies, people can use blockchain to be confident that their information is immutable and protected with encryption and a private key that only their wearable/device has.
We give healthcare professionals access to a patient's personal healthcare information ONLY when the patient has fallen ill.
## 🔍What it does
Emergenchain provides three primary uses. Firstly, we offer a secure and organized way to store one's medical records, giving doctors a convenient way to access a patient's medical history. Additionally, we offer certificates for vaccines and immunizations, so people have an easy way to access proof of vaccination for pandemics and other necessary immunizations. Furthermore, we offer an emergency summary sheet compiled from the information in the patient's medical history, including known health conditions and their risk. Finally, we have a QR code that displays the emergency information tab when scanned. This acts as a precaution for when someone is found unconscious, as first aid responders/medics can scan the QR code and immediately find details about the patient's health conditions, history, emergency contact information, and treatment methods.
## ⚙️How we built it
We designed our front end using Figma and coded it in React. For our navbar, we used react-router-dom; for styling, we used Tailwind CSS to decorate our elements and Framer Motion to add animation. All of the medical records are stored on the DeSo blockchain as posts, and all of our certificates are NFTs minted with DeSo. We also used DeSo Login to implement authentication, and we additionally minted NFTs on the Goerli testnet, which can be viewed via the contract <https://goerli.etherscan.io/address/0x7D157EFe11FadC50ef28A509b6958F7320A6E6f9#writeContract>.
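As a rough illustration of the record-storage step, here is a minimal Python sketch built on the public DeSo node API's `submit-post` endpoint. The endpoint shape and field names are assumptions drawn from DeSo's backend API docs, not our exact code, and the returned transaction still has to be signed (e.g. via DeSo Identity) and broadcast.

```python
# Sketch: ask a DeSo node to construct an (unsigned) post transaction whose
# body carries a serialized medical record. Endpoint/fields are assumptions.
import json
import requests

NODE = "https://node.deso.org/api/v0"

def build_record_post(public_key: str, record: dict) -> dict:
    payload = {
        "UpdaterPublicKeyBase58Check": public_key,
        "BodyObj": {"Body": json.dumps(record)},  # record serialized into the post body
        "MinFeeRateNanosPerKB": 1000,
    }
    resp = requests.post(f"{NODE}/submit-post", json=payload, timeout=10)
    resp.raise_for_status()
    # The returned TransactionHex must still be signed with the patient's key
    # (e.g. via DeSo Identity) and broadcast through /submit-transaction.
    return resp.json()
```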
## 🚧Challenges we ran into
Throughout Hack the Valley, we ran into various challenges. Our biggest challenge was definitely that we did not have a programmer proficient with the back end on our team. This was a huge challenge, as we had to learn back-end programming from the basics. Additionally, this was our first time straying from Live Share and using GitHub to its fullest. It was a challenge to learn to coordinate with each other through branches, pull requests, and code issues. Finally, we are proud to say that we have successfully coded a fully functional project in under 36 hours!
## ✔️Accomplishments that we're proud of
We are proud of surmounting the seemingly endless streams of obstacles. Specifically, learning the fundamentals of back-end programming, utilizing it in a real-world project, and learning how to integrate it with our front end. Furthermore, we are proud to have successfully coordinated our project with each other through GitHub, in a more organized fashion than Live Share, with properly documented source control. Finally, we are proud of ourselves for creating a fully functional program that tackles a severe issue our world faces, changing the world step by step, line by line!
## 📚What we learned
We learned many things about the fundamentals of back-end programming such as POST and GET requests, as well as interpreting and implementing algorithms through their documentation. Furthermore, we learned a lot about the DeSo Protocol library from posting records onto the blockchain, to minting NFTs, to implementing a Login system. Additionally, we learned many new features regarding Github. Specifically, how to collaborate with each other by utilizing many tools including branches, pull requests, merges, code reviews, and many more!
## 🔭What's next for Health Hub
We want to be the company that takes the world by storm and creates a new wave of mass adoption through healthcare data. We believe that blockchain and crypto could be used to revolutionize the healthcare industry. We hope not only to create an actual handheld device, but also to partner with the government so ambulances and first-aid responders can check our chip or code if anyone falls unconscious, to see if they have any healthcare data on them.
## 🌐 Best Domain Name from Domain.com
As a part of our project, we registered callamed.tech using Domain.com!
|
winning
|
## Inspiration
Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese, Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language!
## What it does
TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app which means the subtitles will appear floating in front of you!
## How we built it
We used Microsoft Cognitive Services' Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capture along with multiple microphones, we multithread all our processes.
## Challenges we ran into
One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source.
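The pattern itself is simple; here is an illustrative Python sketch of the thread-per-source loop, with stub helpers standing in for the Cognitive Services calls and the AR overlay (they are placeholders, not our actual code):

```python
# One daemon thread per input source: listen, translate, caption in a loop.
import threading
import time

def handle_source(mic_id: int, target_lang: str) -> None:
    while True:
        audio = record_chunk(mic_id)                         # capture a short clip
        text = transcribe_and_translate(audio, target_lang)  # speech -> translated text
        render_caption(mic_id, text)                         # float the subtitle in AR

# Placeholder helpers so the sketch runs standalone.
def record_chunk(mic_id): time.sleep(1); return b""
def transcribe_and_translate(audio, lang): return "[translated text]"
def render_caption(mic_id, text): print(f"mic {mic_id}: {text}")

threads = [threading.Thread(target=handle_source, args=(i, "en"), daemon=True)
           for i in range(2)]  # e.g. two microphones
for t in threads:
    t.start()
time.sleep(3)  # let the demo loop a few times
```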
## Accomplishments that we're proud of
Our biggest achievement is definitely multi-threading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multi-lingual conversations possible!
## What we learned
We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and Python Imaging Library.
## What's next for TranslatAR
We want to launch this App in the AppStore so people can replicate VR/AR on their own phones with nothing more than just an App and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
|
## Inspiration
Digitized conversations have given the hearing impaired and other persons with disabilities the ability to better communicate with others despite the barriers in place as a result of their disabilities. Through our app, we hope to build towards a solution where we can extend the effects of technological development to aid those with hearing disabilities to communicate with others in real life.
## What it does
Co:herent is (currently) a webapp which allows the hearing impaired to streamline their conversations with others by providing sentence or phrase suggestions given the context of a conversation. We use Co:here's NLP text generation API to achieve this, and to provide more accurate results we give the API context from the conversation and use prompt engineering to better tune the model. The other (non-hearing-impaired) person is able to communicate with the webapp naturally through speech-to-text input, and text-to-speech functionality is in place to better facilitate the flow of the conversation.
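As a sketch of the context-feeding idea (shown with the Cohere Python SDK for brevity; the app itself calls the API from Next.js, and the prompt wording here is an assumption):

```python
# Feed the running conversation into the generate endpoint as context.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def suggest_reply(history: list[str]) -> str:
    prompt = (
        "You are helping a hearing-impaired person reply in a conversation.\n"
        "Conversation so far:\n" + "\n".join(history) +
        "\nSuggest a short, natural reply:"
    )
    response = co.generate(prompt=prompt, max_tokens=40, temperature=0.7)
    return response.generations[0].text.strip()
```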
## How we built it
We built the entire app using Next.js with the Co:here API, React Speech Recognition API, and React Speech Kit API.
## Challenges we ran into
* Coming up with an idea
* Learning Next.js as we go as this is all of our first time using it
* Calling APIs are difficult without a backend through a server side rendered framework such as Next.js
* Coordinating and designating tasks in order to be efficient and minimize code conflicts
* .env and SSR compatibility issues
## Accomplishments that we're proud of
Creating a fully functional app without cutting corners or deviating from the original plan despite various minor setbacks.
## What we learned
We were able to learn a lot about Next.js as well as the various APIs through our first time using them.
## What's next for Co:herent
* Better tuning the NLP model to generate better/more personal responses as well as storing and maintaining more information on the user through user profiles and database integrations
* Better tuning of the TTS, as well as giving users the choice to select from a menu of possible voices
* Possibility of alternative forms of input for those who may be physically impaired (such as in cases of cerebral palsy)
* Mobile support
* Better UI
|
## Inspiration
Have you ever had to stand in line and tediously fill out your information for contact tracing at your favorite local restaurant? Have you ever asked yourself what's the point of traffic jams at restaurants, which, rather than reducing the risk of contributing to the spread of the outbreak, end up increasing social contact and germ propagation? If yes, JamFree is for you!
## What it does
JamFree is a web application that supports small businesses and restaurants during the pandemic by completely automating contact tracing in order to minimize physical exposure and eliminate the possibility of human error in the event where tracing back on customer visits is necessary. This application helps support local restaurants and small businesses by alleviating the pressure and negative impact this pandemic has had on their business.
In order to accomplish this goal, here's how it would be used:
1. Customer creates an account by filling out the required information restaurants would use for contact tracing such as name, email, and phone number.
2. A QR code is generated by our application (see the sketch after this list)
3. Restaurants also create a JamFree account with the possibility of integrating with their favorite POS software
4. Upon arrival at their favorite restaurant, the restaurant staff would scan the customer's QR code from our application
5. Customer visit has now been recorded on the restaurant's POS as well as JamFree's records
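As a minimal sketch of step 2, assuming the Python `qrcode` package (the payload format is an assumption, not our production scheme):

```python
# Encode a customer ID as a QR image that restaurant staff can scan.
import qrcode

def make_checkin_code(customer_id: str, out_path: str = "checkin.png") -> None:
    img = qrcode.make(f"jamfree://checkin/{customer_id}")  # assumed payload format
    img.save(out_path)

make_checkin_code("cust_12345")
```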
## How we built it
We divided the project into two main components: the front-end, with React components to make things interactive, and the back-end, which used Express to create a REST API that interacts with a CockroachDB database. The whole project was deployed using Amazon Web Services (serverless functions for a quick and efficient deployment).
## Challenges we ran into
We had to figure out how to complete the integration of QR codes for the first time, how to integrate our application with third-party software such as Square or Shopify (OAuth), and how to level out the playing field with the adaptability of new technologies and different languages used across the team.
## Accomplishments that we're proud of
We successfully and simply integrated our app with POS software (e.g. using a free Square Account and Square APIs in order to access the customer base of restaurants while keeping everything centralized and easily accessible).
## What we learned
We became familiar with OAuth 2.0 protocols, React, and Node. Half of our team was composed of first-time hackers who had to quickly become familiar with the technologies we used. We learnt that coding can be a pain in the behind but it is well worth it in the end! Teamwork makes the dream work ;)
## What's next for JamFree
We are planning to improve and expand on our services in order to provide them to local restaurants. We will start by integrating it into one of our teammates' family-owned restaurants, as well as pitching it to our local parishes to make things safer and easier. We are looking into integrating geofencing in the future in order to provide targeted advertisements and better support our clients in this difficult time for small businesses.
|
winning
|
## Inspiration 💡
Buying, selling, and trading physical collectibles can be a rather tedious task, and this has become even more apparent with the recent surge of NFTs (Non-Fungible Tokens).
The global market for physical collectibles was estimated to be worth $372 billion in 2020. People have an innate inclination to collect, driving the acquisition of items such as art, games, sports memorabilia, toys, and more. However, considering the world's rapid shift towards the digital realm, there arises a question about the sustainability of this market in its current form.
At its current pace, it seems inevitable that people may lose interest in physical collectibles, gravitating towards digital alternatives due to the speed and convenience of digital transactions. Nevertheless, we are here with a mission to rekindle the passion for physical collectibles.
## What it does 🤖
Our platform empowers users to transform their physical collectibles into digital assets. This not only preserves the value of their physical items but also facilitates easy buying, selling, and trading.
We have the capability to digitize various collectibles with verifiable authenticity, including graded sports/trading cards, sneakers, and more.
## How we built it 👷🏻♂️
To construct our platform, we utilized [NEXT.js](https://nextjs.org/) for both frontend and backend development. Additionally, we harnessed the power of the [thirdweb](https://thirdweb.com/) SDK for deploying, minting, and trading NFTs. Our NFTs are deployed on the Ethereum L2 [Mumbai](https://mumbai.polygonscan.com/) testnet.
* `MUMBAI_DIGITIZE_ETH_ADDRESS = 0x6A80AD071932ba92fe43968DD3CaCBa989C3253f`
* `MUMBAI_MARKETPLACE_ADDRESS =` [`0xedd39cAD84b3Be541f630CD1F5595d67bC243E78`](https://thirdweb.com/mumbai/0xedd39cAD84b3Be541f630CD1F5595d67bC243E78)
Furthermore, we incorporated the Ethereum Attestation Service to verify asset ownership and perform KYC (Know Your Customer) checks on users.
* `SEPOLIA_KYC_SCHEMA = 0x95f11b78d560f88d50fcc41090791bb7a7505b6b12bbecf419bfa549b0934f6d`
* `SEPOLIA_KYC_TX_ID = 0x18d53b53e90d7cb9b37b2f8ae0d757d1b298baae3b5767008e2985a5894d6d2c`
* `SEPOLIA_MINT_NFT_SCHEMA = 0x480a518609c381a44ca0c616157464a7d066fed748e1b9f55d54b6d51bcb53d2`
* `SEPOLIA_MINT_NFT_TX_ID = 0x0358a9a9cae12ffe10513e8d06c174b1d43c5e10c3270035476d10afd9738334`
We also made use of CockroachDB and Prisma to manage our database.
Finally, to view all NFTs in 3D 😎, we built a separate platform that's soon-to-be integrated into our app. We scrape the internet to generate all card details and metadata and render it as a `.glb` file that can be seen in 3D!
## Challenges we ran into 🏃🏻♂️🏃🏻♂️💨💨
Our journey in the blockchain space was met with several challenges, as we were relatively new to this domain. Integrating various SDKs proved to be a formidable task. Initially, we deployed our NFTs on Sepolia, but encountered difficulties in fetching data. We suspect that thirdweb does not fully support Sepolia. Ultimately, we made a successful transition to the Mumbai network. We also faced issues with the PSA card website, as it went offline temporarily, preventing us from scraping data to populate our applications.
## Accomplishments that we're proud of 🌄
As a team consisting of individuals new to blockchain technology, and even first-time deployers of smart contracts and NFT minting, we take pride in successfully integrating web3 SDKs into our application. Moreover, users can view their prized possessions in **3-D!**
Overall, we're proud that we managed to deliver a functional minimum viable product within a short time frame. 🎇🎇
## What we learned 👨🏻🎓
Through this experience, we learned the immense value of teamwork and the importance of addressing challenges head-on. In moments of uncertainty, we found effective solutions through open discussions. Overall, we have gained confidence in our ability to deliver exceptional products as a team. Lastly, we learned to have fun and build things that matter to us.
## What's next for digitize.eth 👀👀👀
Our future plans include further enhancements such as:
* Populating our platform with a range of supported NFTs for physical assets.
* Taking a leap of faith and deploying on Mainnet.
* Deploying our NFTs on other chains, e.g. Solana.
Live-demo: <https://bl0ckify.tech>
Github: <https://github.com/idrak888/digitize-eth/tree/main> + <https://github.com/zakariya23/hackathon>
|
## 🤔 Problem Statement
* 55 million people worldwide struggle to engage with their past memories effectively (World Health Organization) and 40% of us will experience some form of memory loss (Alzhiemer's Society of Canada). This widespread struggle with nostalgia emphasizes the critical need for user-friendly solutions. Utilizing modern technology to support reminiscence therapy and enhance cognitive stimulation in this population is essential.
## 💡 Inspiration
* Alarming statistics from organizations like the Alzheimer's Society of Canada and the World Health Organization motivated us.
* Desire to create a solution to assist individuals experiencing memory loss and dementia.
* Urge to build a machine learning and computer vision project to test our skillsets.
## 🤖 What it does
* DementiaBuddy offers personalized support for individuals with dementia symptoms.
* Integrates machine learning, computer vision, and natural language processing technologies.
* Facilitates face recognition, memory recording, transcription, summarization, and conversation.
* Helps users stay grounded, recall memories, and manage symptoms effectively.
## 🧠 How we built it
* Backend developed using Python libraries including OpenCV, TensorFlow, and PyTorch.
* Integration with Supabase for data storage.
* Utilization of the Cohere Summarize API for text summarization (see the sketch after this list).
* Frontend built with Next.js, incorporating Voiceflow for chatbot functionality.
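A minimal sketch of the summarization step, assuming the Cohere Python SDK's summarize endpoint (parameter values are illustrative, not our exact configuration):

```python
# Condense a recorded memory transcript into a short summary.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def summarize_memory(transcript: str) -> str:
    resp = co.summarize(text=transcript, length="short", format="paragraph")
    return resp.summary
```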
## 🧩 Challenges we ran into
* Limited team size with only two initial members.
* Late addition of two teammates on Saturday.
* Required efficient communication, task prioritization, and adaptability, especially with such unique circumstances for our team.
* Lack of experience in combining all these unfamiliar sponsor technologies, as well as limited frontend and full-stack abilities.
## 🏆 Accomplishments that we're proud of
* Successful development of a functional prototype within the given timeframe.
* Implementation of key features including face recognition and memory recording.
* Integration of components into a cohesive system.
## 💻 What we learned
* Enhanced skills in machine learning, computer vision, and natural language processing.
* Improved project management, teamwork, and problem-solving abilities.
* Deepened understanding of dementia care and human-centered design principles.
## 🚀 What's next for DementiaBuddy
* Refining face recognition algorithm for improved accuracy and scalability.
* Expanding memory recording capabilities.
* Enhancing chatbot's conversational abilities.
* Collaborating with healthcare professionals for validation and tailoring to diverse needs.
## 📈 Why DementiaBuddy?
Aside from being considered for the Top 3 prizes, we worked really hard so that DementiaBuddy could be considered to win multiple sponsorship awards at this hackathon, including the Best Build with Co:Here, RBC's Retro-Revolution: Bridging Eras with Innovation Prize, Best Use of Auth0, Best Use of StarkNet, & Best .tech Domain Name. Our project stands out because we've successfully integrated multiple cutting-edge technologies to create a user-friendly and accessible platform for those with memory ailments. Here's how we've met each challenge:
* 💫 Best Build with Co:Here: Dementia Buddy should win the Best Build with Cohere award because it uses Cohere's Summarizing API to make remembering easier for people with memory issues. By summarizing long memories into shorter versions, it helps users connect with their past experiences better. This simple and effective use of Cohere's technology shows how well the project is made and how it focuses on helping users.
* 💫 RBC's Retro-Revolution - Bridging Eras with Innovation Prize: Dementia Buddy seamlessly combines nostalgia with modern technology, perfectly fitting the criteria of the RBC Bridging Eras prize. By updating the traditional photobook with dynamic video memories, it transforms the reminiscence experience, especially for individuals dealing with dementia and memory issues. Through leveraging advanced digital media tools, Dementia Buddy not only preserves cherished memories but also deepens emotional connections to the past. This innovative approach revitalizes traditional memory preservation methods, offering a valuable resource for stimulating cognitive function and improving overall well-being.
* 💫 Best Use of Auth0: We successfully used Auth0's API within our Next.js frontend to help users login and ensure that our web app maintains a personalized experience for users.
* 💫 Best .tech Domain Name: AMachineLearningProjectToHelpYouTakeATripDownMemoryLane.tech, I can't think of a better domain name. It perfectly describes our project.
|
## Inspiration
The music industry has gone stagnant. Artists at the top stay at the top, and many artists stay out of sight. Looking for new music is difficult because it is either an iteration of the music everyone has already heard or it is lost in the sea of the internet. Not only do the listeners suffer from this– so do the artists. Many work very hard but never get the exposure they deserve. Crescendo seeks to leverage the power of computation to revitalize the flow that existed in the music industry just decades ago.
## How it works
Crescendo uses the k-nearest neighbors algorithm along with collaborative filtering trained on data from SoundCloud to serve music to users by artists that are likely to be good but are still under the radar. As the user interacts with the website, saving certain songs and discarding others, the back-end "learns" the preferences of the user and offers them music liked by other users of similar taste. Collaborative filtering is a common algorithm used by recommender systems to determine the similarity of users with respect to products. Our implementation uses gradient descent to determine which artists our users might most enjoy. In order to escape the chicken-and-egg problem of needing user data to find quality music and needing quality music to draw data-generating users, we trained our collaborative-filtering matrices on scraped SoundCloud user-favorites data.
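As a sketch of the collaborative-filtering step, here is a minimal matrix factorization trained by gradient descent with NumPy; the dimensions, hyperparameters, and random data are illustrative assumptions, not our production model:

```python
# Factorize a (users x artists) favorites matrix, then recommend by score.
import numpy as np

def factorize(R, k=10, lr=0.01, reg=0.02, epochs=200):
    """R: favorites matrix (users x artists), zeros where unobserved."""
    n_users, n_artists = R.shape
    U = np.random.normal(scale=0.1, size=(n_users, k))
    V = np.random.normal(scale=0.1, size=(n_artists, k))
    mask = R > 0                                # fit observed favorites only
    for _ in range(epochs):
        err = mask * (R - U @ V.T)              # error on observed entries
        U += lr * (err @ V - reg * U)           # gradient descent updates
        V += lr * (err.T @ U - reg * V)
    return U @ V.T                              # predicted affinity scores

scores = factorize(np.random.randint(0, 2, size=(50, 200)).astype(float))
```

High-scoring artists a user has not yet favorited become candidate recommendations, which the k-nearest-neighbors step can then refine.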
## Challenges, and what we learned
Turns out machine learning involves some very challenging math. We learned a lot about how machine learning actually works, and how to coordinate different techniques (such as k-nearest neighbors with collaborative filtering) as well as different frameworks (pandas, tensorflow, etc.) in order to build a multi-stage algorithm. As collaborative filtering is a large area of study, with various implementations, we had to read through a lot of papers before diving into the algorithm we are currently using.
Working with big data from SoundCloud's API was also a challenge, as we often had to wait up to an hour to scrape a few hundred thousand favorites for use in the training step of our machine learning algorithm. We learned that writing algorithms to work on large data sets requires solid conceptual testing (asking yourself "will this work?"), since running the algorithms can take up significant time, a precious resource, and debugging can be a nightmare with thousands of lines of output.
|
partial
|
# 🎉 CoffeeStarter: Your Personal Networking Agent 🚀
Names: Sutharsika Kumar, Aarav Jindal, Tanush Changani & Pranjay Kumar
Welcome to **CoffeeStarter**, a cutting-edge tool designed to revolutionize personal networking by connecting you with alumni from your school's network effortlessly. Perfect for hackathons and beyond, CoffeeStarter blends advanced technology with user-friendly features to help you build meaningful professional relationships.
---
## 🌟 Inspiration
In a world where connections matter more than ever, we envisioned a tool that bridges the gap between ambition and opportunity. **CoffeeStarter** was born out of the desire to empower individuals to effortlessly connect with alumni within their school's network, fostering meaningful relationships that propel careers forward.
---
## 🛠️ What It Does
CoffeeStarter leverages the power of a fine-tuned **LLaMA** model to craft **personalized emails** tailored to each alumnus in your school's network. Here's how it transforms your networking experience:
* **📧 Personalized Outreach:** Generates authentic, customized emails using your resume to highlight relevant experiences and interests.
* **🔍 Smart Alumnus Matching:** Identifies and connects you with alumni that align with your professional preferences and career goals.
* **🔗 Seamless Integration:** Utilizes your existing data to ensure every interaction feels genuine and impactful.
---
## 🏗️ How We Built It
Our robust technology stack ensures reliability and scalability:
* **🗄️ Database:** Powered by **SQLite** for flexible and efficient data management.
* **🐍 Machine Learning:** Developed using **Python** to handle complex ML tasks with precision.
* **⚙️ Fine-Tuning:** Employed **Tune** for meticulous model fine-tuning, ensuring optimal performance and personalization.
---
## ⚔️ Challenges We Faced
Building CoffeeStarter wasn't without its hurdles:
* **🔒 SQLite Integration:** Navigating the complexities of SQLite required innovative solutions.
* **🚧 Firewall Obstacles:** Overcoming persistent firewall issues to maintain seamless connectivity.
* **📉 Model Overfitting:** Balancing the model to avoid overfitting while ensuring high personalization.
* **🌐 Diverse Dataset Creation:** Ensuring a rich and varied dataset to support effective networking outcomes.
* **API Integration:** Working with various APIs to get as diverse a dataset and functionality as possible.
---
## 🏆 Accomplishments We're Proud Of
* **🌈 Diverse Dataset Development:** Successfully created a comprehensive and diverse dataset that enhances the accuracy and effectiveness of our networking tool.
* Authentic messages that reflect user writing styles, which contributes to personalization.
---
## 📚 What We Learned
The journey taught us invaluable lessons:
* **🤝 The Complexity of Networking:** Understanding that building meaningful connections is inherently challenging.
* **🔍 Model Fine-Tuning Nuances:** Mastering the delicate balance between personalization and generalization in our models.
* **💬 Authenticity in Automation:** Ensuring our automated emails resonate as authentic and genuine, without echoing our training data.
---
## 🔮 What's Next for CoffeeStarter
We're just getting started! Future developments include:
* **🔗 Enhanced Integrations:** Expanding data integrations to provide even more personalized networking experiences and actionable recommendations for enhancing networking effectiveness.
* **🧠 Advanced Fine-Tuned Models:** Developing additional models tailored to specific networking needs and industries.
* **🤖 Smart Choosing Algorithms:** Implementing intelligent algorithms to optimize alumnus matching and connection strategies.
---
## 📂 Submission Details for PennApps XXV
### 📝 Prompt
You are specializing in professional communication, tasked with composing a networking-focused cold email from an input `{student, alumni, professional}`, name `{your_name}`. Given the data from the receiver `{student, alumni, professional}`, your mission is to land a coffee chat. Make the networking text `{email, message}` personalized to the receiver’s work experience, preferences, and interests provided by the data. The text must sound authentic and human. Keep the text `{email, message}` short, 100 to 200 words is ideal.
### 📄 Version Including Resume
You are specializing in professional communication, tasked with composing a networking-focused cold email from an input `{student, alumni, professional}`, name `{your_name}`. The student's resume is provided as an upload `{resume_upload}`. Given the data from the receiver `{student, alumni, professional}`, your mission is to land a coffee chat. Use the information from the given resume of the sender and their interests from `{website_survey}` and information of the receiver to make this message personalized to the intersection of both parties. Talk specifically about experiences that `{student, alumni, professional}` would find interesting about the receiver `{student, alumni, professional}`. Compare the resume and other input `{information}` to find commonalities and make a positive impression. Make the networking text `{email, message}` personalized to the receiver’s work experience, preferences, and interests provided by the data. The text must sound authentic and human. Keep the text `{email, message}` short, 100 to 200 words is ideal. Once completed with the email, create a **1 - 10 score** with **1** being a very generic email and **10** being a very personalized email. Write this score at the bottom of the email.
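To make the template concrete, here is a minimal sketch of assembling the prompt before sending it to the fine-tuned model; the field names simply mirror the placeholders above and are assumptions, not our exact code:

```python
# Fill the cold-email prompt template with sender/receiver details.
PROMPT_TEMPLATE = (
    "You are specializing in professional communication, tasked with composing "
    "a networking-focused cold email from a {sender_role}, name {your_name}. "
    "Given the data from the receiver {receiver_role}, your mission is to land "
    "a coffee chat.\nSender resume:\n{resume_text}\n"
    "Receiver profile:\n{receiver_profile}\n"
    "Keep the email short, 100 to 200 words."
)

def build_prompt(sender_role, your_name, receiver_role, resume_text, receiver_profile):
    return PROMPT_TEMPLATE.format(
        sender_role=sender_role,
        your_name=your_name,
        receiver_role=receiver_role,
        resume_text=resume_text,
        receiver_profile=receiver_profile,
    )
```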
## 🧑💻 Technologies Used
* **Frameworks & Libraries:**
+ **Python:** For backend development and machine learning tasks.
+ **SQLite:** As our primary database for managing user data.
+ **Tune:** Utilized for fine-tuning our LLaMA3 model.
* **External/Open Source Resources:**
+ **LLaMA Model:** Leveraged for generating personalized emails.
+ **Various Python Libraries:** Including Pandas for data processing and model training.
|
## Inspiration
During modern times, the idea underlying a facemask is simple--if more people wear them, less people will get sick. And while it holds true, this is an oversimplification: the number of lives saved is dependent not only on the quantity, but also on the quality of the masks which people wear (as evidenced by recent research by the CDC). However, due to an insufficient supply of N95 masks, healthcare workers are forced to wear cloth or surgical masks which both leak from the sides, increasing the risk of infection, and are arduous to breathe through for extended physical exertion.
## What it does
Maskus is the first mask bracket and fitter in one - custom-fitted and printed using accessible technology. It is designed to improve the baseline quality of facemasks around the world, with its first and most pressing use being for healthcare workers. The user starts by taking a picture of their face through their computer/smartphone camera. We then generate an accurate 3D representation of the user's face and design a tight-fitting 3D-printable mask bracket specifically tailored to the user's face contours. Within seconds, we can render the user's custom mask onto their face in augmented reality in real time. The user can then either download their custom mask in a format ready for 3D printing, or set up software to print the mask automatically. We also have an Arduino Nano that alerts the user if the mask is secured properly, or lets them know it needs to be readjusted.
## How we built it
After the user visits the Maskus website, our React frontend sends a POST request to a Python Flask backend. The server receives the image, decodes it, and feeds it into a state-of-the-art machine learning 3D face reconstruction model (3DDFA). The resultant 3D face model then goes through some preprocessing, which compresses the 3D data to improve performance. Another script then extracts the user's face contour/outline from the 3D model and builds a custom mask bracket with programmable CAD software. On the web app, the user gets to see both their own 3D face mesh as well as an AR rendering of the custom fitted mask onto their face (using React and three.js). Lastly, this data is saved to a standard 3D printing file format (.obj) and returned to the user so they can print it wherever they like. In terms of our hardware, the mask's alert system comprises an Arduino Nano with a piezo buzzer and two push buttons (left and right side of face) wired in series. In order to get the push buttons to engage when the mask is worn, we created custom 3D parts that create a larger area for the buttons to be pushed.
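A minimal sketch of the upload endpoint described above, with stub functions standing in for the 3DDFA, contour-extraction, and CAD steps (placeholders, not our actual pipeline):

```python
# Receive a base64 image, run the reconstruction pipeline, return the results.
import base64
from flask import Flask, request, jsonify

app = Flask(__name__)

# Placeholder pipeline steps so the sketch runs standalone.
def run_3ddfa(img_bytes): return [[0.0, 0.0, 0.0]]      # dummy vertex list
def extract_face_contour(mesh): return mesh
def build_bracket_obj(contour): return "o bracket\n"    # dummy .obj content

@app.route("/reconstruct", methods=["POST"])
def reconstruct():
    img_bytes = base64.b64decode(request.json["image"])  # image arrives as base64
    mesh = run_3ddfa(img_bytes)                          # 3D face reconstruction
    contour = extract_face_contour(mesh)                 # outline for the bracket
    obj_file = build_bracket_obj(contour)                # programmable CAD step
    return jsonify({"mesh": mesh, "bracket_obj": obj_file})
```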
## Challenges we ran into
This project touched many disciplines and posed many difficulties. We were determined to provide the user with the ability to see how their mask would fit them in real time using AR. In order to do this, we needed a way to visualize 3D models in the web. This proved difficult due to many misleading resources and weak documentation. Simple things (like figuring out how to get a 3D model to stop rotating) took much longer than they should have, simply because the frameworks were obfuscated. AR was also very difficult to implement, particularly because it is a new technology and the existing frameworks for it are not yet mature. Our project is one of the first we've seen placing 3D models (not images) onto user faces.
## Accomplishments that we're proud of
From the machine learning side of the project, 3D face reconstruction is a very difficult problem. Luckily, our team was able to successfully implement and use the 3DDFA state-of-the-art machine learning model for face reconstruction. Installing and configuring the necessary Python packages and virtual environments posed a challenge at the start, but we were able to quickly overcome this and get a working machine learning pipeline. Being able to solve this problem early on in the hackathon gave our team more time to focus on other problems, such as web 3D model visualization and constructing the facemask from our 3D face model.
## What we learned
Amusingly, during this project we found that things which were supposed to be difficult turned out to be easy to implement and, conversely, the easy parts turned out to be hard. Things like front end design and integrating web frameworks turned out to be some of the most challenging parts of the project, whereas things like machine learning were easier than expected. A takeaway is that the feasibility of quickly building a project should be based not only on the difficulty of the task, but also on the quality of existing resources which can be used to build it. Good frameworks make implementing difficult projects much easier.
## What's next for Maskus
Aside from refactoring the code and improving webpage design, we see several things for the project going forward. Perhaps the biggest point is developing a reliable algorithm to extract the facemask outline from a 3D face model. The one the group currently has works most of the time, but serves as the bottleneck of the system in terms of facial recognition accuracy. The UI design can be improved as well. Lastly, three.js was found to be a pain, especially when trying to integrate it with React. It would be worth exploring simpler JavaScript frameworks. We would also love to add more functionality to the Arduino in the future, making it a 'smarter' mask. We hope to add sensors like an AQS (Air Quality Sensor), create alerts if the mask has been worn too long and needs to be replaced, and add status LEDs in order to visually show that your mask is secure.
In terms of future growth, Maskus can comfortably be deployed as a web app and used by healthcare workers around the world in order to decrease the risk of COVID transmission. It is a low-cost solution designed to work with existing masks and improve upon them. Opening up the software to open source contribution is a potential way to grow, and we hope it would lead to very fast progress.
|
**Meet f.low, the intelligent audio control system that knows your surroundings.**
## Inspiration
Have you ever tried to watch a movie on the bus? Study in public? Listen to music while commuting?
We're guessing you have. And, by extension, we're guessing you've had to deal with the frustrating experience of constantly adjusting the volume to accommodate for your changing environment. Everyday distractions like crying babies and noisy neighbors hinder your productivity, your patience, and the ***sick fiya*** you're dropping in your playlist -- but they no longer need to be sources of stress.
## What it Does
Using the built-in microphone on your Mac OS X device, f.low is able to detect how loud your environment is and dynamically adjust your volume on-the-fly, keeping your listening experience consistent.
By mapping microphone input power to decibel values using our fitting algorithm, and by letting you set a maximum and minimum volume for f.low to work with, we achieve the sound you want, **all the time**.
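Here is a minimal sketch of that mapping, written in Python for clarity (the app itself is Swift); the calibration constants are illustrative assumptions:

```python
# Map microphone RMS power to decibels, then interpolate within the user's
# chosen volume range: louder room -> higher playback volume.
import math

def target_volume(rms_power: float, min_vol: float, max_vol: float) -> float:
    db = 20 * math.log10(max(rms_power, 1e-6))   # power (0..1) -> decibels
    quiet_db, loud_db = -50.0, -10.0             # assumed calibration range
    t = (db - quiet_db) / (loud_db - quiet_db)   # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0)
    return min_vol + t * (max_vol - min_vol)
```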
## How We Built It
f.low is currently available on Mac OS X, and it's just a quick and easy port away from iOS. We've developed it using Swift and Xcode, making use of the hardware existing on every Mac and iPhone.
## Challenges
Achieving a natural, consistent sound is key to listening experience, so great care was put into analyzing and optimizing the data gathered from the environment and achieving the most natural volume control.
## What's Next for f.low
Of course there is much more in store and many ideas that need exploring: further optimizing user experience, improving the validity of our detecting algorithm, and re-vamping the UI are three challenges we'd love to tackle in the future.
[www.justgetflow.tech](http://www.justgetflow.tech)
|
partial
|
## What it does
MusiCrowd is an interactive democratic music streaming service that allows individuals to vote on what songs they want to play next (i.e., if three people add three different songs to the queue, the song at the top of the queue will be the one with the most upvotes). This system was built with the intention of allowing entertainment venues (pubs, restaurants, socials, etc.) to be inclusive, letting everyone interact with the entertainment portion of the venue.
The system has administrators of rooms and users in the rooms. These administrators host a room where users can join from a code to start a queue. The administrator is able to play, pause, skip, and delete any songs they wish. Users are able to choose a song to add to the queue and upvote, downvote, or have no vote on a song in queue.
## How we built it
Our team used Node.js with Express to write a server, REST API, and attach to a Mongo database. The MusiCrowd application first authorizes with the Spotify API, then queries music and controls playback through the Spotify Web SDK. The backend of the app was used primarily to serve the site and hold an internal song queue, which is exposed to the front-end through various endpoints.
The front end of the app was written in Javascript with React.js. The web app has two main modes, user and admin. As an admin, you can create a ‘room’, administrate the song queue, and control song playback. As a user, you can join a ‘room’, add song suggestions to the queue, and upvote / downvote others suggestions. Multiple rooms can be active simultaneously, and each room continuously polls its respective queue, rendering a sorted list of the queued songs, sorted from most to least popular. When a song ends, the internal queue pops the next song off the queue (the song with the most votes), and sends a request to Spotify to play the song. A QR code reader was added to allow for easy access to active rooms. Users can point their phone camera at the code to link directly to the room.
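The heart of the backend is the vote-ranked queue; here is a sketch of that logic, written in Python for illustration (the real backend is Node.js):

```python
# Songs are ranked by net votes; the top song is popped when a track ends.
class VotingQueue:
    def __init__(self):
        self.songs = {}  # spotify_uri -> net vote count

    def add(self, uri):
        self.songs.setdefault(uri, 0)

    def vote(self, uri, delta):  # delta: +1 for upvote, -1 for downvote
        if uri in self.songs:
            self.songs[uri] += delta

    def pop_next(self):
        """Remove and return the most-upvoted song, or None if empty."""
        if not self.songs:
            return None
        uri = max(self.songs, key=self.songs.get)
        self.songs.pop(uri)
        return uri  # caller asks Spotify to play this track
```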
## Challenges we ran into
* Deploying the server and front-end application, and getting both sides to communicate properly.
* React state mechanisms, particularly managing all possible voting states from multiple users simultaneously.
* React search boxes.
* Familiarizing ourselves with the Spotify API.
* Allowing anyone to query Spotify search results and add song suggestions / vote without authenticating through the site.
## Accomplishments that we're proud of
Our team is extremely proud of the MusiCrowd final product. We were able to build everything we originally planned and more. The following include accomplishments we are most proud of:
* An internal queue and voting system
* Frontloading the development & working hard throughout the hackathon: over 24 hours of coding
* A live deployed application accessible by anyone
* Learning Node.js
## What we learned
Garrett learned javascript :) We learned all about React, Node.js, the Spotify API, web app deployment, managing a data queue and voting system, web app authentication, and so so much more.
## What's next for Musicrowd
* Authenticate and secure routes
* Add IP/device tracking to disable multiple votes for browser refresh
* Drop songs that fall below a certain threshold of active votes
* Allow TV mode to show current song information and display the upcoming queue with current vote counts
|
## Inspiration
In today's age, people have become more and more divisive on their opinions. We've found that discussion nowadays can just result in people shouting instead of trying to understand each other.
## What it does
**Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations. (Eg. Is a Hot Dog a sandwich? Is mass surveillance needed?)
Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 mins to chat with this person and hopefully discover your similarities and differences in perspective.
After the chat is over, we ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity.
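A minimal sketch of the opposing-opinion matchmaking idea, shown in Python for illustration (our backend actually lives in Firebase): keep a waiting pool per topic and stance, and pair users across stances.

```python
# Pair each waiting user with the first user holding the opposite stance.
waiting = {}  # (topic, stance) -> list of waiting user ids; stance is True/False

def find_match(user_id, topic, stance):
    opposite = (topic, not stance)
    if waiting.get(opposite):
        return waiting[opposite].pop(0)   # matched: caller opens a chat room
    waiting.setdefault((topic, stance), []).append(user_id)
    return None                           # wait until an opponent arrives
```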
## How we built it
**Tech Stack**
* Front-end/UI
+ Flutter and dart
+ Adobe XD
* Backend
+ Firebase
- Cloud Firestore
- Cloud Storage
- Firebase Authentication
**Details**
* Front end was built after developing UI mockups/designs
* Heavy use of advanced widgets and animations throughout the app
* Creation of multiple widgets that are reused around the app
* Backend uses gmail authentication with firebase.
* Topics for debate are uploaded using node.js to cloud firestore and are displayed in the app using specific firebase packages.
* Images are stored in firebase storage to keep the source files together.
## Challenges we ran into
* Initially connecting Firebase to the front-end
* Managing state while implementing multiple complicated animations
* Designing backend and mapping users with each other and allowing them to chat.
## Accomplishments that we're proud of
* The user interface we made and animations on the screens
* Sign up and login using Firebase Authentication
* Saving user info into Firestore and storing images in Firebase storage
* Creation of beautiful widgets.
## What we learned
* Deeper dive into State Management in flutter
* How to make UI/UX with fonts and colour palettes
* Learned how to use Cloud functions in Google Cloud Platform
* Built on top of our knowledge of Firestore.
## What's next for Change My Mind
* More topics and User settings
* Implementing ML to match users based on maturity and other metrics
* Potential Monetization of the app, premium analysis on user conversations
* Clean up the Coooooode! Better implementation of state management, specifically using Provider or BLoC.
|
## Inspiration
Despite the advent of the information age, misinformation remains a big issue in today's day and age. Yet, mass media accessibility for newer language speakers, such as younger children or recent immigrants, remains lacking. We want these people to be able to do their own research on various news topics easily and reliably, without being limited by their understanding of the language.
## What it does
Our Chrome extension allows users to shorten and simplify any article of text to a basic reading level. Additionally, if a user is not interested in reading the entire article, it comes with a tl;dr feature. Lastly, if a user finds the article interesting, our extension will find and link related articles that the user may wish to read later. We also include warnings to the user if the content of the article contains potentially sensitive topics, or comes from a source that is known to be unreliable.
Inside of the settings menu, users can choose a range of dates for the related articles which our extension finds. Additionally, users can also disable the extension from working on articles that feature explicit or political content, alongside being able to disable thumbnail images for related articles if they do not wish to view such content.
## How we built it
The front-end Chrome extension was developed in pure HTML, CSS and JavaScript. The CSS was done with the help of [Bootstrap](https://getbootstrap.com/), but still mostly written on our own. The front-end communicates with the back-end using REST API calls.
The back-end server was built using [Flask](https://flask.palletsprojects.com/en/2.0.x/), which is where we handled all of our web scraping and natural language processing.
We implemented text summaries using various NLP techniques (SMMRY, TF-IDF), which were then fed into the OpenAI API in order to generate a simplified version of the summary. Source reliability was determined using a combination of research data provided by [Ad Fontes Media](https://www.adfontesmedia.com/) and [Media Bias Check](https://mediabiasfactcheck.com/).
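As an illustration of the TF-IDF step, here is a minimal extractive scorer, assuming scikit-learn: sentences are ranked by their average TF-IDF weight and the top few are kept before prompting OpenAI. This is a sketch of the technique, not our exact pipeline.

```python
# Keep the k sentences whose words carry the most TF-IDF weight.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_sentences(sentences: list[str], k: int = 5) -> list[str]:
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.mean(axis=1).A1                 # mean TF-IDF weight per sentence
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])                      # restore original order
    return [sentences[i] for i in keep]
```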
To save time (and spend less on API tokens), parsed articles are saved in a [MongoDB](https://www.mongodb.com/) database, which acts as a cache and saves considerable time by skipping all the NLP for previously processed news articles.
Finally, [GitHub Actions](https://github.com/features/actions) was used to automate our builds and deployments to [Heroku](https://www.heroku.com/), which hosted our server.
## Challenges we ran into
Heroku was having issues with API keys, causing very confusing errors which took a significant amount of time to debug.
In regards to web scraping, news websites have wildly different formatting which made extracting the article's main text difficult to generalize across different sites. This difficulty was compounded by the closure of many prevalent APIs in this field, such as Google News API which shut down in 2011.
We also faced challenges with tuning the prompts in our requests to OpenAI to generate the output we were expecting. A significant amount of work done in the Flask server is pre-processing the article's text, in order to feed OpenAI a more suitable prompt, while retaining the meaning.
## Accomplishments that we're proud of
This was everyone on our team's first time creating a Google Chrome extension, and we felt that we were successful at it. Additionally, we are happy that our first attempt at NLP was relatively successful, since none of us have had any prior experience with NLP.
Finally, we slept at a Hackathon for the first time, so that's pretty cool.
## What we learned
We gained knowledge of how to build a Chrome extension, as well as various natural language processing techniques.
## What's next for OpBop
Increasing the types of text that can be simplified, such as academic articles. Making summaries and simplifications more accurate to what a human would produce.
Improving the hit rate of the cache by web crawling and scraping new articles while idle.
## Love,
## FSq x ANMOL x BRIAN
|
winning
|

## Inspiration
Getting engagement is hard. People only read about 20% of the text on the average page.

Data on the percentage of article content viewed shows that most readers scroll to about the 50 percent mark, or the 1,000th pixel, in Slate stories (a news platform).
This is alarming. Suppose a company is writing an article about an event that it has sponsored: negligible engagement defeats the purpose for any company spending money on marketing.
Rather, what if this article were brought into a one-minute format, which would produce a much better engagement rate compared to long pieces of text?
## What it does ⚡️
nu:here, an online platform for people of all age levels to create and distribute customizable videos based on Wikipedia articles created via Artificial Intelligence 👀. With our platform, we allow users to customize many different video aspects within the platform and share it with the world.
**The process for the user:**
1. User searches for a Wikipedia article on our platform
2. The user can start our video generation platform by specifying the length of the video that is wanted
3. The user can specify the formality of the video depending on what the target audience is (For the classroom, for sharing information on TikTok & Instagram, etc.)
4. The user can specify what voice model they want to use for the audio, using IBM’s text-to-speech API, the possibilities are endless
5. The user can then specify what kind of background music they want playing in the video
6. Once this step for the user is done, we are able to generate a short version of the Wikipedia article via co:here, create audio for the video via Watson AI, and generate keywords to use while finding GIFs, videos, and images on Pexels and Tenor, and put them in a video format.
## How we built it ⚡️
We mashed up many cutting-edge services to help bring our project to life.
* Firebase Storage - Store Audio files From Watson in the Cloud ☁️
* Watson Text-to-Speech - Generate audio for the video 🎵
* Wikipedia API - Get all the information from Wikipedia ℹ️
* co:here Generate API - Generate summaries for Wikipedia articles. The generate API is also used to find the best visual elements for the video. 🤖
* GPT-3 - Help generate training data for co:here at scale 🤖
* Pexels API - Find images and videos to put into our generated video 🖼
* Remotion - React library to help us play and assist in generating a video 🎥
* Tailwind CSS - CSS Framework ⭐️
* React.js - Frontend Library ⚛️
* Node.js & Express.js - Frameworks 🔐
* Figma - Design 🎨
## Challenges we ran into ⚡️
### co:here
We were determined to use co:here in this project, but we ran into a few major obstacles.
First, every call to co:here’s `generate` API had to contain no more than 2048 tokens. Our goal was to summarise whole Wikipedia articles, which often contain far more than 2048 words. To get around this, we developed complex algorithms to summarize sections of articles, then summarize groups of summaries, and so on.
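A minimal sketch of that hierarchical loop; `summarize` below is a stub standing in for a single co:here generate call, and the word-count token estimate is a simplifying assumption:

```python
# Summarize chunks that fit under the limit, then summarize the summaries.
def summarize(text: str) -> str:             # stub for one co:here generate call
    return text[:200]

def summarize_long_text(text: str, token_limit: int = 2048) -> str:
    words = text.split()
    if len(words) <= token_limit:             # crude token estimate by word count
        return summarize(text)
    chunk = token_limit // 2                  # leave room for the prompt itself
    chunks = [" ".join(words[i:i + chunk]) for i in range(0, len(words), chunk)]
    partials = [summarize(c) for c in chunks]
    return summarize_long_text(" ".join(partials), token_limit)  # recurse
```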
It was difficult to preserve accuracy during this process, because the models were not perfect. We tried to engineer prompts using few-shot learning methods to teach our model what a good summary was. We even used GPT3 to generate training examples at scale! However, we were always limited by the 2048-token limit. Training data uses up capacity that we need for input.
A strange consequence of few-shot learning is that the model would pick up on the contents and cause our training data to bleed into our summaries. For example, one of our training summaries was a paragraph about Waterloo. When we asked co:here to summarize an article about geological faults, it wrongly claimed that there was one in Waterloo.
We wanted to fit our videos into a certain amount of viewing time. We tried to restrict the duration using a token limit, but co:here does not consider the limit when planning its summaries; it sometimes goes into too much detail and misses points from later in the text.
## Accomplishments that we're proud of ⚡️
* We are proud of using the co:here platform
* We are proud that we will be able to start sharing this platform after this hackathon is over
* We are proud that people will be able to use this
* We are proud of overcoming our obstacles
* We were able to accomplish all functionalities
* Most of all we had **fun**!
## What we learned ⚡️
We learned so much throughout the course of the hackathon. Natural Language Processing is not a silver bullet. In order to get our models to do what we want, we have to think like them. We didn’t have much experience using NLP but now we will continue to explore more applications for it.
## What's next for nu:here ⚡️
Adding features for users to customize and share videos is top priority for us on the engineering side. At the same time, we must address the elephant in the room: accuracy. In our quest to make information accessible and digestible, we must try as hard as we can to guard our users from mis-summarizations. Better models and user feedback can help us get there.
**View Video Demo Here (if the Youtube Video does not work): [Demo](https://cdn.discordapp.com/attachments/1019611034971013171/1021027599285231716/2022-09-18_07-52-35_Trim.mp4)**
|
## **Inspiration:**
Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.
## **What it does:**
Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.
## **How we built it**:
We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders.
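As a sketch of the audio-response step, here is how a Flask endpoint might return synthesized speech as base64 for playback without re-renders; the route name and TTS stub are assumptions, not our actual code:

```python
# Return feedback text plus base64-encoded audio in a single JSON response.
import base64
from flask import Flask, jsonify

app = Flask(__name__)

def synthesize_speech(text: str) -> bytes:   # stub for the real TTS call
    return b"...audio bytes..."

@app.route("/feedback", methods=["POST"])
def feedback():
    text = "Here is a hint about that diagram..."
    return jsonify({
        "text": text,
        "audio_b64": base64.b64encode(synthesize_speech(text)).decode("ascii"),
    })
```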
## **Challenges we ran into**:
Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.
## **Accomplishments that we're proud of**:
Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One of the aspects of our implementation that separates us from other approaches is that we allow the user to perform ICL (In-Context Learning) seamlessly, a feature that many large language models don't offer.
## **What we learned**:
We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
|
## Inspiration
The amount of data in the world today is mind-boggling. We are generating 2.5 quintillion bytes of data every day at our current pace, but the pace is only accelerating with the growth of IoT.
We felt that the world was missing a smart find-feature for videos. To unlock heaps of important data from videos, we decided on implementing an innovative and accessible solution to give everyone the ability to access important and relevant data from videos.
## What it does
CTRL-F is a web application implementing computer vision and natural-language-processing to determine the most relevant parts of a video based on keyword search and automatically produce accurate transcripts with punctuation.
## How we built it
We leveraged the MEVN stack (MongoDB, Express.js, Vue.js, and Node.js) as our development framework and integrated multiple machine learning/artificial intelligence techniques provided by industry leaders, shaped by our own neural networks and algorithms, to provide the most efficient and accurate solutions.
We perform key-word matching and search result ranking with results from both speech-to-text and computer vision analysis. To produce accurate and realistic transcripts, we used natural-language-processing to produce phrases with accurate punctuation.
We used Vue to create our front-end and MongoDB to host our database. We implemented both IBM Watson's speech-to-text API and Google's Computer Vision API along with our own algorithms to perform solid key-word matching.
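As a sketch of the rank-matching idea described above (the weights here are illustrative assumptions, not our actual algorithm), each timestamped segment can be scored by keyword hits from both the transcript and the vision labels:

```python
# Score segments by combined speech-to-text and vision keyword hits, then sort.
def rank_segments(segments, keyword):
    """segments: dicts like {"start": 12.0, "transcript": "...", "labels": [...]}"""
    kw = keyword.lower()
    scored = []
    for seg in segments:
        score = 2.0 * seg["transcript"].lower().count(kw)               # speech hits
        score += 1.0 * sum(kw in lbl.lower() for lbl in seg["labels"])  # vision hits
        if score > 0:
            scored.append((score, seg["start"]))
    return [start for score, start in sorted(scored, reverse=True)]
```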
## Challenges we ran into
Trying to implement both Watson's API and Google's Computer Vision API proved to have many challenges. We originally wanted to host our project on Google Cloud's platform, but with many barriers that we ran into, we decided to create a RESTful API instead.
Due to the number of new technologies that we were figuring out, it caused us to face sleep deprivation. However, staying up for way longer than you're supposed to be is the best way to increase your rate of errors and bugs.
## Accomplishments that we're proud of
* Implementation of natural-language-processing to automatically determine punctuation between words.
* Utilizing both computer vision and speech-to-text technologies along with our own rank-matching system to determine the most relevant parts of the video.
## What we learned
* Learning a new development framework a few hours before a submission deadline is not the best decision to make.
* Having a set scope and specification early-on in the project was beneficial to our team.
## What's next for CTRL-F
* Expansion of the product into many other uses (professional education, automate information extraction, cooking videos, and implementations are endless)
* The launch of a new mobile application
* Implementation of a Machine Learning model to let CTRL-F learn from its correct/incorrect predictions
|
partial
|
## Inspiration
Inspired by personal experience of commonly getting separated in groups and knowing how inconvenient and sometimes dangerous it can be, we aimed to create an application that kept people together. We were inspired by how interlinked and connected we are today by our devices and sought to address social issues while using the advancements in decentralized compute and communication. We also wanted to build a user experience that is unique and can be built upon with further iterations and implementations.
## What it does
Huddle employs mesh networking capability to maintain a decentralized network among a small group of people, but can be scaled to many users. By having a mesh network of mobile devices, Huddle manages the proximity of its users. When a user is disconnected, Huddle notifies all of the devices on its network, thereby raising awareness, should someone lose their way.
The best use-case for Huddle is in remote areas where cell-phone signals are unreliable and managing a group can be cumbersome. In a hiking scenario, should an unlucky hiker choose the wrong path or be left behind, Huddle will reduce risks and keep the team together.
## How we built it
Huddle is an Android app built with the RightMesh API. With many cups of coffee, teamwork, brainstorming, help from mentors, team-building exercises, and hours in front of a screen, we produced our first Android app.
## Challenges we ran into
Like most hackathons, our first challenge was deciding on an idea to proceed with. We employed the use of various collaborative and brainstorming techniques, approached various mentors for their input, and eventually we decided on this scalable idea.
As mentioned, none of us had developed for Android before, so we faced a large learning curve: setting up our environment, building small test applications, and eventually the app you see today.
## Accomplishments that we're proud of
One of our goals was to be able to develop a completed product at the end. Nothing feels better than writing this paragraph after nearly 24 hours of non-stop hacking.
Once again, developing a rather complete Android app without any developer experience was a monumental achievement for us. Learning and stumbling as we go in a hackathon was a unique experience and we are really happy we attended this event, no matter how sleepy this post may seem.
## What we learned
One of the skills we gained through this process was organizing and running a tightly-knit development cycle. We also grew in user experience design, in understanding how the Android environment works, and in keeping ourselves and our product adaptable to change. Many design changes occurred, and it was great to see that the result was still what we set out to build.
Aside from the development experience, we also saw many ideas from other people and different ways of tackling similar problems, and we hope to build upon these ideas in the future.
## What's next for Huddle
We would like to build upon Huddle and explore different ways of using the mesh networking technology to bring people together in meaningful ways, such as social games, getting to know new people close by, and facilitating unique ways of tackling old problems without centralized internet and compute.
Also V2.
|
## Inspiration
The moment we formed our team, we all knew we wanted to create a hack for social good which could even aid in a natural calamity. According to the reports of World Health Organization, there are 285 million people worldwide who are either blind or partially blind. Due to the relevance of this massive issue in our world today, we chose to target our technology towards people with visual impairments. After discussing several ideas, we settled on an application to ease communication and navigation during natural calamities for people with visual impairment.
## What it does
It is an Android application developed to help the visually impaired navigate through natural calamities using peer to peer audio and video streaming by creating a mesh network that does not rely on wireless or cellular connectivity in the event of a failure. In layman terms, it allows people to talk with and aid each other during a disastrous event in which the internet connection is down.
## How we built it
We decided to build an application on the Android operating system, so we used the official integrated development environment: Android Studio. By integrating Google's Nearby API into our Java files and using NFC and Bluetooth, we were able to establish a peer-to-peer communication network between devices requiring no internet or cellular connectivity.
## Challenges we ran into
Our entire concept of communication without an internet or cellular network was a tremendous challenge. Specifically, the two greatest challenges were ensuring the seamless integration of audio and video between devices. Establishing a smooth connection for audio was difficult because it was streamed live in real time and had to be transferred without the use of a network. Transmitting video posed an even greater challenge: since we couldn't send the video data over a network either, we had to convert the video to bytes and transmit those to ensure no transmission loss. We persevered through these challenges and, as a result, were able to create a peer-to-peer network solution.
## Accomplishments that we're proud of
We are all proud of having created a fully implemented application that works across the android platform. But we are even more proud of the various skills we gained in order to code a successful peer to peer mesh network through which we could transfer audio and video.
## What we learned
We learned how to utilize Android Studio to create peer to peer mesh networks between different physical devices. More importantly, we learned how to live stream real time audio and transmit video with no internet connectivity.
## What's next for Navig8.
We are very proud of all that we have accomplished so far. However, this is just the beginning. Next, we would like to implement location-based mapping. In order to accomplish this, we would create a floor plan of a certain building and use cameras such as Intel’s RealSense to guide visually impaired people to the nearest exit point in a given building.
|
## Inspiration
Many campus organizations struggle to gain visibility through one medium alone, resorting to spreading the word via flyers, slack, discord, emails, handing out swag, and more. This process is time-consuming and expensive, with no guarantee of effectiveness. There are also many social events that actively seek to attract a larger crowd but have no means of doing so.
From a user perspective, finding out when and where events on-campus are happening should not require hopping to multiple apps, websites, or texts. This information should be both centralized and organized in a manner that will make sense in your own schedule.
## What it does
Huddle is a one-stop shop for discovering student-run events, from parties, to school org meetings, to fundraisers, giveaways, and more!
Want to attend an event? Add it to your calendar! These events will pop up on an interactive, ML-generated efficiency map so you can see pinpoints of exactly where they are on campus. Event details will appear on your dashboard.
With the tap of a screen, you can control and organize your social life. Our authentication system enables only students from your school to post and view events, making Huddle personalized, reliable, and convenient on-the-go.
## How we built it
@nairfreya: Used react-native frontend skills to develop a timeline screen for the Huddle app. The page displays the title, time, date, and location of user-created events using FlatList and State Hooks for real-time changes.
@euliu: Used Firebase to implement a working home navigator screen with filters and pinpoints. Designed and created the welcome, log-in, and "create new event" pages on react-native, utilizing MapView. Attempted to deploy the app on TestFlight.
@celestion && @jkaus: marketing and ideation for Huddle.
## Challenges we ran into
* Incorporating Google Auth proved to be a lengthier process than originally estimated; though we still see this as a core component of Huddle, we decided not to include this functionality in the interest of time.
* Though we were set to deploy our app onto TestFlight, the upload glitched and could not be resolved within the timeframe of our hackathon. While this was disappointing, we are delighted to have seen our app come to life and can save deployment for the future.
## Accomplishments that we're proud of
* Designing a working prototype that takes in user input, generating a flow of events that they have created themselves
* Interactive navigation on the home map screen
* Clicking each pinpoint to see event details
* Gaining ~30 people on a waitlist for our app, along with feedback
## What we learned
* There's always an alternative when things go awry :o
* Our interview with YC was a nudge towards the innovative appeal of our idea—what sets Huddle apart from similar apps?
+ One difference is that ideally, Huddle would map out your day as efficiently as possible based on personal preferences. This service will provide a map around campus with the most efficient route to basically autopilot your day.
+ Another significant difference is that apps like these have not been designed to be school-specific. Currently, there is no single app that exists across *multiple* schools and their student populations to enhance social event visibility, scheduling, or route optimization. Huddle provides a powerfully new level of convenience, in only a couple taps.
## What's next for Huddle
* use ML to chart an 'optimized route' between your selected events
* stories / post library for each event location
+ tags
* 'suggested for you' tab based on interests
* creating private vs. public events
* as suggested by user feedback, add a “connect with friends” feature, to see what they plan to attend!
+ can use Twilio to implement a related notification system
* monetization
* gamification of the app (i.e. a point system for attending certain events, and a leaderboard among friends)
|
partial
|

## Motivation
There are reports across university campuses and cities alike that bike thefts are at an all-time high. You never think it's going to happen to you until it does. My bike was stolen when I was a child, and the only reason I was able to find the people responsible was that a few friends saw the strangers who had taken it. We wanted to integrate this watchful eye into the bike itself.
Enter VectorPI.
## What it does
Intended to be integrated into the frame of a bike's handlebars, VectorPI can detect how close someone is to the bike, tell whether the bike is moving, track its location, and capture photo/video of a potential thief, all in real time. You can visualize this data through VectorPI's mobile-friendly website.
## How we built it
Our IoT bike-security application relies on tight integration between software and hardware. On the hardware side, our security measures consist of an HC-SR04 ultrasonic sensor and a BNO055 orientation sensor. These sensors let us detect whether someone is approaching the bike or hovering directly over it for a prolonged period of time. Once this motion is detected, the built-in camera begins recording video and taking pictures of the thief, and the owner of the bicycle/vehicle is alerted. To ensure that the security system is not reporting a false positive, the BNO055's built-in gyroscope checks for heavy movement in the bike's orientation, which is indicative of the bike being moved or transported.
The algorithm developed by the team implements GPS tracking of the vehicle, which can be used by authorities to pinpoint its location. As an added measure of security and convenience, we plan to add a fingerprint scanner to enable and deactivate VectorPI as necessary.
All of the sensors are connected to an Arduino UNO and then interfaced with a Raspberry Pi, which collects the data, pushes it into a cloud database (we used Firebase), and records video when necessary. The Python code on the Raspberry Pi is therefore responsible for all information being delivered.
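For illustration, reading the ultrasonic sensor from Python on the Pi might look roughly like this (the pin numbers and alert threshold are placeholders, not the team's actual values):

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24  # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10-microsecond pulse on TRIG starts a measurement
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:  # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:  # time how long the pulse stays high
        end = time.time()
    # Sound travels ~34300 cm/s; halve for the round trip
    return (end - start) * 34300 / 2

if read_distance_cm() < 50:  # someone within half a metre (assumed threshold)
    print("Proximity alert: start camera, check gyroscope for movement")
```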
Finally, our web app connects to the database and consists of a data visualization dashboard, giving the owner all necessary information on the position, security, and whereabouts of their bicycle.
|
## Inspiration
In the world where technology is intricately embedded into our lives, security is an exciting area where internet devices can unlock the efficiency and potential of the Internet of Things.
## What it does
Sesame is a smart lock that uses facial recognition in order to grant access. A picture is taken from the door and a call is made to a cloud service in order to authenticate the user. Once the user has been authenticated, the door lock opens and the user is free to enter the door.
## How we built it
We used a variety of technologies to build this project. First, a Raspberry Pi is connected to the internet and has a servo motor, a button, and a camera attached. The Pi runs a Python client which makes calls to a Node.js app running on IBM Bluemix. The app handles requests to train and test image classifiers using the Watson Visual Recognition service. We trained a classifier with 20 pictures of each of us and tested it on unseen data by taking a new picture through our system. To control the lock, we connected a servo to the Raspberry Pi and wrote C code using the wiringPi library and PWM. The lock only opens if we reach a confidence of 70% or above; we determined this number after several tests. The servo moves the lock through a 3D-printed adapter that connects the servo to the lock.
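A rough sketch of the Pi-side client flow (the Bluemix endpoint URL and response fields are invented placeholders; only the 70% threshold comes from the write-up):

```python
import requests

APP_URL = "https://example-bluemix-app.mybluemix.net/classify"  # hypothetical

def open_lock():
    # On the real build this drives the servo via wiringPi/PWM
    print("Unlocking door")

def try_unlock(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(APP_URL, files={"image": f})
    result = resp.json()
    # Open the lock only above 70% classifier confidence
    if result.get("score", 0) >= 0.7:
        open_lock()
```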
## Challenges we ran into
We wanted to build our whole project in Python, using a library for the GPIO interface of the Pi and OpenCV for the facial recognition. However, we were missing some OpenCV packages and did not have time to rebuild the library, and the Python GPIO library was not working properly for controlling the servo motor. After encountering these issues, we changed direction and focused on building a Node.js app to handle authentication and the Visual Recognition service to handle the classification of users.
## Accomplishments that we're proud of
What we are all proud of is that in just one weekend, we learned most of the skills required to finish our project. Ming learned 3D modeling and printing, and to program the GPIO interface on the Pi. Eddie learned the internet architecture and the process of creating a web app, from the client to the server. Atl learned how to use IBM technologies and to adapt to the unforeseen circumstances of the hackathon.
## What's next for Sesame
The prototype we built could be improved upon by adding additional features that would make it more convenient to use. Adding a mobile application that could directly send the images from an individual’s phone to Bluemix would make it so that the user could train the visual recognition application from anywhere and at anytime. Additionally, we have plans to discard the button and replace it with a proximity sensor so that the camera is efficient and only activates when an individual is present in front of the door.
|
## Inspiration
The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustrations of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect.
## What it does
Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture these heatmaps are real time, updating dynamically as the WiFi strength changes.
## How we built it
We split up the work into three parts: mobile, cloud, and visualization and had each member of our team work on a part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server and iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for large amounts of data throughput. For the visualization, we took the points we received and used gaussian kernel density estimation to generate interpretable heatmaps.
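For the visualization step, a minimal sketch using SciPy's `gaussian_kde` (the sample points are made up; the real app feeds in crowdsourced readings, which could be weighted by signal strength via the `weights` argument):

```python
import numpy as np
from scipy.stats import gaussian_kde

# (x, y) positions of WiFi readings; real data comes from user phones
points = np.random.rand(2, 200)
kde = gaussian_kde(points)

# Evaluate the estimated density on a grid for heatmap rendering
xs, ys = np.mgrid[0:1:100j, 0:1:100j]
heatmap = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
```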
## Challenges we ran into
Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data, and use GPS as well to calibrate the data, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with this approach. We ran into even more issues when we used a device with less accurate sensors like an Android phone.
## Accomplishments that we're proud of
We are particularly proud of getting accurate paths travelled from the phones. We initially tried to use double integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless and that GPS did not function well indoors due to the walls affecting the time-of-flight measurements. Instead, we used a built in pedometer framework to estimate distance travelled (this used a lot of advanced on-device signal processing) and combined this with the average heading (calculated using a magnetometer) to get meter-level accurate distances.
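A toy version of that pedometer-plus-heading dead reckoning (the step distances and headings below are invented sample values):

```python
import math

def integrate_path(steps):
    """steps: list of (distance_m, heading_deg) pairs -> (x, y) positions."""
    x = y = 0.0
    path = [(x, y)]
    for distance, heading_deg in steps:
        heading = math.radians(heading_deg)
        x += distance * math.sin(heading)  # east component
        y += distance * math.cos(heading)  # north component
        path.append((x, y))
    return path

print(integrate_path([(10, 0), (5, 90)]))  # 10 m north, then 5 m east
```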
## What we learned
* Locationing is hard! Especially indoors or over short distances.
* Firebase’s realtime database was extremely easy to use and very performant.
* Distributing the data processing between the server and client is a balance worth playing with.
## What's next for Hotspot
Next, we’d like to expand our work on the iOS side and create a sister application for Android (currently in the works). We’d also like to overlay our heatmap on Google maps.
There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to on-the-fly compute areas with internet access still up or produce approximate population heatmaps.
|
losing
|
## Inspiration
A countless number of people are affected by mental health issues and have their lives drastically impacted. Suicide accounts for a quarter of all deaths in young people, most suicide attempts come from lower-income individuals, and sessions to help individuals suffering from mental health problems are costly, ranging from $100 to $200 per session.
We designed a product that makes mental health support accessible to **ANYONE** at **ANY TIME**.
## What it does
The software provides a text interface and a voice interface for communication with the end-user (depending on their preference). Individuals suffering from mental health problems can speak with a conversational AI about their problems. This can circumvent the problem of long wait times to see therapists or people feeling uncomfortable speaking to another human about their problems.
## How we built it
The back-end conversational AI uses a GPT transformer-based approach to converse. A pre-trained conversational GPT model was fine-tuned via transfer learning to behave like a therapist. We used a dataset of 800 therapist questions and answers to provide the additional training.
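As a hedged sketch of what such transfer learning can look like with the Hugging Face `transformers` library (the write-up doesn't name its exact stack, so the model choice, data format, and hyperparameters here are all assumptions):

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Each of the ~800 therapist Q/A pairs becomes one training example
pairs = [("I feel anxious lately.",
          "That sounds difficult. What do you think is behind it?")]
texts = [q + tokenizer.eos_token + a + tokenizer.eos_token for q, a in pairs]

class TherapyDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        self.items = [tokenizer(t, truncation=True, max_length=256,
                                return_tensors="pt") for t in texts]
    def __len__(self):
        return len(self.items)
    def __getitem__(self, i):
        ids = self.items[i]["input_ids"].squeeze(0)
        # Causal LM: labels mirror the inputs; the model shifts internally
        return {"input_ids": ids, "labels": ids.clone()}

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=TherapyDataset(texts),
).train()
```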
We used a Flask API to connect this back-end to our front end.
The front-end consists of a React-Native mobile application that is very simple to use. It incorporates a Twilio online text communication platform and a Dasha.AI voice agent to interface with the conversational AI.
## Challenges we ran into
* Time management
* Getting the correct format for transfer learning
* Setting up Twilio and Dasha.AI
## Accomplishments that we're proud of
We provide a product that can help other people and this is something we couldn't be more proud of.
## What we learned
* Communication is important in dev projects
* Splitting up the work is necessary - don't step on each others' feet
* Pre-trained conversational AI models are powerful as-is
* There are many resources out there for end-user interaction, such as Twilio and Dasha.AI
## What's next for MentAIly
We are committed to continuing the development of MentAIly. We look to improve the back-end through additional learning for conversational AI. We also hope to improve the user interface after receiving feedback.
|
## Inspiration
We noticed that there is no current software able to spell-check images. When we learned about the Adobe Challenge, we saw a possible blend of these two ideas. Many people work with graphics and PDFs in Adobe products, and being able to proofread these elements can give them peace of mind. This applies to graph captions, written reports, posters, and more.
## What it does
SpellPix takes an image containing text, extracts the text, and checks its spelling, mapping each incorrect spelling to the correct one.
## How we built it
We used the Adobe add-on API, Google Vision API, and a language database to see text in images, spell check it, and input it into Adobe.
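A minimal sketch of that extract-then-spellcheck pipeline (we substitute the open-source `pyspellchecker` library for the team's unnamed language database):

```python
from google.cloud import vision
from spellchecker import SpellChecker

def misspellings_in_image(path):
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    annotations = client.text_detection(image=image).text_annotations
    if not annotations:
        return {}
    # The first annotation holds the full extracted text
    text = annotations[0].description

    spell = SpellChecker()
    words = [w.strip(".,!?").lower() for w in text.split()]
    # Map each misspelled word to the most likely correction
    return {w: spell.correction(w) for w in spell.unknown(words)}
```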
## Challenges we ran into
Connecting the front end to the back end. We were not able to call the spell-check and text-extraction functions in the index.js file; we believe this is because of the Adobe SDK import. We tried to solve this bug for hours using many methods, but were still not successful.
## Accomplishments that we're proud of
For the first time, we all used and called cloud APIs and configured all of the authentication. We were able to complete our backend and our frontend, and we figured out how to create a great Adobe add-on plugin.
## What we learned
We learned all about JavaScript, HTML, CSS, and Google Cloud API implementation, and created and used the Adobe plugin development interface, Adobe CLI, and SDK.
## What's next for SpellPix
We would love to connect our front end to our back end and get it working in the Adobe add-on. We would also like to highlight incorrect text and show a popup menu of suggested corrections that the user can click to apply; progress has already been made on highlighting and boxing the words on the image itself.
|
## Story
Mental health is a major issue especially on college campuses. The two main challenges are diagnosis and treatment.
### Diagnosis
Existing mental health apps require the user to proactively input their mood, their thoughts, and their concerns. With these apps, it's easy to hide one's true feelings.
We wanted to find a better solution using machine learning. Mira uses visual emotion detection and sentiment analysis to determine how they're really feeling.
At the same time, we wanted to use an everyday household object to make it accessible to everyone.
### Treatment
Mira focuses on being engaging and keeping track of the user's emotional state. She allows users to see their emotional state and history, and then analyze why they're feeling that way using the journal.
## Technical Details
### Alexa
The user's speech is being heard by the Amazon Alexa, which parses the speech and passes it to a backend server. Alexa listens to the user's descriptions of their day, or if they have anything on their mind, and responds with encouraging responses matching the user's speech.
### IBM Watson/Bluemix
The speech from Alexa is passed to IBM Watson, which performs sentiment analysis to determine how the user is actually feeling from their words.
### Google App Engine
The backend server is being hosted entirely on Google App Engine. This facilitates the connections with the Google Cloud Vision API and makes deployment easier. We also used Google Datastore to store all of the user's journal messages so they can see their past thoughts.
### Google Vision Machine Learning
We take photos using a camera built into the mirror. The photos are sent to the Vision ML API, which finds the user's face and extracts their emotions from each photo. The results are then stored directly in Google Datastore, which integrates well with Google App Engine.
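A hedged sketch of that emotion-extraction call (the field names follow the Cloud Vision face-detection API; the Datastore write is omitted for brevity):

```python
from google.cloud import vision

def detect_emotions(image_bytes):
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=image_bytes))
    faces = response.face_annotations
    if not faces:
        return None
    face = faces[0]
    # Likelihood enums range from VERY_UNLIKELY to VERY_LIKELY
    return {
        "joy": face.joy_likelihood,
        "sorrow": face.sorrow_likelihood,
        "anger": face.anger_likelihood,
        "surprise": face.surprise_likelihood,
    }
```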
### Data Visualization
Each user can visualize their mental history through a series of graphs. The graphs are each color-coded to certain emotional states (Ex. Red - Anger, Yellow - Joy). They can then follow their emotional states through those time periods and reflect on their actions, or thoughts in the mood journal.
|
losing
|
# Operation Trousers
We're trying to help educate people about shorts, because they're important. The end result is a simple and informative webpage, coupled with beautiful visualizations from real data. Come check it out!

|
## Inspiration
One of our team members saw two foxes playing outside a small forest. Eager, he went closer to record them, but by the time he was there, the foxes were gone. Wishing he could have recorded them, or at least gotten a recording from one of the locals, he imagined a digital system in nature. With the help of his teammates, this project grew into a real application and service that could change the landscape of the digital playground.
## What it does
It is a social media and educational application that stores recorded data in a digital geographic tag, which users of the app can access and play back. Unlike other social platforms, this application works only if you are at the geographic location where the picture was taken and the footprint was imparted. On the educational side, the application offers overlays of monuments, buildings, or historical landscapes, where users can scroll through historical pictures of the exact location they are standing in. The images have captions that can serve instructional and educational purposes, and the overlay function gives the user a realistic experience of the location at a different time.
## How we built it
Lots of hours of no sleep and thousands of GitHub pushes and pulls. We've seen more red lines this weekend than in years put together. We used APIs and tons of trial and error, experimentation, and absurd humour and jokes to keep us alert.
## Challenges we ran into
The app did not want to behave: the APIs would give us false results or, as in the case of Google Vision, inaccurate ones. Merging Firebase with Android Studio rarely went down without a fight. The pictures we recorded would load horizontally even if taken vertically. The GPS location and AR caused issues with the server, and there were many more problems we just don't want to recall...
## Accomplishments that we're proud of
The application is fully functional and has all the basic features we planned from the beginning. We got over a lot of bumps in the road and never gave up. We are proud to see this app demoed at Penn Apps XX.
## What we learned
Firebase from very little prior experience; working with GPS services; recording the longitude and latitude from the pictures we took and sending them to the server; placing digital tags on a spatial digital map using Mapbox; and working with the painful Google Vision API to analyze our images before making them available on the map.
## What's next for Timelens
There are multiple features we would have loved to finish at Penn Apps XX, but it was unrealistic due to time constraints. We have new ideas for using the application in wider areas of daily life, not only in education and social networks, and for creating an interaction mode between AR and the user to add functionality to the augmentation.
|
## Inspiration
There should be an effective way to evaluate company value by examining the individual values of those that make up the company.
## What it does
Simplifies the research process of examining a company by showing it in a dynamic web design that is free-flowing and easy to follow.
## How we built it
It was originally built using a web scraper, written in Python, that scraped data from LinkedIn. The web visualizer was built using JavaScript and the VisJS library to provide a dynamic view with aesthetically pleasing physics. Web components were used to keep the display clean.
## Challenges we ran into
Gathering and scraping the data was a big obstacle; we had to pattern-match against LinkedIn's data.
## Accomplishments that we're proud of
It works!!!
## What we learned
Learning to use various libraries and how to set up a website.
## What's next for Yeevaluation
Fine-tuning and reimplementing the dynamic node graph and history, and revamping the project, considering it was made in only 24 hours.
|
partial
|
## Inspiration
Memes
## What it does
Using AR technology, Wae FindAR helps you find the best route to your destination with Knuckles helping guide you through da wae at each turn.
## How we built it
Knuckles and his footsteps were rendered with Unity and the app was built with Android Studio using Java. The landing page website for our new startup was created using basic HTML, CSS, and Javascript.
## Challenges we ran into
Finding the right idea that would turn us into millionaires, rendering Knuckles in the right places (or at all), choosing the right APIs and using them correctly, learning how to make HTTP calls on Android, dealing with sync and async calls in different threads, and getting the AR from Unity to work with the Google Maps application: essentially every aspect of the app was challenging, but we pushed through!
## Accomplishments that we're proud of
The map works and we achieved a working AR.
## What we learned
Essentially everything the team did was all new to us.
## What's next for Wae FindAR
Fix up the AR to work with the actual map to get Knuckles as an actual floating avatar through the use of animations and a more distinctive UI. Implementing blockchain in the near very far future and having our ICO in Q2 2018.
|
# bmbot
BM's a user
|
## Inspiration
Open-world AR applications like Pokemon Go that bring AR into everyday life and the outdoors were major inspirations for this project. Additionally, in thinking about how to integrate smartphones with Spectacles, we found inspiration in video games like Phasmophobia, whose EMF sensors react more strongly in the presence of ghosts, and The Legend of Zelda: Skyward Sword, which contains an in-game tracking functionality that pulsates more strongly when facing the direction of, and walking closer to, a target.
## What it does
This game integrates the Spectacles gear and smartphones together by allowing users to leverage the gyroscopic, haptic, and tactile functionalities of phones to control or receive input about their AR environment. In the game, users have to track down randomly placed treasure chests in their surrounding environment by using their phone as a sensor that begins vibrating when the user is facing a treasure and enters stronger modes of haptic feedback as users get closer to the treasure spots.
These chests come in three types: monetary, puzzle, and challenge. Monetary chests immediately give users in-game rewards. Puzzle chests engage users in a single-player mini-game that may require cognitive or physical activity. Finally, challenge chests similarly engage users in activities that are not necessarily games; a multiplayer stretch goal was that if multiple users were near a spot where another user found a treasure, those n users could challenge the treasure finder to an n vs. 1 duel, with the winner(s) taking the rewards.
## How we built it
Once we figured out our direction for the project, we built a user flow architecture in Figma to brainstorm the game design for our application ([link](https://www.figma.com/design/pyG5hlpYkWwVcyvIQJCnY3/Treasure-Hunt-UX-Architecture?node-id=0-1&node-type=canvas&t=clqInR0JpOM6tEnv-0)), and we also visualized how to implement the system for integrating phone haptic feedback with the spectacles depending on distance and directional conditions.
From there, we each took on specific aspects of the user flow architecture to primarily work on: (1) the treasure detection mechanism, (2) spawning the treasure once the user entered within a short distance from the target, and (3) the content of the treasure chests (i.e. rewards or activities). Nearly everything was done using in-house libraries, assets, and the GenAI suite within Snap's Lens Studio.
## Challenges we ran into
As we were working with Spectacles for the first time (compounded with internet problems), we initially encountered technical issues with setting up our development environment and linking the Spectacles for debugging. Due to limited documentation and forums since it is limited-access technology, we had to do a lot of trial-and-error and guessing to figure out how to get our code to work, but luckily, Snap's documentation provided templates to work off of and the Snap staff was able to provide technical assistance to guide us in the right direction. Additionally, given one Spectacle to work with, parallelizing our development work was quite challenging as we had to integrate everything onto one computer while dealing with merge conflicts between our code.
## Accomplishments that we're proud of
In a short span of time, we were able to successfully build a game that provides a unique immersive experience! We've come across and solved errors that didn't have solutions on the internet. For a couple of members of our team, this sparks a newfound interest in the AR space.
## What we learned
This was our first time working with Lens Studio, and it has unanimously been smooth and great software to work with.
For the experienced members on our team, it's been a rewarding experience to make an AR application using JS/TS instead of C# which is the standard language used in Unity.
## What's next for SnapChest
We're excited to push this app forward by adding more locations for treasures, implementing a point system, and also a voice agent integration that provides feedback based on where you're going so you won't get bored on your journey!
If Spectacles would be made available to the general public, a multiplayer functionality would definitely gain a lot of traction and we're looking forward to the future!
|
losing
|
## Inspiration
3D printing offers quick and easy access to a physical design from a digitized mesh file. Transferring a physical model back into a digitized mesh is much less successful or accessible on a desktop platform. We sought to create our own desktop 3D scanner that could generate high-fidelity, colored, and textured meshes for 3D printing or for including models in computer graphics. The build is named after our good friend Greg, who let us borrow his stereo camera for the weekend, enabling this project.
## How we built it
The rig uses a ZED stereo camera, driven by a ROS wrapper, to take stereo images at various known poses along a spiral, which is executed with precision by two stepper motors driving a leadscrew elevator and a turntable for the model being scanned. We designed the entire build in high-detail CAD using Autodesk Fusion 360, and 3D printed L-brackets and mounting hardware to secure the stepper motors to the T-slot aluminum frame we cut at the metal shop at Jacobs Hall. There are also 1/8th-inch wood pieces that were laser cut at Jacobs, including the turntable itself. We designed the power system around an Arduino microcontroller and an Adafruit motor shield to drive the steppers. The Arduino is controlled by Python over a serial port and the ZED camera through its ROS wrapper, automating the process of capturing the images used as input to OpenMVG/MVS to compute dense point clouds and eventually refined meshes.
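As an illustration, the Python-side capture loop might look something like this (the serial port, baud rate, and command strings are assumptions; the real rig's protocol and the ZED ROS calls are the team's own):

```python
import time
import serial

arduino = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # hypothetical port

def trigger_zed_capture(turn, step):
    # In the real rig this goes through the ZED ROS wrapper
    print(f"capture pose turn={turn} step={step}")

def capture_spiral(turns=8, steps_per_turn=24):
    for turn in range(turns):
        arduino.write(b"RAISE\n")  # hypothetical: raise elevator one increment
        for step in range(steps_per_turn):
            arduino.write(b"ROTATE\n")  # hypothetical: advance turntable
            time.sleep(0.5)             # let vibrations settle before imaging
            trigger_zed_capture(turn, step)
```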
## Challenges we ran into
We ran into a few minor mechanical design issues that were unforeseen in the CAD; luckily, we had access to a 3D printer throughout the entire weekend and were able to iterate quickly on the tolerancing of some problematic parts. Issues with the AccelStepper library for Arduino, used to simultaneously control the velocity and acceleration of two stepper motors, slowed us down early Sunday evening, and we had to read the online documentation extensively to accomplish the control tasks we needed. Lastly, the complex 3D geometry of our rig (specifically the rotation and transformation matrices of the cameras in our defined world coordinate frame) slowed us down, and we believe it is still problematic as the hackathon comes to a close.
## Accomplishments that we're proud of
We're proud of the mechanical design and fabrication, actuator precision, and data-collection automation we achieved in just 36 hours. The outputted point clouds and meshes are still being improved.
|
## How we built it
The sensors consist of the Maxim Pegasus board and any Android phone with our app installed. The two are synchronized at the beginning, and then by moving the "tape" away from the "measure," we can get an accurate measure of distance, even for non-linear surfaces.
## Challenges we ran into
Sometimes the sensors we used, such as Android gyroscopes, produce high-variance outputs. Maintaining an inertial reference frame from our board to the ground as it rotated proved very difficult and required the use of quaternion rotational transforms. Using the Maxim Pegasus board was difficult as it is a relatively new piece of hardware, and thus no APIs or libraries have been written for basic functions yet. We had to query accelerometer and gyro data manually from internal IMU registers over I2C.
## Accomplishments that we're proud of
Full integration with the Maxim board and the flexibility to adapt the software to many different handyman-style use cases, e.g. as a table level, compass, etc. We experimented with and implemented various noise filtering techniques such as Kalman filters and low pass filters to increase the accuracy of our data. In general, working with the Pegasus board involved a lot of low-level read-write operations within internal device registers, so basic tasks like getting accelerometer data became much more complex than we were used to.
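As a miniature example of one of the filters mentioned, here is an exponential low-pass filter of the kind used to tame noisy accelerometer samples (the `alpha` value is a tunable assumption):

```python
def low_pass(samples, alpha=0.2):
    filtered, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y  # each new reading nudges the running value
        filtered.append(y)
    return filtered

print(low_pass([0.0, 1.0, 0.9, 5.0, 1.1, 1.0]))  # the 5.0 spike is damped
```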
## What's next
Other possibilities were listed above, along with the potential to make even better estimates of absolute positioning in space through different statistical algorithms.
|
## Inspiration
We have all been in a situation where we didn’t have access to the internet but needed it to perform something crucial. As outdoor enthusiasts, there are many times when we wish we could use that little bit of entertainment and utility, making our way through the backcountry. Around the world, only 3.3 billion people access the internet via mobile, compared to 5 billion who use SMS services. The lack of consistent internet access has proven to be detrimental sociologically and economically, and we aim to solve a portion of that problem through McAsks.
## What it does
McAsks allows users to ask questions via SMS and receive instantaneous answers. 🤩🤩 These questions can range from finding information 🔍 to directions if you’re lost 🗺️. We’ve also included fun easter eggs like “cat” — get an adorable cat picture — and “cowsay” — a mimic of linux’s iconic and favourite cow. 🐱🐮
## How we built it
We started by prototyping the app in Figma and discussing our vision for McAsks. From there, we separated into our unique roles within the team.
The chatbot application is implemented as a Flask web service. Through webhooks, the application is able to communicate with the Twilio SMS API, facilitating third-party API calls. We made heavy use of the M3O Cloud Platform and its APIs, as well as relying on Geoapify, WeatherAPI, and DuckDuckGo to provide auxiliary features. We also utilized the nltk library for data cleaning and processing.
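A minimal sketch of that webhook wiring with Flask and Twilio's helper library (the command routing is simplified, and the two handlers are stubs standing in for the real M3O/WeatherAPI/DuckDuckGo calls):

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def lookup_weather(q):   return "Sunny, 22C"  # stub for WeatherAPI
def answer_question(q):  return "42"          # stub for M3O / DuckDuckGo

@app.route("/sms", methods=["POST"])
def sms_reply():
    body = request.form.get("Body", "").strip().lower()
    resp = MessagingResponse()
    if body == "cat":
        resp.message("Here's your cat!")  # the real app attaches a picture
    elif body.startswith("weather"):
        resp.message(lookup_weather(body))
    else:
        resp.message(answer_question(body))
    return str(resp)  # Twilio reads the TwiML and sends the SMS back
```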
The accompanying website is built with Next.js, an open-source React framework for building statically generated sites. We used M3O’s DB API as our backend database, experimenting with JAMStack design principles (<https://jamstack.org/>).
💻📱
## Challenges we ran into
Life happens, and unfortunately, many of us could not commit as many hours as we hoped this weekend. That said, we made sure to collaborate efficiently and work hard, and are thrilled to exceed many goals we set for ourselves at the start of the hack! 🥰🥰
## Accomplishments that we're proud of
McAsks itself!! 😍 We can’t wait to see what use cases our users can think of! 😲😲 We are also proud of ourselves for stepping out of our comfort zone to discover and learn new tools and skills, successfully integrating multiple technologies to create a cohesive project. 🛠️🛠️
## What we learned
Alison learned how to use figma! Yay! 🤣 We also picked up more experience working with APIs, particularly that of Twilio! 🎉🎉
## What's next for McAsks
We have many exciting plans for McAsks! First off, we plan to double the number of commands we support, as well as further hone our natural language processing capabilities. You may have noticed the terms “Ask+” and “AskFree” on our website 👀 - yes, we are also planning to introduce a paid tier for those who find the service useful in their lives! We have truly enjoyed our time developing McAsks, and hope to see it continue to grow and mature! 🥳🥳
|
winning
|
# SpeakEasy
## Overview
SpeakEasy: AI Language Companion
Visiting another country but don't want to sound like a robot? Want to learn a new language but can't get your intonation to sound like other people's? SpeakEasy can make you sound like, well, you!
## Features
SpeakEasy is an AI language companion which centers around localizing your own voice into other languages.
If, for example, you wanted to visit another country but didn't want to sound like a robot or Google Translate, you could still talk in your native language. SpeakEasy can then automatically repeat each statement in the target language in exactly the intonation you would have if you spoke that language.
Say you wanted to learn a new language but couldn't quite get your intonation to sound like the source material you were learning from. SpeakEasy is able to provide you phrases in your own voice so you know exactly how your intonation should sound.
## Background
SpeakEasy is the product of a group of four UC Berkeley students. For all of us, this is our first submission to a hackathon and the result of several years of wanting to get together and create something cool. We are excited to present every part of SpeakEasy, from the remarkably accurate AI speech to just how much we've all learned about rapidly developed software projects.
### Inspiration
Our group started by thinking of ways we could make an impact. We then expanded our search to include using and demonstrating technologies developed by CalHacks' generous sponsors, as we felt this would be a good way to demonstrate how modern technology can be used to help everyday people.
In the end, we decided on SpeakEasy and used Cartesia to realize many of the AI-powered functions of the application. This enabled us to make something which addresses a specific real-world problem (robotic-sounding translations) many of us have either encountered or are attempting to avoid.
### Challenges
Our group has varying levels of software development experience, and especially given our limited hackathon experience (read: none), there were many challenging steps. For example: deciding on project scope, designing high-level architecture, implementing major features, and especially debugging.
What was never a challenge, however, was collaboration. We worked quite well as a team and had a good time doing it.
### Accomplishments / Learning
We are proud to say that despite the many challenges, we accomplished a great deal with this project. We have a fully functional Flask backend with a React frontend (see "Technical Details") which uses multiple different APIs. This project successfully ties together audio processing, asynchronous communication, artificial intelligence, UI/UX design, database management, and so much more. What's more, many of our group members learned all of this from base fundamentals.
## Technical Details
As mentioned in an earlier section, SpeakEasy is designed with a Flask (Python) backend and React (JavaScript) frontend. This is a very standard setup that is used often at hackathons due to its easy implementation and relatively limited required setup. Flask only requires two lines of code to make an entirely new endpoint, while React can make a full audio-playing page with callbacks that looks absolutely beautiful in less than an hour. For storing data, we use SQLAlchemy (backed by SQLite).
1. When a user opens SpeakEasy, they are first sent to a landing page.
2. After pressing any key, they are taken to a training screen. Here they will record a 15-20 second message (ideally the one shown on screen) which will be used to create an embedding. This is accomplished with the Cartesia "Clone Voice from Clip" endpoint. A Cartesia Voice (abbreviated as "Voice") is created from the returned embedding (using the "Create Voice" endpoint) which contains a Voice ID. This Voice ID is used to uniquely identify each voice, which itself is in a specific language. The database then stores this voice and creates a new user which this voice is associated with.
3. When the recording is complete and the user clicks "Next", they will be taken to a split screen where they can choose between the two main program functions of SpeakEasy.
4. If the user clicks on the vocal translation route, they will be brought to another recording screen. Here, they record a sound in English which is then sent to the backend. The backend encodes this MP3 data into PCM, sends it to a speech-to-text API, and then transfers it into a text translation API. Separately, the backend trains a new Voice (using the Cartesia Localize Voice endpoint, wrapped by get/create Voice since Localize requires an embedding instead of a Voice ID) with the intended target language and uses the Voice ID it returns. The backend then sends the translated text to the Cartesia "Text to Speech (Bytes)" endpoint using this new Voice ID. This is then played back to the user as a response to the original backend request. All created Voices are stored in the database and associated with the current user. This is done so returning users do not have to retrain their voices in any language. (A rough sketch of this chain appears after this list.)
5. If the user clicks on the language learning route, they will be brought to a page which displays a randomly selected phrase in a certain language. It will then query the Cartesia API to pronounce that phrase in that language, using the preexisting Voice ID if available (or prompting to record a new phrase if not). A request is made to the backend to input some microphone input, which is then compared to Cartesia's estimation of your speech in a target language. The backend then returns a set of feedback using the difference between the two pronunciations, and displays that to the user on the frontend.
6. After each route is selected, the user may choose to go back and select either route (the same route again or the other route).
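A rough sketch of the clone-voice chain from steps 2 and 4 above (the endpoint paths, header names, and payload fields are assumptions reconstructed from this write-up, not verified against Cartesia's current documentation):

```python
import requests

BASE = "https://api.cartesia.ai"       # assumed base URL
HEADERS = {"X-API-Key": "..."}          # placeholder credentials

def clone_and_create_voice(clip_path, name):
    # Step 1 of the chain: clone from clip returns only an embedding
    with open(clip_path, "rb") as f:
        emb = requests.post(f"{BASE}/voices/clone/clip", headers=HEADERS,
                            files={"clip": f}).json()["embedding"]
    # Step 2: create a Voice from the embedding to obtain a Voice ID
    voice = requests.post(f"{BASE}/voices", headers=HEADERS,
                          json={"name": name, "embedding": emb}).json()
    return voice["id"]

def speak(voice_id, text, language):
    payload = {
        "voice": {"mode": "id", "id": voice_id},
        "transcript": text,
        "language": language,
        # Includes the redundant MP3 `encoding` quirk noted below
        "output_format": {"container": "mp3", "encoding": "mp3",
                          "sample_rate": 44100},
    }
    return requests.post(f"{BASE}/tts/bytes", headers=HEADERS,
                         json=payload).content  # raw audio bytes
```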
## Cartesia Issues
We were very impressed with Cartesia and its abilities, but noted a few issues which would improve the development experience.
* Clone Voice From Clip endpoint documentation
+ The documentation for the endpoint in question details a `Response` which includes a variety of fields: `id`, `name`, `language`, and more. However, the endpoint only returns the embedding in a dictionary. It is then required to send the embedding into the "Create Voice" endpoint to create an `id` (and other fields), which are required for some further endpoints.
* Clone Voice From Clip endpoint length requirements
+ The clip supplied to the endpoint in question appears to require a duration of greater than a second or two. See "Error reporting" for further details.
* Text to Speech (Bytes) endpoint output format
+ The TTS endpoint requires an output format be specified. This JSON object notably lacks an `encoding` field in the MP3 configuration which is present for the other formats (raw and WAV). The solution to this is to send an `encoding` field with the value for one of the other two formats, despite this functionally doing nothing.
* Embedding format
+ The embedding is specified as a list of 192 numbers, some of which may be negative. Python's JSON parser does not like the dash symbol and frequently encounters issues with this. If possible, it would be good to either allow this encoding to be base64 encoded, hashed, or something else to prevent negatives. Optimally embeddings do not have negatives, though this seems difficult to realize.
* Response code mismatches
+ Some response codes returned from endpoints do not match their listed function. For example, a response code of 405 should not be returned when there is a formatting error in the request. Similarly, 400 is returned before 404 when using invalid endpoints, making it difficult to debug. There are several other instances of this but we did not collate a list.
* Error reporting
+ If (most) endpoints return in JSON format, errors should also be returned in JSON format. This prevents many parsing issues and would simplify design. In addition, error messages are too vague to glean any useful information. For example, 500 is always "Bad request" regardless of the underlying error cause. This is the same thing as the error name.
## Future Improvements
In the future, it would be interesting to investigate the following:
* Proper authentication
* Cloud-based database storage (with redundancy)
* Increased error checking
* Unit and integration test coverage, with CI/CD
* Automatic recording quality analysis
* Audio streaming (instead of buffering) using WebSockets
* Mobile device compatibility
* Reducing audio processing overhead
|
## Inspiration
Our journey with language learning in class and on apps revealed their limitations—rote memorization and the absence of real-life conversations. Other language learners I met through Toastmasters Club, a public speaking club, echoed the same sentiment. Recognizing the importance of practical speaking skills and interactive learning, we set out to create DialogixAI, a platform that simulates conversational experiences with AI, making language learning more natural, engaging, and effective.
## What It Does
DialogixAI bridges the gap between short phrases or words learned on other language platforms and real-life conversations by enabling users to engage in voice-based conversations with an AI. The AI bot listens, responds, and provides instant feedback on grammar and expression for the user. This real-time feedback enhances speaking skills and boosts confidence.
## How We Built It
Leveraging cutting-edge AI technologies, including large language models (LLMs), DialogixAI processes user speech, generates contextually relevant responses, and evaluates language use for constructive feedback. We utilized open-source libraries and APIs, ensuring robust performance and accelerating development.
## Challenges We Ran Into
Developing an AI that can handle the nuances of human language, including accents, slang, and idiomatic expressions, was challenging. Ensuring the AI's responses feel natural and engaging required intricate integration. Technical hurdles surfaced in seamlessly combining various AI components and ensuring global scalability and accessibility.
## Accomplishments That We're Proud Of
We're proud of our clean UI and conquering the complexities of integrating the AI component into our web app.
## What We Learned
This project deepened our understanding of AI's educational potential. We gained insights into language processing complexities and the importance of user experience design in educational technologies. The journey taught us teamwork, problem-solving, and the iterative design and development process.
## What's Next for DialogixAI
Looking ahead, we aim to expand DialogixAI's capabilities to include more languages and dialects, making it accessible to a broader audience. We also plan to include an AI generated human avatar to simulate the interaction with a human in the future.
|
## Inspiration
There are millions of people around the world who have a physical or learning disability which makes creating visual presentations extremely difficult. They may be visually impaired, suffer from ADHD, or have conditions like Parkinson's. For these people, being unable to create presentations isn't just a hassle. It's a barrier to learning, a reason for feeling left out, or a career disadvantage in the workplace. That's why we created **Pitch.ai.**
## What it does
Pitch.ai is a web app which creates visual presentations for you as you present. Once you open the web app, just start talking! Pitch.ai will listen to what you say and, in real time, generate a slide deck based on the content of your speech, just as if you had a slideshow prepared in advance.
## How we built it
We used a **React** client combined with a **Flask** server to make our API calls. To continuously listen for audio to convert to text, we used a react library called “react-speech-recognition”. Then, we designed an algorithm to detect pauses in the speech in order to separate sentences, which would be sent to the Flask server.
The Flask server then uses multithreading to make several API calls simultaneously. First, the **MonkeyLearn** API is used to find the most relevant keyword in the sentence. The keyword is then sent to **SerpAPI** to find an image to add to the presentation. At the same time, an API call is sent to OpenAI's GPT-3 to generate a caption to put on the slide. The caption, keyword, and image of a single slide are combined into an object to be sent back to the client.
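A hedged sketch of that per-sentence fan-out (the three helper functions are stubs standing in for the MonkeyLearn, GPT-3, and SerpAPI calls):

```python
from concurrent.futures import ThreadPoolExecutor

def extract_keyword(s):  return s.split()[0]           # stub for MonkeyLearn
def generate_caption(s): return s.upper()              # stub for GPT-3
def find_image(k):       return f"https://img.example/{k}"  # stub for SerpAPI

def build_slide(sentence):
    with ThreadPoolExecutor(max_workers=3) as pool:
        # Caption generation runs in parallel with the keyword/image chain
        caption_f = pool.submit(generate_caption, sentence)
        keyword = pool.submit(extract_keyword, sentence).result()
        image_f = pool.submit(find_image, keyword)  # needs the keyword first
        return {"keyword": keyword,
                "caption": caption_f.result(),
                "image": image_f.result()}
```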
## Challenges we ran into
* Learning how to make dynamic websites
* Optimizing audio processing time
* Increasing efficiency of server
## Accomplishments that we're proud of
* Made an aesthetic user interface
* Distributing work efficiently
* Good organization and integration of many APIs
## What we learned
* Multithreading
* How to use continuous audio input
* How to use React hooks, Animations, Figma
## What's next for Pitch.ai
* Faster and more accurate picture, keyword and caption generation
* "Presentation mode”
* Integrate a database to save your generated presentation
* Customizable templates for slide structure, color, etc.
* Build our own web scraping API to find images
|
partial
|
ShopNifty version 1.0 10/01/2017
# ShopNifty
ShopNifty is a web app which simplifies and enhances clothes shopping! Simply enter the url of an image in the main screen search bar to receive visually similar results from popular online clothing stores. With one click of a button, receive dozens of query results that include a picture of the item captioned with the name of the item, price, and url to the specific result.
ShopNifty uses Microsoft Azure's Computer Vision API to process and tag images, and Webhose, a web scraping API, to collect data across multiple sites for items similar to the user's desired input image. Then, the resulting query data is parsed and outputted in a user-friendly format.
<https://drive.google.com/file/d/0B6MxdKMyF-ACMUZLSlFrSkpMRUk/view?usp=sharing>
|
## Inspiration
We were inspired by the daily challenge many people face when deciding what to wear. Whether it’s the desire to dress appropriately for the weather or an event, or simply a need for inspiration, people often waste time and effort in front of their closets. We wanted to make the process easier and more enjoyable by creating an app that not only considers the weather and vibe of the day but also maximizes the use of the clothes people already own.
## What it does
Our app, WINC (Walk-in Closet), generates outfit suggestions from a user’s own closet based on the user’s current weather and mood or vibe for the day. Users upload their clothing items into the app, and WINC curates complete outfits, giving them inspiration and maximizing the utility of their wardrobe. The app ensures that the outfits are not only fashionable but also practical, giving users more confidence in their daily dress choices. It also promotes sustainability by encouraging users to rediscover and restyle the clothing they already have.
## How we built it
* Frontend: We used Next.js with Typescript and styled it with Tailwind CSS to create a fast, responsive, and highly interactive user interface. This allowed us to focus on a smooth user experience across devices.
* Backend: The core of our application is built in Python using Flask, handling user requests, clothing item categorization, and outfit generation logic.
* Data Storage: SingleStore was implemented for its powerful and easy-to-use capabilities to manage our data and, most importantly, perform efficient vector searches to match the user-inputted vibe to the clothes in their wardrobe (a rough sketch of this query appears after this list).
* Machine Learning: We used Google Gemini to perform image segmentation and image-to-text analysis, which singles out the clothing from the background, tags items based on categories, and provides intelligent outfit suggestions.
* Weather Integration: OpenWeather was used to integrate real-time weather data, allowing the app to suggest weather-appropriate outfits.
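The vibe match reduces to a similarity query. A minimal sketch, assuming clothing embeddings live in a `clothes` table; the table, column names, and DSN are illustrative, not our production schema:

```python
import json
import singlestoredb as s2

def top_matches(vibe_embedding, user_id, k=5):
    conn = s2.connect("user:password@host:3306/winc")  # placeholder DSN
    cur = conn.cursor()
    cur.execute(
        """
        SELECT item_id,
               DOT_PRODUCT(embedding, JSON_ARRAY_PACK(%s)) AS score
        FROM clothes
        WHERE user_id = %s
        ORDER BY score DESC
        LIMIT %s
        """,
        (json.dumps(vibe_embedding), user_id, k),
    )
    return cur.fetchall()  # best-matching wardrobe items, highest score first
```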
## Challenges we ran into
This was our first time using Next.js with TypeScript, so we faced a learning curve as we developed the project. As a team of four working on both the front end and back end, we encountered challenges in merging the two sides to create a fully functional app. On the backend, we had issues storing images as blobs in the cloud and running them through segmentation, as well as connecting our two databases. We also struggled with getting CORS to work properly on Mac, which added to the complexity of the project.
## Accomplishments that we're proud of
We’re proud of building WINC from the ground up, especially the backend, where we tackled complex tasks like image storage, segmentation, and database integration. Also creating a clean and intuitive UI/UX design, ensuring the app is easy to use and visually appealing. Despite the challenges of learning Next.js and TypeScript for the first time, we worked through these obstacles as a team and delivered a fully functioning app.
## What we learned
It was our first time using Next.js along with TypeScript, so through this project, we learned how to navigate using these frameworks and languages, deepening our understanding of building responsive and performant web applications. For the backend, it was the first time using SingleStore and working with Google Gemini, so we became more comfortable working with databases and, more generally, adapting to new technologies. We also learned the importance of using version control effectively — such as making atomic commits, and git pushing and pulling often so there won’t be millions of conflicts.
## What's next for WINC
Future features of WINC will include the ability to add more items, like shoes, one-pieces, and accessories, to create complete outfits. Within the dashboard, it’ll allow users to create their own outfits and update their Outfit Log. This log will train the AI to suggest more personalized outfits and avoid duplicating looks within an appropriate short timeframe. Users will also be able to save and favorite their outfits, as well as edit tags on their clothing items. There will also be a mobile app that will enable users to easily snap pictures of their clothes directly from their phones.
|
## Inspiration & What it does
You're walking down the road, and see a belle rocking an exquisite one-piece. *"Damn, that would look good on me (or my wife)"*.
You go home and try to look for it: *"beautiful red dress"*. Google gives you 110,000,000 results in 0.54 seconds. Well that helped a lot. You think of checking the fashion websites, but the number of these e-commerce websites makes you refrain from spending more than a few hours. *"This is impossible..."*. Your perseverance only lasts so long - you give up.
Fast forward to 2017. We've got everything from Neural Forests to Adversarial Networks.
You go home to look for it: Launch **Dream.it**
You make a chicken-sketch of the dress - you just need to get the curves right. You select the pattern on the dress and add a couple of estimates about it. **Dream.it** synthesizes elegant dresses based on your sketch. It then gives you search results from different stores based on similar dresses, and an option to get one custom-made. You love the internet. You love **Dream.it**. It's a wonderful place to make your life wonderful.
Sketch and search for anything and everything from shoes and bracelets to dresses and jeans: all at your slightest whim. **Dream.it** lets you buy existing products or get a new one custom-made to fit you.
## How we built it
**What the user sees**
**Dream.it** uses a website as the basic entry point into the service, which is run on a **linode server**. It has a chatbot interface, through which users can initially input the kind of garment they are looking for with a few details. The service gives the user examples of possible products using the **Bing Search API**.
The voice recognition for the chatbot is created using the **Bing Speech to Text API**. This is classified using a multiclassifier from **IBM Watson Natural Language Classifier** trained on custom labelled data into the clothing / accessory category. It then opens a custom drawing board for you to sketch the contours of your clothing apparel / accessories / footwear and add color to it.
Once the sketch is finalized, the image is converted to more detailed higher resolution image using [**Pixel Recursive Super Resolution**](https://arxiv.org/pdf/1702.00783.pdf).
We then use **Google's Label Detection Vision ML** and **IBM Watson's Vision** APIs to generate the most relevant tags for the final synthesized design which give additional textual details for the synthesized design.
The tags, in addition to the image itself, are used to scour the web for similar dresses available for purchase.
**Behind the scenes**
We used a **Deep Convolutional Generative Adversarial Network (GAN)** which runs using **Theano** and **cuDNN** on **CUDA**. This is connected to our web service through websockets. The brush strokes from the drawing pad on the website get sent to the **GAN** algorithm, which sends back the synthesized fashion design to match the user's sketch.
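A stripped-down sketch of that bridge using the `websockets` package; `synthesize` stands in for the Theano GAN forward pass, and the JSON message format is illustrative:

```python
import asyncio
import json
import websockets

async def handle(ws):
    async for message in ws:
        strokes = json.loads(message)      # brush strokes from the drawing pad
        image_bytes = synthesize(strokes)  # hypothetical GAN forward pass on the GPU
        await ws.send(image_bytes)         # synthesized design back to the browser

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()             # serve forever

asyncio.run(main())
```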
## Challenges we ran into
* Piping all the APIs together to create a seamless user experience. It took a long time to optimize the data (*mpeg1*) we were sending over the websocket to prevent lags and bugs.
* Running the Machine learning algorithm asynchronously on the GPU using CUDA.
* Generating a high-quality image of the synthesized design.
* Customizing **Fabric.js** to send data appropriately formatted to be processed by the machine learning algorithm.
## Accomplishments that we're proud of
* We reverse engineered the **Bing real-time Speech Recognition API** to create a Node.js library. We also added support for **partial audio frame streaming for voice recognition**.
* We applied transfer learning from Deep Convolutional Generative Adversarial Networks and implemented constraints on its gradients and weights to customize user inputs for synthesis of fashion designs.
* Creating a **Python-Node.js** stack which works asynchronously with our machine learning pipeline
## What we learned
This was a multi-faceted educational experience for all of us in different ways. Overall:
* We learnt to asynchronously run machine learning algorithms without threading issues.
* Setting up API calls and other infrastructure for the app to run on.
* Using the IBM Watson APIs for speech recognition and label detection for images.
* Setting up a website domain, web server, hosting a website, deploying code to a server, connecting using web-sockets.
* Using pip, npm; Using Node.js for development; Customizing fabric.js to send us custom data for image generation.
* Explored machine learning tools and learnt how to utilize them most efficiently.
* Setting up CUDA, cuDNN, and Theano on an Ubuntu platform to use with ML algorithm.
## What's next for Dream.it
Dream.it currently is capable of generating shoes, shirts, pants, and handbags from user sketches. We'd like to expand our training set of images and language processing to support a greater variety of clothing, materials, and other accessories.
We'd like to switch to a server with GPU support to run the cuDNN-based algorithm on CUDA.
The next developmental step for Dream.it is to connect it to a 3D fabric printer which can print the designs instantly without needing the design to be sent to manufacturers. This can be supported at particular facilities in different parts of the country to enable us to be in control of the entire process.
|
losing
|
## Inspiration
I love videogames. There are so many things that we can't do in the real world because we are limited to the laws of physics. There are so many scenarios that would be too horrible to put ourselves in if it were the real world. But in the virtual world of videogames, you can make the impossible happen quite easily. But beyond that, they're just fun! Who doesn't enjoy some stress-relief from working hard at school to go and game with your friends? Especially now with COVID restrictions, videogames are a way for people to be interconnected and to have fun with each other without worrying about catching a deadly disease.
## What it does
The Streets of Edith Finch is a first-person shooter, battle royale style game built with the impressive graphics of Unreal Engine 4. Players are spawned into the unique level design where they can duke it out to be the last man/woman standing.
## How I built it
Using Unreal Engine 4 to simulate the physics and effects and develop the frameworks for actors. Textures are community-based from the Epic Games Community. Functionality, modes, and game rules were built in C++ and Blueprints (Kismet) and developed directly in the engine's source code.
## Challenges I ran into
Unreal Engine has A LOT of modules and classes, so navigation was definitely not easy, especially since this was my first time working with it. Unreal Engine also introduces a lot of Unreal-specific syntax that does not follow traditional C++, which was another learning curve. Finally, simulating the physics behind ragdolls and pushing over certain entities was also difficult to tune.
## Accomplishments that I'm proud of
The fact that this is actually playable! Was not expecting the game to work out as well as it did given the limited experience and lack of manpower being a solo group.
## What I learned
I learned that game development on its own is a whole other beast. The coding is merely a component of it. I had to consider textures and shadow rendering, animations, physics, and playability, all on top of managing module cohesion and information hiding in the actual code.
## What's next for The Streets of Edith Finch
Make the level design much larger - there wasn't enough time this time around. This will allow support for more players (the level is small, so only about 2-3 players fit before it gets too hectic). Spawn points also need to be fixed, as some players will spawn at the same point. Crouching and sprinting animations need to be implemented, as well as ADSing. Finally, player models are currently missing textures as I couldn't find any good ones in the community right now that weren't >$100 lol.
|
## What it does
ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low poly experience that is the first of its kind to bring an accessible 3D experience to the Google Cardboard platform. It's a great way to become a creative in 3D without dropping a ton of money on VR hardware.
## How we built it
We used Google Daydream and extended the demo scene to our liking. We used Unity and C# to create the scene and handle all the user interactions. We did painting and dragging of objects around the scene using a common graphics technique called raycasting. We colored the scene by changing vertex colors. We also created a palette that lets you change tools and colors and insert meshes. We hand-modeled the low-poly meshes in Maya.
## Challenges we ran into
The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but we had no way of accessing the entire face from the way the Unity mesh was set up. We then looked into an asset that would help us paint directly on top of the texture, but it ended up being too slow; textures tend to render too slowly. So we ended up using a vertex shader that interpolates between vertex colors, allowing real-time painting of meshes: we change all the vertices that belong to a fragment of a face. The vertex shader was the fastest way we could render real-time painting and emulate the painting of a triangulated face.

Our second biggest challenge was using raycasting to find the object we were interacting with. We had to navigate the Unity API and get acquainted with its Physics raycaster and mesh colliders to properly implement all interaction with the cursor.

Our third challenge was making use of the controller that Google Daydream supplied us with - it was great and very sandboxed, but we were somewhat limited in terms of functionality. We had to find a way to change all colors, insert different meshes, and interact with the objects using only two available buttons.
## Accomplishments that we're proud of
We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction.
## What we learned
How to optimize for a certain platform - in terms of UI, geometry, textures and interaction.
## What's next for ColoVR
Hats, interactive collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) where users can explore what past users have done and see the changes other users make in real time.
|
## Inspiration
This project was inspired by games which aim to make computer-programming concepts accessible to young children. Unlike past games, however, we used Augmented Reality to bring the user's instructions to life. By introducing younger people to code in a simple way, free from complex syntax, we hope to inspire people who would otherwise never be exposed to coding and to encourage them to consider a career in the software development industry.
## What it does
The game uses the user's camera to create an augmented reality playing field, consisting of a car and a series of obstacles (walls). Users can use predefined functions to create a sequence of instructions which are then executed by an augmented reality car. The goal is to navigate the car through increasingly complex levels without hitting any walls. In addition, moves can be strung together and repeated using loop structures to reduce repeated work. The sequence of instructions is meant to introduce users to some fundamental programming concepts like line-by-line execution of code, using for loops, and calling functions.
## How we built it
We used Google's ARCore and Unity to design the game, with code written in C#.
## Challenges we ran into
AR collision detection (solved), creating an interactive move list (in progress), and limited examples of ARCore.
## Accomplishments that we're proud of
Building a working AR game as a team of two who had never done AR development before.
## What we learned
Neither of us had used ARCore or Unity before this weekend. We were both new to Augmented Reality and mostly new to game development.
## What's next for Code[cAR]
Clean up the UI, add more levels and movement possibilities (e.g. jumping), improve AR object placement. We also want to add a "three-star" system for levels, where to get the maximum score users have to beat the level using the most efficient code possible.
|
partial
|
Have you ever wanted to check the weather quickly? Wondered if you should grab an umbrella or put on some sunscreen for the day? Weather to Bring it lets you know in a quick and effective manner 'weather' you need to plan ahead. Displaying the weather with a minimalistic theme, with only the necessary details you need to answer the pressing question: "Should I bring an extra layer of clothing?".
Our team wanted to learn how to use React since none of us had worked with it before, and the Weather Network API sounded fun to work with. We also love MSPaint.
|
## Inspiration
Everyone is bound to run into the same problem: you can't decide what to wear! Plus, the weather is constantly changing, meaning you can never comfortably weather the same outfits all year round. Our team seeks to change that by creating an application that inspires people to look into new styles based on the local weather.
## What it does
WeatherWear is a weather forecast application that helps the user to "Get Inspired!" on outfit ideas based on the current day's forecast.
## How we built it
We used the OpenWeather API to collect data about current and future forecasts at the user's location. When the user opens the application, they are prompted with a message asking for permission to use their location. Upon granting access, their longitude and latitude are used to pinpoint their location. The API then provides the program with the region's forecast data in 3-hour intervals for the next 5 days. From here, the temperature in Kelvin is converted to Celsius (which can also be changed to Fahrenheit). Then, the weather condition ID is used to find a corresponding icon based on weather conditions (e.g. cloudy, snowing, rainy, etc.). Finally, the data is analyzed to present appropriate outfit ideas to the user.
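The flow can be sketched in a few lines, shown here in Python for brevity (the app itself is JavaScript); the endpoint is OpenWeather's 5 day / 3 hour forecast, with a placeholder API key:

```python
import requests

def get_forecast(lat, lon, api_key="<key>"):
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/forecast",
        params={"lat": lat, "lon": lon, "appid": api_key},
    )
    resp.raise_for_status()
    for entry in resp.json()["list"]:             # 3-hour intervals over 5 days
        celsius = entry["main"]["temp"] - 273.15  # API returns Kelvin by default
        weather_id = entry["weather"][0]["id"]    # condition code drives the icon
        print(entry["dt_txt"], round(celsius, 1), weather_id)
```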
## Challenges we ran into
The main challenge we ran into was displaying the proper data from the API to our webpage. The API that we used gave us a wider quantity of information than needed. To combat this, we experimented with different methods in order to take exclusively the key pieces of information.
## Accomplishments that we're proud of
As a team, we can proudly say that learning how to use an API for the first time was a significant accomplishment for our project. In addition, we were also quite satisfied with how the aesthetics and design of the application had turned out.
## What we learned
Going into this hackathon, all of our team members had only had experience with the basics of JavaScript, HTML, and CSS. As such, a majority of the code was learned whilst developing the application. Furthermore, we learned what APIs are and how to integrate them using JavaScript.
## What's next for Working Weather App
Our next step to improve WeatherWear is to get the images through a browsing tool. We would use a separate tool in order to browse sites like Pinterest and gather a greater range of content. Theoretically, the user would be able to continuously refresh the page for their next piece of outfit inspiration.
|
### 💭 Inspiration
We realized the need for an on-demand snow removal service after helping an elderly neighbor clean her steps and driveway. The idea came to us as we saw the struggle and frustration she faced due to the overwhelming amount of snow that hadn't been cleared, which was preventing her from leaving her house.
### 📱 What it does
Our web app provides a streamlined and efficient method for addressing snow removal needs for driveways, sidewalks, steps, cars, and other areas surrounding one's property. It is particularly advantageous for people who simply don't have time, older adults, those with disabilities, and anyone incapable of removing snow themselves. Additionally, it serves as a dependable alternative during instances of high demand and snow removal truck delays. The on-demand nature of our service offers convenience, adaptability, and cost-efficiency to our clients, addressing their snow removal needs in a timely manner.
### 🎯 Accomplishments that we’re proud of
In just 24 hours, we managed to build up an idea into a fully functional demo that we're extremely proud of. Despite encountering obstacles, and not being able to add every feature we wanted to, we had a blast working together as a team, we had great communication and planning skills which effectively helped us find solutions when blocked, and ultimately learned the value of teamwork.
### 🚧 Challenges we ran into
* Adding a payment processing system
* Integrating the Mapbox API
* Designing UI/UX that flows nicely with our idea
### 💡 What we learned
We learned that sleep is important, and we also learned that teamwork is extremely valuable to complete a hackathon within 24 hours. Team problem-solving and task-assigning were the reasons we were able to complete our app.
### 🔮 What’s next for Snowmate
In the future, we plan on improving our Web app by integrating helpful features including weather alerts, which would send notifications to users when snow is forecasted. This will help them to take action in advance and avoid getting stuck in the snow. We would also like to include customizable service options, which would offer a range of service options and the ability for users to customize their service based on their specific needs.
### 🔨 Built with
Node.js and TypeScript for the backend, React with TypeScript for the frontend, hosted on Google Cloud.
|
losing
|
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that anyone can use to produce any tile-based NumPy game. Our first offering is *StickyJump*, a 2D platformer whose layout can be changed on the fly by placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's Contour Recognition software to recognize the borders of a projector image and the position of human placed sticky notes. We then use a matrix transformation scheme to ensure that the positioning of the sticky notes align with the projector image so that our character can appear as if he is standing on top of the sticky notes. We then have code for a simple platformer that uses the sticky notes as the platforms our character runs, jumps, and interacts with!
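A condensed sketch of the alignment step in OpenCV, assuming the projector image's four corners have already been detected; the sticky-note color range and game resolution are illustrative:

```python
import cv2
import numpy as np

def camera_to_game(frame, screen_corners, game_size=(640, 480)):
    # screen_corners: 4x2 array of the projector image's corners in camera
    # pixels, ordered TL, TR, BR, BL.
    dst = np.float32([[0, 0], [game_size[0], 0],
                      [game_size[0], game_size[1]], [0, game_size[1]]])
    H = cv2.getPerspectiveTransform(np.float32(screen_corners), dst)
    warped = cv2.warpPerspective(frame, H, game_size)

    # Threshold on the sticky-note color, then take contour bounding boxes
    # as platform tiles in game coordinates.
    hsv = cv2.cvtColor(warped, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))   # yellow-ish notes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```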
## How we built it
We split our team of four into two sections, one half that works on developing the OpenCV/Data Transfer part of the project and the other half who work on the game side of the project. It was truly a team effort.
## Challenges we ran into
The biggest challenge we ran into was that many of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project. Luckily, some very gracious mentors came out and helped us get things sorted out! We also first attempted the game half of the project in Unity, which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about Python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color.
|
## Inspiration
We wanted to make the interactions with our computers more intuitive while giving people with special needs more options to navigate in the digital world. With the digital landscape around us evolving, we got inspired by scenes in movies featuring Tony Stark, where he interacts with computers within his high-tech office. Instead of using a mouse and computer, he uses hand gestures and his voice to control his work environment.
## What it does
Instead of a mouse, Input/Output Artificial Intelligence, or I/OAI, uses a user's webcam to move their cursor to wherever their face OR hand is pointing, using machine learning.
Additionally, I/OAI allows users to map their preferred hand movements for commands such as "click", "minimize", "open applications", "navigate websites", and more!
I/OAI also allows users to input data using their voice, so they don't need a keyboard and mouse. This increases accessibility for those who don't readily have access to these peripherals.
## How we built it
* Face tracker -> Dlib
* Hand tracker -> MediaPipe
* Voice recognition -> Google Cloud
* Graphical user interface -> tkinter
* Mouse and keyboard simulation -> pyautogui
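A bare-bones sketch of the hand-driven pointer using the same libraries; smoothing, gesture mapping, and the face tracker are omitted:

```python
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]   # index fingertip
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    cv2.imshow("I/OAI", frame)
    if cv2.waitKey(1) == 27:                                # Esc to quit
        break
```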
## Challenges we ran into
Running this many programs at the same time slowed things down considerably, so we had to selectively choose which ones to keep during implementation. We solved this by using multithreading and carefully investigating efficiency.
We also had a hard time mapping the face because of the angles of rotation of the head, increasing the complexity of the matching algorithm.
## Accomplishments we're proud of
We were able to implement everything we set out to do in a short amount of time, even though there were a lot of integrations between multiple frameworks and our own algorithms.
## What we learned
How to use multithreading for multiple trackers, OpenCV for easy camera frames, tkinter for GUI building, and pyautogui for automation.
## What's next for I/OAI
We need to figure out a way to incorporate features more efficiently, or get a supercomputer like Tony Stark!
By improving these features, people will gain more accessibility at their computers by simply downloading a program instead of buying expensive products like an eye tracker.
|
## Inspiration
As we have seen through our university careers, there are students who suffer from disabilities who can benefit greatly from accessing high-quality lecture notes. Many professors struggle to find note-takers for their courses which leaves these students with a great disadvantage. Our mission is to ensure that their notes increase in quality, thereby improving their learning experiences - STONKS!
## What it does
This service automatically creates and updates a Google Doc with text-based notes derived from the professor's live handwritten lecture content.
## How we built it
We used Google Cloud Vision, OpenCV, a camera, a Raspberry-Pi, and Google Docs APIs to build a product using Python, which is able to convert handwritten notes to text-based online notes.
At first, we used a webcam to capture an image of the handwritten notes. This image was then parsed by the Google Cloud Vision API to detect characters, which were transcribed into text-based words in a new text file. This text file was then read, and its contents were sent to a new Google Doc, which is dynamically updated as the professor continues to write their notes.
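A trimmed sketch of the capture, OCR, and Docs-update loop; the document ID and an already-authorized `docs_service` resource are assumed to exist:

```python
import cv2
from google.cloud import vision

def frame_to_text(frame):
    _, buf = cv2.imencode(".jpg", frame)
    client = vision.ImageAnnotatorClient()
    response = client.document_text_detection(
        image=vision.Image(content=buf.tobytes()))
    return response.full_text_annotation.text

def append_to_doc(docs_service, doc_id, text):
    # Inserts at index 1, i.e. the top of the document body.
    docs_service.documents().batchUpdate(
        documentId=doc_id,
        body={"requests": [{"insertText": {
            "location": {"index": 1}, "text": text + "\n"}}]},
    ).execute()
```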
## Challenges we ran into
One of the major challenges that we faced was strategically dividing tasks amongst the team members in accordance with each individuals' expertise. With time, we were able to assess each others' skills and divide work accordingly to achieve our goal.
Another challenge that we faced was that the supplies we originally requested were out of stock (Raspberry-Pi camera); however, we were able to improvise by getting a camera from a different kit.
One of the major technical challenges we had to overcome was receiving permissions for the utilization of Google Docs APIs to create and get access to a new document. This was overcome by researching, testing and debugging our code to finally get authorization for the API to create a new document using an individual's email.
## Accomplishments that we are proud of
The main goal of STONKS was accomplished as we were able to create a product that will help disabled students to optimize their learning through the provision of quality notes.
## What we learned
We learned how to utilize Google Cloud Vision and OpenCV which are both extremely useful and powerful computer vision systems that use machine learning.
## What's next for STONKS?
The next step for STONKS is distinguishing between handwritten texts and visual representations such as drawings, charts, and schematics. Moreover, we are hoping to implement a math-based character recognition set to be able to recognize handwritten mathematical equations.
|
winning
|
## Inspiration
A playlist that can make everyone happy? Can that even exist? Well, since music is subjective, all you have to do is play the right song, not the best one; that's where Equalist comes in. The goal was an app that makes the best collaborative music playlist among friends without them having to enter their favorite songs and artists, making every road trip more about having fun and less about who gets to control the radio.
## What it does
Allows Spotify users to create a group playlist in which all of their music tastes are represented equally while still being enjoyable for the group. All of this can be done in 3 steps and far less time than it takes to collaborate manually.
## How we built it
Since time was of the essence, we used Flutter web because it let us quickly build the UI and ship the product as a PWA, allowing Equalist to be installed on any platform (iOS, Android, PC, and Mac). The frontend is deployed on Netlify. The backend is built with FastAPI and uses MongoDB as the database.
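As an illustration, a vote endpoint in FastAPI backed by MongoDB can be as small as the sketch below; the schema and collection names are assumptions, not our production code:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from pymongo import MongoClient

app = FastAPI()
votes = MongoClient("mongodb://localhost:27017")["equalist"]["votes"]

class Vote(BaseModel):
    playlist_id: str
    track_id: str
    user_id: str

@app.post("/vote")
def cast_vote(vote: Vote):
    # One vote per user per track: upsert instead of blind insert.
    votes.update_one(
        {"playlist_id": vote.playlist_id, "track_id": vote.track_id,
         "user_id": vote.user_id},
        {"$set": vote.dict()},
        upsert=True,
    )
    return {"ok": True}
```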
## Challenges we ran into
The main challenge was Flutter web: it's a new technology, and not all the standard libraries and protocols are readily available, but we still managed to make it in time. Another hassle was the Spotify API, because it was our first time working with OAuth.
## Accomplishments that we're proud of
A huge accomplishment was simply shipping the product we initially planned. Usually at hackathons we end up doing only 20% of what we planned, but this time we did a lot more.
## What we learned
Although Flutter web is still in its early stages, it has great potential to become the number one thing to develop in, and we are glad to be early adopters.
## What's next for Equalist
Asynchronous invites, where the playlist could continuously be modified even after it was created, for as long as the invite link stays active. This could be super useful for large parties, and it's a feature we intend to implement soon.
We also would like to publish this as a discord bot.
|
## Inspiration 💡
Our project was inspired by the warm experiences of the past, where music played on boomboxes brought people together in shared moments of joy. We aimed to recapture this sense of nostalgia, blending the charm and simplicity of the OG music gatherings with the inclusivity and innovation of modern technology. By reviving the spirit of the classic boombox, our project seeks to rekindle the collective joy of music in a way that resonates with both past and present generations.
## What it does 🤝
Our revamped experience redefines group listening by integrating modern technology. It democratizes music selection through a voting system, allowing participants to collectively choose songs via an intuitive, user-friendly interface. This fusion of nostalgic charm and contemporary functionality creates a unique, inclusive environment where everyone's musical taste has a chance to shine and the group, rather than a single individual, is given a voice.
## How we built it 👷
We built our system by using the Spotify API via the Spotipy Python package, enabling us to access a vast music library and use Spotify's recommendation algorithms. As any DJ can tell you, every crowd is different. We fine-tuned the recommendation parameters after each communal vote to ensure the music selection remained dynamic and reflective of the current session’s taste. The backend, developed with Flask, efficiently managed the voting system and user interactions, while the Next.js frontend provided a seamless, nostalgic yet modern user experience, making music selection engaging and intuitive for all participants.
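A sketch of that per-session tuning, assuming the crowd's votes have already been reduced to Spotify audio-feature targets (the aggregation itself is omitted, and the session keys are illustrative):

```python
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

def next_candidates(seed_track_ids, session):
    # session aggregates the crowd's votes into feature targets, e.g.
    # upvoted high-energy tracks push target_energy upward.
    return sp.recommendations(
        seed_tracks=seed_track_ids[:5],          # the API allows at most 5 seeds
        target_energy=session["energy"],
        target_valence=session["valence"],
        target_danceability=session["danceability"],
        limit=10,
    )["tracks"]
```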
## Challenges we ran into 🚧
Unfortunately, we got rate limited by the Spotify API more than once which led to us having delays in bug-fixing, and ultimately a non-functional product.
## Accomplishments that we're proud of 🍾
We're proud of how seamlessly we integrated diverse technologies to recreate the communal spirit of the classic boombox with our own modern twist. Our successful implementation of the Spotify API for dynamic music recommendations, coupled with the Flask backend for real-time voting, taught us so much technically. Additionally, crafting an intuitive and nostalgically themed user interface with Next.js taught us a lot about how to appeal to our audience in the best way possible. We loved using voting to add something we thought was innovative and different that we haven’t seen before.
## What we learned 🎓
The most important lesson we learned is to be wary of third party libraries before using them. Several issues have been reported with Spotipy improperly handling errors and spamming the Spotify API, causing a rate limit. If we knew this before using the library, we would have written HTTPS requests ourselves, leaving all behavior in our control.
## What's next for DemocracyDJ 🚀
In light of the challenges we faced, DemocracyDJ would greatly benefit from a rewrite and more extensive testing. Further, we'd need more lenient access to the Spotify API or some other way to access music, whether through another service entirely or by manipulating files directly on the host machine.
|
## 💫 Inspiration
Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours.
We present to you.... **Locall!**
## 🏘 What it does
Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks, and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours!
For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help out! By using Locall, she's saving money on fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours.
## 🛠 How we built it
We first prototyped our app design using Figma, then moved on to Flutter for the actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase.
## 🦒 What we learned
Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app!
## 📱 What's next for Locall
* We would want to train a Tensorflow model to better recommend services to users, as well as improve the user experience
* Implementing chat and payment directly in the app would be helpful to improve requests and offers of services
|
losing
|
## Inspiration
Globally, over 92 million tons of textile waste are generated annually, contributing to overflowing landfills and environmental degradation. What's more, the fashion industry is responsible for 10% of global carbon emissions, with fast fashion being a significant contributor due to its rapid production cycles and disposal of unsold items. The inspiration behind our project, ReStyle, is rooted in the urgent need to address the environmental impact of fast fashion. Witnessing the alarming levels of clothing waste and carbon emissions prompted our team to develop a solution that empowers individuals to make sustainable choices effortlessly. We believe in reshaping the future of fashion by promoting a circular economy and encouraging responsible consumer behaviour.
## What it does
ReStyle is a revolutionary platform that leverages AI matching to transform how people buy and sell pre-loved clothing items. The platform simplifies the selling process for users, incentivizing them to resell rather than contribute to the environmental crisis of clothing ending up in landfills. Our advanced AI matching algorithm analyzes user preferences, creating tailored recommendations for buyers and ensuring a seamless connection between sellers and buyers.
## How we built it
We used React Native and Expo to build the front end, creating different screens and components for the clothing matching, camera, and user profile functionality. The backend functionality was made possible using Firebase and the OpenAI API. Each user's style preferences are saved in a Firebase Realtime Database, as are the style descriptions for each piece of clothing. When a user takes a picture of a piece of clothing, the OpenAI API is called to generate a description for it, and this description is saved to the database. On the home page, the user sees the top pieces of clothing that match their style, retrieved from the database with matches generated using the OpenAI API.
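A minimal sketch of the description step in Python; the model name and prompt are assumptions, not necessarily what we shipped:

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def describe_clothing(image_path):
    with open(image_path, "rb") as f:
        data_url = "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder vision-capable model
        messages=[{"role": "user", "content": [
            {"type": "text",
             "text": "Describe this clothing item's style, colour, and category."},
            {"type": "image_url", "image_url": {"url": data_url}},
        ]}],
    )
    return resp.choices[0].message.content  # saved to the Realtime Database
```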
## Challenges we ran into
* Our entire team was new to the technologies we utilized.
* This included React Native, Expo, Firebase, OpenAI.
## Accomplishments that we're proud of
* Efficient and even work distribution between all team members
* A visually aesthetic, accurate, and working application!
## What we learned
* React Native
* Expo
* Firebase
* OpenAI
## What's next for ReStyle
Continuously refine our AI matching algorithm, incorporating machine learning advancements to provide even more accurate and personalized recommendations, and enable users to save clothing that they are interested in.
|
## Inspiration & What it does
You're walking down the road, and see a belle rocking an exquisite one-piece. *"Damn, that would look good on me (or my wife)"*.
You go home and try to look for it: *"beautiful red dress"*. Google gives you 110,000,000 results in 0.54 seconds. Well that helped a lot. You think of checking the fashion websites, but the number of these e-commerce websites makes you refrain from spending more than a few hours. *"This is impossible..."*. Your perseverance only lasts so long - you give up.
Fast forward to 2017. We've got everything from Neural Forests to Adversarial Networks.
You go home to look for it: Launch **Dream.it**
You make a chicken-sketch of the dress - you just need to get the curves right. You select the pattern on the dress and add a couple of estimates about it. **Dream.it** synthesizes elegant dresses based on your sketch. It then gives you search results from different stores based on similar dresses, and an option to get one custom-made. You love the internet. You love **Dream.it**. It's a wonderful place to make your life wonderful.
Sketch and search for anything and everything from shoes and bracelets to dresses and jeans: all at your slightest whim. **Dream.it** lets you buy existing products or get a new one custom-made to fit you.
## How we built it
**What the user sees**
**Dream.it** uses a website as the basic entry point into the service, which is run on a **linode server**. It has a chatbot interface, through which users can initially input the kind of garment they are looking for with a few details. The service gives the user examples of possible products using the **Bing Search API**.
The voice recognition for the chatbot is created using the **Bing Speech to Text API**. This is classified using a multiclassifier from **IBM Watson Natural Language Classifier** trained on custom labelled data into the clothing / accessory category. It then opens a custom drawing board for you to sketch the contours of your clothing apparel / accessories / footwear and add color to it.
Once the sketch is finalized, the image is converted to more detailed higher resolution image using [**Pixel Recursive Super Resolution**](https://arxiv.org/pdf/1702.00783.pdf).
We then use **Google's Label Detection Vision ML** and **IBM Watson's Vision** APIs to generate the most relevant tags for the final synthesized design which give additional textual details for the synthesized design.
The tags, in addition to the image itself, are used to scour the web for similar dresses available for purchase.
**Behind the scenes**
We used a **Deep Convolutional Generative Adversarial Network (GAN)** which runs using **Theano** and **cuDNN** on **CUDA**. This is connected to our web service through websockets. The brush strokes from the drawing pad on the website get sent to the **GAN** algorithm, which sends back the synthesized fashion design to match the user's sketch.
## Challenges we ran into
* Piping all the APIs together to create a seamless user experience. It took a long time to optimize the data (*mpeg1*) we were sending over the websocket to prevent lags and bugs.
* Running the Machine learning algorithm asynchronously on the GPU using CUDA.
* Generating a high-quality image of the synthesized design.
* Customizing **Fabric.js** to send data appropriately formatted to be processed by the machine learning algorithm.
## Accomplishments that we're proud of
* We reverse engineered the **Bing real-time Speech Recognition API** to create a Node.js library. We also added support for **partial audio frame streaming for voice recognition**.
* We applied transfer learning from Deep Convolutional Generative Adversarial Networks and implemented constraints on its gradients and weights to customize user inputs for synthesis of fashion designs.
* Creating a **Python-Node.js** stack which works asynchronously with our machine learning pipeline
## What we learned
This was a multi-faceted educational experience for all of us in different ways. Overall:
* We learnt to asynchronously run machine learning algorithms without threading issues.
* Setting up API calls and other infrastructure for the app to run on.
* Using the IBM Watson APIs for speech recognition and label detection for images.
* Setting up a website domain, web server, hosting a website, deploying code to a server, connecting using web-sockets.
* Using pip, npm; Using Node.js for development; Customizing fabric.js to send us custom data for image generation.
* Explored machine learning tools and learnt how to utilize them most efficiently.
* Setting up CUDA, cuDNN, and Theano on an Ubuntu platform to use with ML algorithm.
## What's next for Dream.it
Dream.it currently is capable of generating shoes, shirts, pants, and handbags from user sketches. We'd like to expand our training set of images and language processing to support a greater variety of clothing, materials, and other accessories.
We'd like to switch to a server with GPU support to run the cuDNN-based algorithm on CUDA.
The next developmental step for Dream.it is to connect it to a 3D fabric printer which can print the designs instantly without needing the design to be sent to manufacturers. This can be supported at particular facilities in different parts of the country to enable us to be in control of the entire process.
|
## Inspiration
When we started hearing that some of our teachers and friends had stopped reading the news because it was too depressing, we knew there was a problem. There had to be a way to stay informed while staying positive.
It has been proven that stressful news can harm both your mental and physical health. It can lead to anxiety, depression, fatigue, gut problems and more. Your body will also release stress hormones such as cortisol and adrenaline. Yet reading the news has become a part of everyone's daily routine. Can you imagine living without it?
That's why we created Sanguine: a website that allows you to specify the news you want to see and filters out stories so you see positive, customized news from your usual news provider.
## What it does
On Sanguine, users can filter their news by answering a short survey to get more specific and positive stories. Based on their optimism ranking and preferences, users will see different news stories that interest them.
## How we built it
We used Figma to create a visual representation of Sanguine. Using Python and the co:here API, we programmed the basic backend of the website. Our skills and time were limited, but we were able to write code that retrieves news articles using News API and prints specific, up-to-date articles based on the user's input.
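A sketch of the filtering idea: fetch headlines from News API, then score positivity with co:here's `classify` endpoint. The example sentences and labels here are illustrative stand-ins for real training data:

```python
import requests

NEWS_KEY, COHERE_KEY = "<news-api-key>", "<cohere-key>"

def positive_headlines(topic):
    articles = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": topic, "apiKey": NEWS_KEY, "pageSize": 20},
    ).json()["articles"]
    titles = [a["title"] for a in articles]

    scored = requests.post(
        "https://api.cohere.ai/v1/classify",
        headers={"Authorization": f"Bearer {COHERE_KEY}"},
        json={
            "inputs": titles,
            "examples": [  # classify needs a few examples per label
                {"text": "Community raises funds for new library", "label": "positive"},
                {"text": "Local teen wins national science prize", "label": "positive"},
                {"text": "Storm leaves thousands without power", "label": "negative"},
                {"text": "Markets tumble amid recession fears", "label": "negative"},
            ],
        },
    ).json()["classifications"]
    return [t for t, c in zip(titles, scored) if c["prediction"] == "positive"]
```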
## Challenges we ran into
The biggest challenge was finding team members. Because we both have limited coding experience (Python), we needed to find members who had skills in javascript and understand web development.
## Accomplishments that we're proud of
We’re proud of being able to take our idea and build a functioning prototype within a day with a team of just two people.
## What we learned
During Hack the North, we learned a lot about coding in Python using co:here. We had never even heard of co:here before coming to Hack the North, so being able to use it to create a functional program within a day was amazing. Our mentors guided us through the project and helped us learn how to use the co:here API. We also learned what new skills we need to develop before our next hackathon. Most important of all, we learned how much can be accomplished in two days and very little sleep.
## What's next for Sanguine: Positive News Stories
The next step is to turn our wireframe prototype into a working website. When we have more time, we plan to do this by using Django and React. We also plan to involve our readers more by allowing them to rank the suggested news sites. This will help improve the AI results and get users more engaged on our platform. The next version will have functionality that allows users to customize the type of news they receive. For example: 60% Science, 10% World News, 20% Sports, 10% Politics.
|
partial
|
We all believe in harnessing the power of data to help people make informed decisions. One of the biggest decisions that any American makes is where to live or which home to buy. We sought to use real estate data made available through Nasdaq datasets to make this decision easier and better informed.
## What it does
Given any zip code, our web application shows the desired data and has interactive charts to display housing data for a given zip code and time frame. It also provides links to information on local schools in the community.
## How we built it
We used Python/Flask for the backend and wrote a simple HTML frontend to display the form and graphs. We used web scraping to build the link that directs users to the top schools in the area.
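A minimal sketch of the Flask + mpld3 pattern for serving an interactive chart; `load_prices` is a hypothetical loader over the housing dataset:

```python
import matplotlib
matplotlib.use("Agg")               # headless rendering inside the server
import matplotlib.pyplot as plt
import mpld3
from flask import Flask

app = Flask(__name__)

@app.route("/chart/<zipcode>")
def chart(zipcode):
    dates, prices = load_prices(zipcode)   # hypothetical dataset loader
    fig, ax = plt.subplots()
    ax.plot(dates, prices)
    ax.set_title(f"Median home value, {zipcode}")
    ax.set_ylabel("USD")
    return mpld3.fig_to_html(fig)          # interactive pan/zoom in the browser
```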
## Challenges we ran into
There were a few challenges. The first was getting the graph to display -- there was an issue with the mpld3 python module, and we had to manually change the library to render the graph correctly. We also ran into challenges in how to collect information about local schools.
## What we learned
* Real estate data is a valuable resource for making crucial decisions about where to live.
* Having the right data at everyone's disposal can help people find affordable areas to live.
* How to display time series graphs in an interactive way.
* How to use large datasets like the Nasdaq datasets and visualize them in an easily understandable way.
|
## Inspiration
My friend and I needed to find an apartment in New York City over the summer. We found it very difficult to look through multiple listing pages at once, so we thought a bot that suggests apartments would be helpful. However, we did not stop there: we realized we could also use machine learning so the bot would learn what we like and suggest better apartments. That is why we decided to build RealtyAI.
## What it does
It is a Facebook Messenger bot that lets people search through Airbnb listings while learning what each user wants. By giving feedback to the bot, we learn your **general style**, so we are able to recommend apartments you are going to like, under your budget, in any city of the world :) We can also book the apartment for you.
## How I built it
Our app uses a Flask backend and Facebook Messenger to communicate with the user. The bot is powered by api.ai, and the ML is done on the backend with sklearn's Naive Bayes classifier.
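A minimal sketch of the preference learner: listing descriptions become bag-of-words documents and user feedback becomes the label. The feature choice and helper shapes here are illustrative:

```python
import pickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

model = make_pipeline(CountVectorizer(), MultinomialNB())

def train(listing_texts, labels):
    # e.g. "bright 1BR loft, exposed brick, near subway" -> "like"
    model.fit(listing_texts, labels)
    return pickle.dumps(model)  # blob stored per-user in the database

def rank(listings):
    # Column 1 is the "like" class (classes_ are sorted alphabetically:
    # "dislike", "like"), so higher means a better match for this user.
    liked = model.predict_proba([l["text"] for l in listings])[:, 1]
    return sorted(zip(liked, listings), key=lambda p: p[0], reverse=True)
```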
## Challenges I ran into
Our biggest challenge was using python's sql orm to store our data. In general, integrating the many libraries we used was quite challenging.
The next challenge we faced was time: our application was slow and timing out on multiple requests. So we implemented an in-memory cache of all the requests, but most importantly we redesigned the code to make it multi-threaded.
## Accomplishments that I'm proud of
Our workflow was very effective. Using Heroku, every commit to master was immediately deployed to the server, saving us a lot of time. In addition, we all managed the repo well and had few merge conflicts. We all used a shared database on AWS RDS, which saved us a lot of database schema migration nightmares.
## What I learned
We learned how to use Python in depth, integrating it with MySQL and sklearn. We also discovered how to spawn a database on AWS, and how to save classifiers to the database and reload them.
## What's next for Virtual Real Estate Agent
If we win, hopefully someone will invest! It can be used by companies to automatically arrange accommodations for people coming in for interviews, or by individuals who just want to find the best apartment for their own style!
|
## Inspiration
After being overwhelmed by the volume of financial education tools available, we discovered that the majority of products are focused on institutions or expensive. We decided there needed to be an easy way to learn about stocks in a more casual environment. Interested in the simplicity of Tinder's yes-or-no swiping mechanics, we decided to combine the two ideas to create Tickr!
## What it does
Tickr is a stock screening tool designed to help beginner retail investors discover their next trade! Using an intuitive yes-or-no discovery system built on swiping mechanics, Tickr is the next Tinder for stocks. For a more in-depth video demo, see our [original screen recorded demo video!](https://youtu.be/dU6rF8vymKE)
## How we built it
Our team created this web app using a Node and Express back end paired with a React front end. The back end of our project used 3 linked Supabase tables to host authenticated user information and static information about stocks listed on the New York Stock Exchange and NASDAQ. We also used the [Finnhub API](https://finnhub.io/) to get real-time metrics about the stocks we were showing our users.
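Sketched in Python for brevity (the project's backend is Node/Express), the real-time metric fetch against Finnhub's `/quote` endpoint looks roughly like this, with a placeholder API key:

```python
import requests

FINNHUB_KEY = "<api-key>"

def quote(symbol):
    data = requests.get(
        "https://finnhub.io/api/v1/quote",
        params={"symbol": symbol, "token": FINNHUB_KEY},
    ).json()
    return {
        "price": data["c"],                                          # current price
        "change_pct": 100 * (data["c"] - data["pc"]) / data["pc"],   # vs. previous close
    }
```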
## Challenges we ran into
Our biggest challenge was setting the scope into something that our team could complete in a weekend. We hadn't used Node and Express in a long time, so getting comfortable with our stack again took more time than we thought.
We were also completely new to Supabase and decided to try it out because it sounded really interesting. While Supabase turned out to be incredibly useful and user-friendly, its learning curve also took a bit more time than we expected.
## Accomplishments that we're proud of
The two accomplishments we are most proud of are our finished UI and successful integration of the Finnhub API. Drawing inspiration from Tinder, we were able to recreate a similar UI/UX design with minimal help from pre-existing libraries. Further, we were able to design our backend to make seamless API calls to fetch relevant data for our application.
## What we learned
During this project we learned a lot about the power of friendship and anime. Some of us learned what a market cap was and how to write a viable business proposal while others learned more about full stack development and how to host a database on Supabase.
Overall it was a very fun project and we're really glad we were able to get our MVP done 😁✌️
## What's next for Tickr
Our next goal for Tickr is to finish the aggregate news feed. This would entail a feed covering all stocks a user has swiped on, along with notifications. It would help improve our north star metrics of time spent on the platform and daily active users!
|
partial
|
## Inspiration
We built this project to give people up-to-date and easy-to-read information on the spread of COVID so that they could stay informed and take caution.
## What it does
Shows users COVID case information in each province, with case-number severity represented by colors.
## How we built it
We built the frontend with TypeScript and the backend, which fetches and manipulates the data, with Python.
## Accomplishments that we're proud of
Winning AlphaStart 2021 and creating a useful solution to an important problem.
|
## Inspiration
I was interested in exploring the health datasets given by John Snow Labs in order to give users the ability to explore meaningful datasets. The datasets selected were Vaccination Data Immunization Kindergarten Students 2011 to 2014, Mammography Data from Breast Cancer Surveillance Consortium, 2014 State Occupational Employment and Wage Estimate dataset from the Bureau of Labor Statistics, and Mental Health Data from the CDC and Behavioral Risk Factor Surveillance System.
Vaccinations are crucial to ending disease, reduce mortality and morbidity rates, and have the potential to save future generations from serious illness. By visualizing the dataset, users can better understand the current state of vaccinations and help create policies to improve struggling states. Mammography is equally important in preventing health risks, and mental health is an important factor in determining the well-being of a state. Similarly, the visualizations allow users to better understand correlations between preventative steps and cancerous outcomes.
## What it does
The data visualization allows users to observe possible impacts of preventative steps on breast cancer formation and the current state of immunizations for kindergarten students and mental health in the US. Using this data, we can analyze specific state and national trends and look at interesting relationships they may have on one another.
## How I built it
The web application's backend used Node and Express. The data visualizations and data processing used d3. Specific d3 packages enabled map and spatial visualizations using network/node analysis. D3 allowed for interactivity between the user and the visualization, which permits more sophisticated exploration of the datasets.
## Challenges I ran into
Searching through the John Snow Labs datasets required a lot of time. Further processing and finding the best way to visualize the data took much of my time, as some datasets included over 40,000 entries! Getting comfortable with d3 also took a while.
## Accomplishments that I'm proud of
In the end, I created a working prototype that visualizes significant data and may help a user understand a complex dataset.
## What I learned
I learned a lot more about d3 and building effective data visualizations in a very constrained amount of time.
## What's next for Finder
I hope to add more interaction for users, such as allowing them to upload their own dataset to explore their data.
|
# Link to deployed web-portal
<https://fakenewscalculator.herokuapp.com/webportal.html>
## Inspiration
Our patented B2C web client offers a never-before-seen 'covfefe index' designed to standardize a scoring system in order to evaluate the trustworthiness of modern news outlets, disrupting the fintech space. There currently is no standard in objective news evaluation, and thus we saw an opportunity to create a completely un-biased revolutionary RGB blockchain machine learning watercooled big data algorithm using the coveted and up and coming Bing search engine.
## What it does
Upon the input of a news article, we provide a global 'covfefe index' of 0 <= x <= 100, where x depicts the unreliability of the article based on a plethora of scores calculated using the Bing Web Search API, and the number of instances of the word "Trump" in the article.
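For the curious, a minimal sketch of how such an index might be computed; the weights and the saturation point below are illustrative stand-ins, not our patented covfefe coefficients:

```python
# Purely illustrative sketch of the index; the weights and the saturation
# point are stand-ins, not our patented covfefe coefficients.
import re

def covfefe_index(article_text, search_score):
    """Return 0 <= x <= 100, where higher means less reliable.
    `search_score` (0-1) stands in for the aggregated Bing Web Search signal."""
    mentions = len(re.findall(r"\bTrump\b", article_text))
    mention_score = min(mentions / 10, 1.0)  # saturate at 10 mentions
    return round(100 * (0.5 * search_score + 0.5 * mention_score))
```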
## How we built it
We used the Python Flask library to build our back-end. We then used HTML, CSS, and JavaScript to build our custom one-of-a-kind web client.
## Challenges we ran into
Installing python libraries.
## Accomplishments that we're proud of
We are proud of delivering a trust-worthy API for our custom-designed covfefe index.
## What we learned
How to HackWestern.
## What's next for The Snews Button
Decide on a name for this (i.e., "The Snews Button", "Covfefe Index", or "Fake News Calculator").
<https://www.strawpoll.me/16913815>
vote here.
|
partial
|
## Origins
Inspired by our own struggles to find and buy cheap airfare while maintaining reasonable constraints. Flights are still a hassle to find and purchase, even with tools such as Google Flights, Kayak, and Skiplagged. All travel apps use the same set of inputs from users: destination, dates, and times. We've taken it a step further: we find your next trip using a custom algorithm that analyzes your social media. Flights come paired with hotels as a package. Save more money by sharing the trip with your friends.
## What it does
Our custom algorithm analyzes your past behavior, your friends' trends, and what you're looking for, to provide you with only the best results. Forget about comparing a slew of numbers and words and focus on finding your next trip.
We've revamped the buying experience; share select trips with friends to let them know about great deals and future plans. Sharing a trip kickstarts an auction process - as the trip grows in popularity, the price decreases accordingly! MetBlue combines a ranked infinite scroll with interactive tickets.
It's a win-win for both the consumers and airlines: buyers receive discounts while airlines can fill flights on short notice. By increasing social proof and popularity, we fill empty seats at competitive prices.
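A toy sketch of the share-to-save auction idea; the decay rate and price floor below are illustrative assumptions, not MetBlue's actual pricing model:

```python
# A toy version of the share-to-save auction; the decay rate and price floor
# are illustrative assumptions, not MetBlue's actual pricing model.
def trip_price(base_price, participants, discount_per_person=0.03, floor_ratio=0.6):
    # Each participant shaves a little off, floored so a minimum fare clears.
    discount = min(discount_per_person * participants, 1 - floor_ratio)
    return round(base_price * (1 - discount), 2)

print(trip_price(400, 5))  # a $400 trip shared by 5 friends -> 340.0
```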
## How I built it
We rely on MongoDB and Ruby to reference our data and provide your next trip. Our UI and UX heavily emphasize a more minimalistic and fun experience over the commonplace, arduous ticket-hunting process. A combination of front-end development tools creates a streamlined searching and checkout flow.
## Challenges I ran into
Mapping and developing connections based on social data and individual search filters was a major hurdle to creating a smooth experience. Developing a high-speed infinite scrolling interface and integrating data to MetBlue's unique ticket system was also a major concern.
## Accomplishments that I'm proud of
Our design has come together very cohesively, and our search functions are efficient and quick.
|
## Inspiration
We were inspired by JetBlue's challenge to utilize their data in a new way, and we realized that, while there are plenty of websites and phone applications that let you find the best flight deal, none provides a way to easily plan the trip and the items you will need with your friends and family.
## What it does
GrouPlane allows users to create "Rooms" tied to their user account, with each room representing a unique event, such as a flight from Toronto to Boston for a week. Within the room, users can select flight times, see the best flight deal, and plan out what they'll need to bring with them. Users can also share the room's unique ID with their friends, who can then utilize this ID to join the created room, see the flight plan, and modify the needed items.
## How we built it
GrouPlane was built utilizing Android Studio with Firebase, the Google Cloud Platform Authentication API, and JetBlue flight information. Within Android Studio, Java code and XML were utilized.
## Challenges we ran into
The challenges we ran into were learning how to use Android Studio/GCP/Firebase and having to overcome the slow Internet speed present at the event. In terms of Android Studio/GCP/Firebase, we were all either entirely new or very new to the environment, so we had to learn how to access and utilize all the features available. The slow Internet speed was a challenge not only because it made learning the former tools difficult but also because, due to the online nature of the database, there were long periods of time where we could not test our code, having no way to connect to the database.
## Accomplishments that we're proud of
We are proud of being able to finish the application despite the challenges. Not only were we able to overcome these challenges, but we built an application that functions to the full extent we intended while having an easy-to-use interface.
## What we learned
We learned a lot about how to program Android applications and how to utilize the Google Cloud Platform, specifically Firebase and Google Authentication.
## What's next for GrouPlane
GrouPlane has many possible avenues for expansion; in particular, we would like to integrate GrouPlane with Airbnb, hotel chains, and Amazon Alexa. In terms of Airbnb and hotel chains, we would utilize their APIs to pull information about hotel deals for the chosen flight locations so users can plan out their entire trip within GrouPlane. With this integration, we would also expand GrouPlane to inform everyone within the "event room" about how much the event will cost each person. We would also integrate Amazon Alexa with GrouPlane to give users the ability to plan out their vacation entirely through the speech interface provided by Alexa rather than having to type on their phone.
|
## Inspiration
At the University of Toronto, accessibility services are always in need of more volunteer note-takers for students who are unable to attend classes. Video lectures are not always available, and most profs either don't post notes or post very imprecise, sparsely detailed ones. Without a doubt, the best way for students to learn is to attend in person, but what is the next best option? That is the problem we tried to tackle this weekend with notepal. Other applications include large-scale presentations such as corporate meetings, or use by regular students who learn better through visuals and audio rather than note-taking.
## What it does
notepal is an automated note taking assistant that uses both computer vision as well as Speech-To-Text NLP to generate nicely typed LaTeX documents. We made a built-in file management system and everything syncs with the cloud upon command. We hope to provide users with a smooth, integrated experience that lasts from the moment they start notepal to the moment they see their notes on the cloud.
## Accomplishments that we're proud of
Being able to integrate so many different services, APIs, and command-line SDKs was the toughest part, but also the part we tackled really well. This was the hardest project in terms of the number of services/tools we had to integrate, but a rewarding one nevertheless.
## What's Next
* Better command/cue system to avoid having to use direct commands each time the "board" refreshes.
* Create our own word editor system so the user can easily edit the document, then export and share with friends.
## See For Yourself
Primary: <https://note-pal.com>
Backup: <https://danielkooeun.lib.id/notepal-api@dev/>
|
partial
|
## Inspiration
I'm lazy, and voice recognition / NLP continues to blow my mind with its accuracy.
## What it does
Using voice recognition and natural language processing, you can talk to your browser and it will do your bidding, no hands required!
I also built in "Demonstration": if the AI ever doesn't do what you want, you can give it a sample command and then demonstrate what to click on / type while the bot watches! All of these training demonstrations get added to a centralized database so that everyone together makes the bot smarter!
## How I built it
Chrome Extension, Nuance APIs MIX.NLU and Voice Recognition, Angular JS, Firebase
## Challenges I ran into
Nuance API took a little while to figure out, also sending inputs into the browser on the right elements is tricky.
## Accomplishments that I'm proud of
Making it all work together, and in such a short time! :D
## What I learned
## What's next for AI-Browser
I want to take the time to properly implement the training portion
|
## Inspiration
A study recently done in the UK found that 69% of people above the age of 65 lack the IT skills needed to use the internet. Our world's largest resource for information, communication, and so much more is shut off to such a large population. We realized that we could leverage artificial intelligence to simplify completing online tasks for senior citizens and people with disabilities. Thus, we decided to build a voice-powered web agent that can execute user requests (such as booking a flight or ordering an iPad).
## What it does
The first part of Companion is a conversation between the user and a voice AI agent, in which the agent understands the user's request and asks follow-up questions for specific details. After this call, the web agent generates a plan of attack and executes the task by navigating to the appropriate website and typing in relevant search details/clicking buttons. While the agent is navigating the web, we stream the agent's actions to the user in real time, allowing the user to monitor how it is browsing/using the web. In addition, each user request is stored in a Pinecone database, so the agent has context about similar past user requests/preferences. The user can also see the live state of the web agent's navigation in the app.
## How we built it
We developed Companion using a combination of modern web technologies and tools to create an accessible and user-friendly experience:
* **Frontend:** React, providing a responsive and interactive user interface. We built components for input fields, buttons, and real-time feedback to enhance usability, and integrated VAPI, a voice recognition API, to enable voice commands for users with accessibility needs.
* **Backend:** Flask, handling API requests and managing the server-side logic.
* **Web automation:** Selenium, which lets the agent navigate websites and perform actions like filling forms and clicking buttons.
* **Memory:** user interactions are stored in a Pinecone database to maintain context and improve future interactions by learning user preferences over time; users can also view past flows.
* **Hosting:** a local server during development, with plans for cloud deployment to ensure scalability and accessibility.

Thus, Companion can effectively assist users in navigating the web, particularly benefiting seniors and individuals with disabilities.
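For a sense of what one automation step looks like, here is a minimal Selenium sketch; the selector name and search flow are illustrative, not the exact plan our agent generates:

```python
# Minimal sketch of one automation step; the selector name and search flow
# are illustrative, not the exact plan our agent generates.
from selenium import webdriver
from selenium.webdriver.common.by import By

def search_site(url, query):
    driver = webdriver.Chrome()
    try:
        driver.get(url)
        box = driver.find_element(By.NAME, "q")  # assumed search-box name
        box.send_keys(query)
        box.submit()
        return driver.title  # in the real flow, the session is streamed to the user
    finally:
        driver.quit()
```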
## Challenges we ran into
We ran into difficulties getting the agent to accurately complete each task. Getting it to take the right steps and always execute the task efficiently was a hard but fun problem. It was also challenging to prompt the voice agent such to effectively communicate with the user and understand their request.
## Accomplishments that we're proud of
Building a complete, end-to-end agentic flow that is able to navigate the web in real time. We think that this project is socially impactful and can make a difference for those with accessibility needs.
## What we learned
The small things that can make or break an AI agent: the way we display memory, how we ask it to reflect, and what supplemental info we give it (images, annotations, etc.).
## What's next for Companion
Making it work without CSS selectors; training a model to highlight all the places the computer can click because certain buttons can be unreachable for Companion.
|
# Quality Content
a pennapps spring 2018 submission
## Inspiration
Many popular sites, like Reddit, Google Search and Hacker News are essentially a list of links. It's common to click one link, read it for a bit, then go back, click another link, read it, go back, etc.
We thought it would be desirable to avoid this pattern and instead allow a user to "always go forward."
## What it does
When you click on a Reddit link, Google search result, or Hacker News post, a list of related links will appear in a small box in the upper-right. The list of related links is chosen by processing the list of links from the search query / subreddit you just came from. The most related links are chosen by using natural language processing. Of course, we exclude the link you recently visited from the list of related links.
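As a stand-in for the NLP step (the extension calls Cortical's API), a TF-IDF cosine-similarity ranking over titles captures the same "most related links" idea:

```python
# Stand-in for the NLP step (the extension calls Cortical's API): TF-IDF
# cosine similarity over titles captures the same "most related" ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_related(current_title, candidate_titles, top_k=5):
    # Candidates should already exclude the link the user just visited.
    matrix = TfidfVectorizer().fit_transform([current_title] + candidate_titles)
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(candidate_titles, scores), key=lambda pair: -pair[1])
    return [title for title, _ in ranked[:top_k]]
```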
In addition, the extension responds to voice commands after hitting Enter. Saying "Google dogs" will google dogs and "Reddit Ask Reddit" will go to the /r/AskReddit subreddit. When our popup menu of links appear, you can say "first post" or "second one" to go to that related link.
## How we built it
Using Google's documentation for building Chrome extensions, the Chrome extension API, documentation for the APIs we query, our text editors, and git. Special thanks to Cortical for their NLP API, Google Cloud Platform for Google Search, and the Reddit API for their ease of use.
## Challenges we ran into
For reasons outside of our control (we think), Reddit's API started giving us 503 Service Unavailable errors. Reddit servers went down for the first time in weeks.
## Accomplishments that we're proud of
This was our first time hacking together a chrome extension, and it was awesome to develop for Reddit, Google, and Chrome, apps we use every day. After satisfactorily completing the MVP, we were able to dive into building voice command functionality.
## What we learned
* Chrome Extensions are quite powerful.
* There's a standard JS API for speech recognition.
* NLP is awesome
* The API will stop working at the least opportune time.
## What's next for Quality Content
We'd like to see if we can generalize this idea to more sites. We might also want to scrape together $5 so we can put the extension on the Chrome Web Store.
## Built With
* JavaScript
* CSS
* Google Chrome
* Reddit API
* Google Search API
* Cortical NLP API
|
partial
|
## Inspiration
We wanted to help people who have awesome ideas but don't necessarily speak "code." So, why not let them speak... literally? By using speech-to-text, we’re making website creation as easy as having a conversation—no typing, no fuss, just talk and watch your site come to life! 🎤✨
## What it does
Kevin utilizes ONLY your voice to generate picture-perfect websites. Speak your vision, and Kevin handles the rest—turning your words into fully functional, responsive websites with accurate layouts, stunning visuals, and clean code. People with visual impairment or physical disabilities can discover the magic of coding with our hands off approach to dev tooling.
## How we built it
Using Web Speech API, we turned your words into the website of your imagination by integrating it with generative AI to instantly create and design responsive, user-friendly web pages. Our amazing agent Kevin is powered by the high performance architecture of Groq.
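Speech capture happens in the browser via the Web Speech API, so here is a sketch of just the generation half using the Groq Python client; the model name and prompt are illustrative assumptions:

```python
# Sketch of the generation half only (speech capture happens in the browser
# via the Web Speech API); the model name and prompt are assumptions.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

def generate_site(transcript):
    response = client.chat.completions.create(
        model="llama3-8b-8192",  # assumed model choice
        messages=[
            {"role": "system", "content": "Return a complete, responsive HTML page."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content  # HTML served back to the user
```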
## Challenges we ran into
We encountered several challenges during development, including fine-tuning the speech-to-text system to reliably detect trigger words, accurately parse spoken instructions, and recognize the final trigger word to complete the process. Additionally, we struggled with gathering and structuring the necessary data to send to our API for seamless AI-driven website generation, requiring extensive troubleshooting to ensure smooth, uninterrupted functionality.
## Accomplishments that we're proud of
We’re incredibly proud of getting the app to function as intended, from accurately recognizing voice commands to generating fully functional websites using generative AI. Overcoming the technical challenges to deliver a smooth and reliable user experience is a significant achievement for our team.
## What we learned
Never use a serverless backend with a server.
## What's next for FrontendKevin
Next, we plan to expand the range of design options, add support for more complex website features as well as improve the accuracy of the website generation. We’re also looking to enhance user customization and explore integrations with other AI tools to further streamline the development process.
|
## Inspiration
The inspiration behind Interview.AI emerged from our collective experiences with the challenges of preparing for job interviews. It can be difficult to find someone to practice interviews with, and often they may not fully understand the structure of an interview from the interviewer's perspective. By leveraging cutting-edge natural language processing and machine learning technologies, we provide a mock interview platform equipped with a 2D avatar interviewer that is always ready to support your preparation.
## What it does
Interview.AI is an AI-powered mock interview platform designed to enhance job preparation through personalized and interactive features. It generates custom questions based on the company, job description, interview type, and candidate's resume, ensuring relevant practice sessions. The platform includes an audio and video display of the AI interviewer, creating a realistic interview environment. The platform provides an overall summary and tailored advice at the end of each session, analyzing both the content and emotional behavior of the user's response, and then giving feedback on the strengths and areas for improvement. Additionally, Interview.AI offers a detailed interview transcript for review and continuous learning, empowering candidates to build confidence and improve their interview skills.
## How we built it
Frontend/Backend: Full-stack web application using React.ts, Django with the REST framework, and PyMongo.
AI Pipeline: The pipeline will initiate an LLM agent using GPT-4o as a mock interviewer and input the interviewee's information. During the interview loop, the agent will:
1. Generate interview questions using GPT-4o.
2. Generate audio from text using OpenAI TTS.
3. Generate video using Wav2Lip on Intel Developer Cloud Instance.
After the user responds, the agent will:
4. Convert the response audio to text using the Groq API.
5. Feed the text to GPT-4o for the next question.
6. Analyze the emotion of the response through the Hume API.
By the end of the interview practice, the agent will provide overall interview feedback on the interviewee's strengths and weaknesses based on the emotional analysis and the question-answer history using another GPT-4o request.
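A minimal sketch of one iteration of this loop (steps 1-2) with the OpenAI Python client; the model and voice names are illustrative choices, not necessarily our production settings:

```python
# One iteration of steps 1-2 above, sketched with the OpenAI Python client;
# the model and voice names are illustrative, not necessarily our settings.
from openai import OpenAI

client = OpenAI()

def next_question(history):
    # `history` is the running list of {"role": ..., "content": ...} turns.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": "You are a mock interviewer."}] + history,
    )
    return reply.choices[0].message.content

def speak(text, path="question.mp3"):
    audio = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    audio.stream_to_file(path)  # the clip is then lip-synced by Wav2Lip
```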
## Challenges we ran into
1. Deploying the Speech2Face model.
First, we tried many ways to deploy the model to the cloud and expose an API so we could call it from the backend. Many model deployment interfaces either have huge latency or do not support API calls. Eventually, we decided to use Intel's Cloud Developer instance, which perfectly solved our problem and provided a low-latency, easy-to-integrate solution.
The runtime of Speech2Face was originally high, so we could not create instant video replies without boosting performance. To solve this, we used Intel's special optimization algorithms and OpenVINO to optimize the model, which gave us a 20% speed improvement.
2. Prompt engineering
It was a trial-and-error process to adjust the prompts to get suitable outputs in the formats we wanted. We ended up using strategies including giving GPT a role and, in some cases, asking it to keep its response under a word limit.
## Accomplishments that we're proud of
1. We were able to combine the APIs from many of the sponsors to achieve the application's goals.
2. We were able to figure out how to add our model to a cloud server and could directly call it from our backend.
3. We used the amazing model optimization algorithms from Intel to boost our model by 20%.
## What we learned
1. Intel technologies: Cloud developer instance, OpenVino
2. Django rest framework + MongoDb integration
3. Many API frameworks from Groq, Hume, and OpenAI; FastAPI
4. How to host a model in the cloud and call its inference function using an SSH API.
## What's next for Interview.AI
1. We will also support video input so the Hume API can give the user feedback on facial expressions and body language.
2. We will also support the interviewee's training since most companies currently do not provide a great way for interviewees to practice interviewing candidates except by participating in shadow interviews and attending workshops.
3. We can improve the efficiency and quality of the video output to make the output video more realistic.
|
## Inspiration
We wanted to tackle a problem that impacts a large demographic. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which is an integral part of their education. With such learning disabilities, learning a new language can be frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that makes the learning process easier and more catered toward these individuals.
## What it does
ReadRight offers interactive language lessons but with a unique twist. It reads out the prompt to the user as opposed to it being displayed on the screen for the user to read themselves and process. Then once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way individuals with reading and writing disabilities can still hone their skills in a new language.
## How we built it
We built the frontend UI using React, JavaScript, HTML, and CSS.
For the Backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM.
Finally, for user authentication, we made use of Firebase.
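Once Google Cloud's speech-to-text returns a transcript, the scoring step can be sketched like this; the similarity metric (difflib's ratio) is an illustrative stand-in for our actual accuracy formula:

```python
# Sketch of the scoring step: once the speech-to-text API returns a
# transcript, we compare it to the prompt. difflib's ratio is an
# illustrative stand-in for our actual accuracy formula.
import difflib

def pronunciation_score(prompt, transcript):
    ratio = difflib.SequenceMatcher(
        None, prompt.lower().split(), transcript.lower().split()
    ).ratio()
    return round(ratio * 100)  # 0-100 score shown to the learner
```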
## Challenges we faced + What we learned
When you first open our web app, our homepage consists of a lot of information on our app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge. Third-party integration took us significant time to test and debug.
Secondly, we struggled with the generation of prompts for the user to repeat and using AI to implement that.
## Accomplishments that we're proud of
This was the first time many of our members integrated AI into an application we were developing, so it was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay.
We are also proud of the fact that we are developing an application for individuals with learning disabilities as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things.
## What's next for ReadRight
As of now, ReadRight has the basics of the English language for users to study and get prompts from but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
|
losing
|
## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After we discussed constant struggles with many friends and family members, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone comes from the *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body.
As a result, we decided to create a user-friendly multi-modal model that can discover their skin discomfort through a simple picture. Then, through accessible communication with a dermatologist-like chatbot, they can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or finding the time to go and wait for a doctor, it is an accessible way to understand the blemishes that appear on one's skin immediately.
## What it does
The app is a skin-detection model that detects skin diseases through pictures. Through a multi-modal neural network, we attempt to identify the disease through training on thousands of data entries from actual patients. Then, we provide them with information on their disease, recommendations on how to treat their disease (such as using specific SPF sunscreen or over-the-counter medications), and finally, we provide them with their nearest pharmacies and hospitals.
## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset covering more than 2,000 patients with multiple diseases. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality.
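Schematically, the image + clinical fusion described above looks like the following PyTorch sketch; the layer sizes are illustrative, not the exact architecture we trained:

```python
# Schematic of the image + clinical fusion described above; the layer sizes
# are illustrative, not the exact architecture we trained.
import torch
import torch.nn as nn
from torchvision import models

class SkinModel(nn.Module):
    def __init__(self, n_clinical, n_classes):
        super().__init__()
        self.cnn = models.resnet18(weights=None)
        self.cnn.fc = nn.Identity()                      # 512-d image embedding
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 64), nn.ReLU())
        self.head = nn.Linear(512 + 64, n_classes)       # fused classifier

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), self.clinical(clinical)], dim=1)
        return self.head(fused)
```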
## Challenges we ran into
The first challenge we faced was finding the appropriate data. Most of the data we encountered was not comprehensive and lacked recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set. Thus, we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We finally chose the most suitable epoch by plotting the loss vs. epoch and accuracy vs. epoch graphs. Another challenge involved utilizing the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text that tended to hallucinate based on specific responses. We fixed this by grounding its output in the information that the user gave.
## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts. This experience has had its fair share of learning moments and pivoting directions. However, through a great deal of discussions and talking about exactly how we can adequately address our issue and support each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about learning quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying.
## What we learned
One of the aspects we learned from this experience was how to use prompt engineering effectively and ground an AI model in user information. We also learned how to incorporate multi-modal data to be fed into a combined convolutional and feed-forward neural network. In general, we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience in building a comprehensive model like SkinSkan, we were able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to try the app on our own skin and several of our friends' skin using a simple smartphone camera to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people.
## What's next for SkinSkan
We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct them to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
|
## Inspiration
Due to heavy workloads and family problems, people often forget to take care of their health and food. Common health problems people face nowadays are blood pressure (BP), heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other pressures. This project can help people find out about these health problems.
It also helps people easily recycle items, as they are divided into 12 different classes.
Finally, it helps people who do not have any knowledge of plants predict whether a plant has a disease or not.
## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, helping people recycle easily.
On the Mental Health page, when we answer some questions, it predicts whether we are facing some kind of mental health issue.
The Health page is divided into three parts: the first predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have BP issues.
The COVID-19 page classifies whether you have COVID or not.
The Plant\_Disease page predicts whether a plant has a disease or not.
## How we built it
I built it using Streamlit and OpenCV.
## Challenges we ran into
Deploying the website to Heroku was very difficult because deployment was unfamiliar territory. Most of this was new to us except for deep learning and ML, so it was very difficult overall due to the time restraint. The overall logic and figuring out how we should calculate everything was difficult to determine within the time limit. Overall, time was the biggest constraint.
## Accomplishments that we're proud of
## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.
## What's next for Arogya
|
## Inspiration
Provide valuable data to on-premise coordinators just seconds before firefighters make entry to a building on fire, minimizing the time required to search for and rescue victims. Report conditions around high-risk areas to alert firefighters to what lies ahead in their path. Increase operational awareness through live, autonomous data collection.
## What it does
We are able to control the drone from a remote location, allowing it to take off, fly in patterns, and autonomously navigate through an enclosed area in order to look for dangerous conditions and potential victims, using a proprietary face-detection algorithm. The web interface then relays a live video stream, location, temperature, and humidity data back to the remote user. The drone saves locations of faces detected, and coordinators are able to quickly pinpoint the location of individuals at risk. The firefighters make use of this information in order to quickly diffuse life-threatening conditions with increased awareness of the conditions inside of the affected area.
## How we built it
We used a JS and HTML front-end, using Solace's PubSub+ broker to relay commands sent from the web UI to the drone with minimal latency. Our AI stack consists of a Haar cascade that finds AI markers and detects faces using a unique face detection algorithm through OpenCV. To find fires, we look for the areas with the highest light intensity and heat, which instructs the drone to fly near and around the areas of concern. Once a face is found, a picture is taken and telemetry information is relayed back to the remote web console. Our Solace PubSub+ broker instance runs on Google Cloud Platform.
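The face-detection step can be sketched in a few lines of OpenCV; the detection thresholds here are illustrative defaults rather than our tuned values:

```python
# Minimal version of the face-detection step; the detection thresholds are
# illustrative defaults rather than our tuned values.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Returns (x, y, w, h) boxes; a hit triggers a photo and a telemetry publish.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```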
## Challenges we ran into
Setting up the live video stream on the Raspberry Pi 4B proved to be an impossible task, as the raw .h264 output from the Raspberry Pi's GPU could not be encoded into an .mp4 container on the fly. However, when the script was run on Windows, the live video stream, as well as all AI functionality, worked perfectly. We spent a lot of time trying to debug the program on the Raspberry Pi to acquire our UDP video live stream, as all ML and AI functionality was inoperative without it. In the end, we somehow got it to work.
Brute-forcing every port of the DJI Tello drone to collect serial output took nearly 5 hours and required us to spin up a DigitalOcean instance to gain access to the drone's control surfaces and video data.
## Accomplishments that we're proud of
We were really proud to get the autonomous flying of the drone working using facial recognition. It was quite the task to brute force every wifi port on the drone in order to manipulate it the way we wanted it to, so we were super happy to get all the functionality working by the end of the makeathon.
## What we learned
You can't use GPS indoors because it's impossible to get a satellite lock.
## What's next for FireFly
Commercialization.
|
winning
|
## Inspiration
Throughout the hackathon, our team was intrigued by staff members sorting through trash bins in order to ensure recycling, compost and landfill waste was separated. After further investigation and a quick staff member interview, we learned about San Francisco's "zero-waste" program. This program requires event hosts within city limits to sort their waste into the three specified categories to reduce landfill use, with penalties or bans for events that fail to meet the waste sorting criteria. The demand for waste sorting has grown so much that private companies, such as Green Mary, have entered the sector.
Our team came up with an innovative solution, Smart Bin. Smart Bin automates waste sorting, reducing both manual labor costs and pollution.
## What it does
Smart Bin is a smart, standalone trash disposal system. It scans incoming waste and sorts it as it goes through our system.
## How we built it
Our system is built on top of a Raspberry Pi. The Raspberry Pi hosts a YOLO11 object detection system that scans incoming waste and infers which section it should be directed to (landfill, compost, or recycling). The output from our model is then sent to an Arduino board that controls a servo motor using PWM. The motor directs the trash to its corresponding bin, effectively sorting it.
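A minimal sketch of that inference-to-actuation hop; the weights file, serial port, and class-to-bin mapping are illustrative assumptions:

```python
# Sketch of the inference-to-actuation hop; the weights file, serial port,
# and class-to-bin mapping are illustrative assumptions.
import serial
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                   # stand-in for our trained weights
arduino = serial.Serial("/dev/ttyACM0", 9600)
BIN_FOR = {"can": b"R", "banana": b"C"}      # recycling / compost; default landfill

def sort_frame(frame):
    result = model(frame)[0]
    for box in result.boxes:
        label = result.names[int(box.cls)]
        arduino.write(BIN_FOR.get(label, b"L"))  # Arduino turns the byte into a servo angle
```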
## Challenges we ran into
One challenge we ran into was the internet connection at the venue: we wanted to download open-source datasets from Roboflow Universe to train our object detection model to segregate different types of waste. We also dealt with hardware difficulties during the event; running the servo motors drew a lot of power from the batteries, so we had to prepare extensively beforehand. We also modeled props to be 3D printed during the event, which took a long time; we had to optimize different slicing methods to improve from 7 hours of printing to under 5 hours. Finally, we were limited by the computing power of the Raspberry Pi, since our detection model and all of our functions run on this mini-computer.
## Accomplishments that we're proud of
We are extremely proud of what we built. We were able to build Smart Bin as a standalone system, one that is only dependent on a Raspberry Pi, an Arduino Board, and a servo motor. The total cost of our unit comes out to less than $50, making it highly accessible and likely to make an impact on the environment.
## What we learned
Through this project, we gained a deeper understanding of the complexities involved in waste management and automation. We learned about the real-world implications of waste diversion requirements for events and how crucial it is to accurately sort trash. Additionally, we improved our skills in hardware integration, machine learning, and rapid prototyping, which were crucial to successfully building a working product in a limited time frame.
## What's next for SmartBin
Our next steps involve expanding the scalability of the system across urban environments by integrating even more advanced real-time data analytics and optimizing waste collection routes through machine learning algorithms. We aim to enhance the mobile experience by allowing users to locate and interact with SmartBins seamlessly. Additionally, we plan to collaborate with municipalities and large organizations to implement SmartBin on a larger scale, helping cities reduce waste inefficiencies and environmental impact.
|
## Inspiration
As busy students, we noticed that students loved "shooting" garbage into bins. When it came to actually throwing out garbage and waste, bins with separate openings for each type of waste were few and far between, meaning students would have to walk long distances to accurately dispose of their waste. We also noticed that many students were uneducated about which objects are recyclable.
These two issues lead to people littering and/or throwing their waste in the wrong bins, leading to increased pollution and climate change.
## What it does
To solve this issue, we created a bin with a garbage and recycling section. Students will throw their trash/recycling into their desired target. A camera and sensors within the garbage can will determine whether they threw their waste into the right target. Then, a servo will deposit the piece of waste into the correct bucket, and the user will gain points if they correctly dispose of their waste, or lose points if they incorrectly dispose of it. These points can be used to redeem prizes around campuses with these garbage cans.
## How we built it
BasketBin’s main frame includes a repurposed cardboard box with two carved holes on top. We used servo motors, ultrasonic and PIR sensors, and a webcam to create the contraption that deposits the waste into the correct bin.
We used a Flask server for the application and Supabase for the database that stores player information. A player's score entry is updated from the Python script, which adds or removes points based on whether the user placed the waste into the correct bin.
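A minimal sketch of that score update; the table and column names are illustrative, not our exact schema:

```python
# Sketch of that score update; the table and column names are illustrative,
# not our exact schema.
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def adjust_score(player_id, correct, delta=10):
    row = supabase.table("players").select("score").eq("id", player_id).execute()
    score = row.data[0]["score"] + (delta if correct else -delta)
    supabase.table("players").update({"score": score}).eq("id", player_id).execute()
```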
## Challenges we ran into
In this hackathon, with Flask and Supabase being new technologies to us, we had to spend lots of time figuring out how to link the webpage, database, CV, and hardware. Each component had its own challenges, such as fixing the sensors for our garbage sorter, connecting the computer vision to the hardware, and fetching and sending entries to the database. With perseverance, we worked through the issues and came to a solution.
## Accomplishments that we're proud of
We are proud of the seamless design that allows users to interact with the garbage can in a fun and interactive way while also informing them about waste management. Additionally, we are proud of the interconnectedness of our project, with all its moving parts: the hardware, computer vision, database, and webpage.
## What's next for Garbage sorter
Looking ahead, we have several exciting plans for our project. We would like to integrate this product into university campuses around Canada to promote an environment where students can be informed of recycling policies in a fun and exciting manner.
|
## Inspiration
At many public places, recycling is rarely a priority. Recyclables are disposed of incorrectly and thrown out like garbage. Even here at QHacks2017, we found lots of paper and cans in the [garbage](http://i.imgur.com/0CpEUtd.jpg).
## What it does
The Green Waste Bin is a waste bin that can sort the items that it is given. The current version of the bin can categorize waste as garbage, plastics, or paper.
## How we built it
The physical parts of the waste bin are Lego, two stepper motors, a Raspberry Pi, and a webcam. The software of the Green Waste Bin was written entirely in Python. The web app was done in HTML and JavaScript.
## How it works
When garbage is placed on the bin, a picture of it is taken by the webcam. The picture is then sent to Indico and labeled based on a collection that we trained. The Raspberry Pi then controls the stepper motors to drop the garbage in the right spot. All of the images that were taken are stored in AWS buckets and displayed on a web app. On the web app, images can be relabeled and the Indico collection retrained.
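The actuation side can be sketched with RPi.GPIO; the pin numbers and step counts per chute are illustrative assumptions:

```python
# Rough sketch of the actuation: step the motor to the chute matching the
# Indico label. Pin numbers and step counts are assumptions for illustration.
import time
import RPi.GPIO as GPIO

STEP_PIN, DIR_PIN = 17, 27
STEPS_FOR = {"garbage": 0, "plastics": 50, "paper": 100}

GPIO.setmode(GPIO.BCM)
GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

def move_to(label):
    GPIO.output(DIR_PIN, GPIO.HIGH)            # single direction for simplicity
    for _ in range(STEPS_FOR.get(label, 0)):
        GPIO.output(STEP_PIN, GPIO.HIGH)
        time.sleep(0.002)
        GPIO.output(STEP_PIN, GPIO.LOW)
        time.sleep(0.002)
```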
## Challenges we ran into
AWS was a new experience and many mistakes were made. There were also some challenges with adjusting the hardware to the optimal positions.
## Accomplishments that we're proud of
Able to implement machine learning using the Indico API
Able to implement AWS
## What we learned
Indico - we had never done machine learning before
AWS
## What's next for Green Waste Bin
Bringing the project to a larger scale and handling more garbage at a time.
|
losing
|
## Inspiration
Risk is everywhere and part of everything that we do. We have a certain level of financial risk depending on how much we save. We have a certain risk of dying depending on how old we are. We have a certain risk of surviving the next recession or losing our job. We have a certain risk of purchasing a car which gets recalled.
Companies have extensive departments to help them mitigate their risk. While to a certain extent consumers can share in those risk mitigation plans, such as when a credit card detects fraud on your card and shuts down your account for you, they are largely on their own unless they have purchased protection against an insurable risk.
## What it does
The app does personalized risk management and mitigation. Using various information such as your location, your credit card statements, the make and model of your vehicle, and the APIs of any internet of things devices you own, it gathers a variety of data about the relevant topics and turns it into a prioritized and easy to digest set of tasks.
Now, many will say that this is already done through texts and notifications. While it can in part be done that way, that is a dysfunctional system which causes important tasks to be missed and critical insights to be lost. Email, in the words of venture capitalist Paul Graham, is essentially a bad to-do list, and that is what notifications currently are as well. A battery needing replacement in an IoT unit is far lower priority than ensuring you rid your fridge of romaine lettuce after an E. coli outbreak. I couldn't imagine starting a meeting with my boss without prioritizing topics. That prioritization is desperately needed in personal risk management as well.
## How I built it
The app is currently built as two html/php pages, with one page for the graphical user interface on the welcome page and one page for the rest of the functionality.
Financial Risk
While many different things could be calculated in this category, the app specifically looks at four: career outlook, home price appreciation, a savings stress test, and a savings depletion assessment.
Home Risk
Home risk is the broad category of threats which relate to one's home or one's location in general. It contains three categories.
The first is an input for IoT devices. In this case, we have an AquaSwift, which is a water depth monitor for rural well owners that ensures they know when they are running low on water. The AquaSwift can both inform on risk and be a source of risk itself. If water is low or the well is frozen, it will tell you well in advance so you can take action. However, the AquaSwift is also a minor source of risk as its battery can die, something handled in this case too.
The second is the various here.com APIs which support the hidden backend. While the app just shows a map, here.com APIs are also used for weather, geocoding, and providing the location information which informs other things such as flood risk or fire risk.
The third is where insurance companies would put routine tasks such as scheduling a furnace inspector or arranging to have the roof checked after a hail storm.
Product Risk
Product risk takes a look at two specific categories for the purpose of this demo; however, the government provides data for many more categories, and these could be very easily implemented.
Firstly, it looks at car recalls. One can search by year, make, and model to find whether any recalls have been issued for the car and, if so, for what. If a recall is found, the issue is added to the list of ongoing problems.
Secondly, it looks at food recalls. The search bar allows the user to enter an allergen, and all recalls within the past 60 days due to that allergen are printed out below. The app then adds checking for those allergens as a priority task. The romaine lettuce outbreak is one very recent use case for this app.
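A minimal sketch of such a lookup, using the public openFDA food enforcement endpoint as a stand-in for the government datasets described above:

```python
# Minimal sketch of the allergen lookup, using the public openFDA food
# enforcement endpoint as a stand-in for the datasets described above.
import requests

def allergen_recalls(allergen, limit=10):
    resp = requests.get(
        "https://api.fda.gov/food/enforcement.json",
        params={"search": f'reason_for_recall:"{allergen}"', "limit": limit},
        timeout=10,
    )
    return [r["product_description"] for r in resp.json().get("results", [])]
```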
## Challenges I ran into
Data is highly inconsistent, even within the same API. Here.com's API will sometimes return the neighbourhood instead of the city, or the county instead of the province. In addition, getting megabytes of data up to even a local MariaDB server can be time-consuming, especially when the data has faults.
|
## Inspiration
## What it does
## How we built it
* Notion API
* Send data to notion to display on dashboard of issues
* Using Zapier to assign nurses
## Challenges we ran into
* working with Notion API
## Accomplishments that we're proud of
* Getting database functions with Notion's API to work
## What we learned
## What's next for Jira4Hospitals
* coded front end
* integration with netlify
|
## Inspiration
Customer acquisition costs are high for credit cards. So high, that banks will even give out new iPads for anyone who signs up for their credit cards. On our flight from Montréal to Boston, we saw credit card ads plastered everywhere, offering sky-high sign-on bonuses. While banks are eager to reduce their customer acquisition costs, customers are bogged down by confusing terms and conditions, convoluted points programs, and choice fatigue.
With enough knowledge, customers can use the strategy of credit churning - obtaining multiple credit cards to capitalize on high sign-on bonuses and rewards. However, identifying the right credit card for oneself is usually a tedious process, and the card that a user selects might not fulfill all their specific needs.
## What it does
CardMaster simplifies the credit card churning process, offering our users personalized credit card recommendations based on their demographics (occupation, income, credit score, spending habits and travel preferences), saving the time and effort required for extensive research. Apart from letting users reap the rewards, it also immerses them through mixed-reality experiences showcasing a view of the specific benefits cards’ offers would bring them. Put on your AR/VR headset and, with a few swipes, our "cardmasters" are virtually transported to the destinations you've been dreaming of visiting. They can virtually explore luxury resorts, see the world from your favorite airline's first-class cabin, and experience the perks of each recommended credit card firsthand.
## How we built it
We built the mixed-reality front-end with AR/VR using the Apple Xcode beta development kit, using Swift, SwiftUI, RealityKit, Alamofire, a concurrent MVVM design architecture, and of course, VisionOS. We created a microservice backend using Python, Flask, Bitarray, Scikit-learn, NumPy, Pandas, Google Cloud Platform (GCP), Cohere, and Docker. As a bonus, we built a demo website to showcase CardMaster, using JavaScript, React, and Tailwind at <https://cardmaster-hackharvard.netlify.app/>. Prior to developing the interface, we designed mockups using Figma, keeping simplicity, ease-of-use, and aesthetics in mind to create a seamless user experience.
For added functionality, we also integrated an AI financial advisor which we prompt engineered for Cohere’s large language model.
The recommendation algorithm was accomplished using a Jaccard Similarity Index to find the similarities between users based on their demographics, such as credit range, annual budget, income, profession, and travel frequency. This was then mapped to the credit card tastes of each user. Linear regression was then applied to these results to determine whether or not each card was deemed a suitable fit.
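The similarity step itself is compact; a minimal sketch, with the encoding of each user's demographics as a set of trait tokens as an illustrative assumption:

```python
# The core similarity step; encoding each user's demographics as a set of
# trait tokens is an illustrative assumption.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def most_similar_users(target, users, top_k=5):
    # The card preferences of the top matches seed the regression step.
    ranked = sorted(users, key=lambda u: -jaccard(target["traits"], u["traits"]))
    return ranked[:top_k]
```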
## Challenges we ran into
The VisionOS frontend environment was particularly challenging to work with due to the difference in the base components for VisionOS compared to conventional Swift components, which posed an interesting challenge for our UI/UX design. Additionally, navigating the VisionOS environment was also limited due to the lack of resources available online, as the Apple Vision Pro has not yet been released. We encountered difficulties in accessing support for many features we wish to incorporate.
One of the more difficult issues we faced was designing our recommendation algorithm, which could be thought of as a “black box”. We knew that persons with different credit score ranges, budget preferences, income, etc. would prefer different credit cards, but generalizing outputs to be statistically similar was at the core of our backend algorithm. Devising a solution to this involved ample research and randomized input dataset generation to train our model.
## Accomplishments that we're proud of
Overall, we are very happy to have been able to work with the Apple Vision Pro, considering it has not yet been released.
We’re proud of coming up with an innovative credit card recommendation algorithm and making the credit card rewards and application process more transparent.
## What we learned
Our team had a diverse mix of experience from different backgrounds. The key takeaway from this experience was the ability to cross-collaborate on different ends of the project, teaching one another new technologies of the software development process. Some of us delved into the intricacies of Google Cloud databases, others explored unfamiliar frontend frameworks, and some of us dived deep into some cool math for our backend.
CardMaster turned out to be rewarding for all - our users would reap its benefits, while for us, designing and developing it was an immensely gratifying experience. We’re very happy with our completed project!
## What's next for CardMaster
At HackHarvard, we used the opportunity to build CardMaster, particularly the backend services, to establish a foundation for a broader credit churning and optimization platform that we are developing. CardMaster churns credit cards, providing users with a call to action to improve their financial potential. Future phases of building CardMaster will address the management and optimized usage of a multi-card wallet for real-time transactions. Through these developments, our work will eventually improve the usage of credit cards at all stages of the churning process, before and after.
## Domain Name From GoDaddy
getrichwith.us
(for real)
|
losing
|
## Inspiration
Curio aims to save you time by aggregating and curating information and reviews about products online to help you make more informed purchasing decisions.
## What it does
Curio is a browser extension that extracts the product name and brand from the web page of the product you're looking at, searches the web for reviews about that product, summarizes those reviews, determines an overall sentiment for the product, displays keywords associated with the product, and lists and summarizes different reviews about that product.
Users can bookmark products to look at later, and add likes for reviews they find useful. Curio saves all product entries into a database, as well as any inputs from users.
Users can search Curio's curated database by product name, brand, or keywords to find products that match exactly what they're looking for.
## How we built it
Curio is a Chrome extension built with React and TypeScript, and connects to a backend written with TypeScript, NodeJS, Express, Prisma and powered by Cohere and CockroachDB.
## What's next for Curio
Adding more functionality and better insights:
* Display similar products that have better reviews
* More powerful searching and sorting features for the Curio database
* Social features for user such as sharing/exporting curated product lists
|
## Inspiration
We wanted to find ways to make e-commerce more convenient, as well as helping e-commerce merchants gain customers. After a bit of research, we discovered that one of the most important factors that consumers value is sustainability. According to FlowBox, 65% of consumers said that they would purchase products from companies who promote sustainability. In addition, the fastest growing e-commerce platforms endorse sustainability. Therefore, we wanted to create a method that allows consumers access to information regarding the company's sustainability policies.
## What it does
Our project is a browser extension that allows users to browse e-commerce websites while being able to check product manufacturers' sustainability via ratings out of 5 stars.
## How we built it
We started by building the HTML as the skeleton of the browser extension. We then proceeded with JavaScript to connect the extension to ChatGPT. Then, we asked ChatGPT a question regarding the general consensus on a company's sustainability. We run this response through sentiment analysis, which returns a ratio of positive and negative sentiment with relevance to sustainability. This information is then converted into a value out of 5 stars, which is displayed on the extension homepage. We finalized the project with CSS, making the extension look cleaner and more user-friendly.
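A minimal sketch of that final conversion; rounding to half stars is an illustrative choice, not a requirement of the pipeline:

```python
# Sketch of the final conversion; rounding to half stars is an illustrative
# choice, not a requirement of the pipeline.
def stars(positive, negative):
    total = positive + negative
    ratio = positive / total if total else 0.5  # neutral when no signal
    return round(ratio * 5 * 2) / 2             # e.g. 0.73 positive -> 3.5 stars
```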
## Challenges we ran into
We had issues with running servers, as we struggled with the input and output of information.
We also ran into trouble setting up the natural language processing models from TensorFlow. There were multiple models trained using different datasets and methods; even though they all use TensorFlow, they were developed at different times, which means different versions of TensorFlow were used. This made the debugging process a lot more extensive and the implementation take a lot more time.
## Accomplishments that we're proud of
We are proud that we were able to create a browser extension that makes the lives of e-commerce developers and shoppers more convenient. We are also proud of making a visually appealing extension that is accessible to users. Furthermore, we are proud of implementing modern technology such as ChatGPT within our approach to solving the challenge.
## What we learned
We learned how to create a browser extension from scratch and implement the OpenAI API to connect our requests to ChatGPT. We also learned how to use natural language processing to detect how positive or negative the response we received from ChatGPT was. Finally, we learned how to convert the polarity we received into a rating that is easy to read and accessible to users.
## What's next for E-commerce Sustainability Calculator
In the future, we would like to implement a feature that rates the reliability of our sustainability rating. Since there are many smaller, lesser-known companies on e-commerce websites with little public information about their sustainability policies, their sustainability ratings would be a lot less accurate compared to more prominent companies. We would implement this by using the number of Google searches for a specific company as a metric of relevance, and then base a reliability score on a scale derived from that search volume.
|
## Inspiration
Over the summer, one of us was reading about climate change, but he realized that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think of the world as a happy place. However, one day he watched a YouTube video about the hope that exists in that sphere and realized the impact of this "good news" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given that we've seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using co:here), and once it passes a certain threshold (we store the scores using CockroachDB), we show them a positive news article in the same topic area they were reading about.
We do this with text analysis, using a Chrome extension front end and a Flask + CockroachDB backend that uses co:here for natural language processing.
Since a lot of people also get their news via video, we built a part of our Chrome extension to transcribe audio to text and included it at the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the Chrome extension tells the user that it’s time for some good news and suggests a relevant article.
## How we built it
**Frontend**
We used a Chrome extension for the front end, which included handling the user experience and making sure our application actually gets the user's attention while being useful. We used React.js, HTML and CSS for this. There were also a lot of API calls, because we needed to transcribe the audio from Chrome tabs and provide that information to the backend.
**Backend**
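The write-up leaves this section blank, but from the description above the backend is Flask plus CockroachDB plus co:here. A minimal sketch of the threshold check, assuming a hypothetical `classify_negativity` wrapper around co:here's classify endpoint and a `scores` table keyed by `user_id` (CockroachDB speaks the PostgreSQL wire protocol, so a psycopg2-style connection works):

```python
NEGATIVITY_THRESHOLD = 10.0  # assumption: tuned by hand

def record_page(conn, user_id: str, page_text: str) -> bool:
    """Add this page's negativity to the user's running score and
    report whether it is time to suggest a positive article."""
    negativity = classify_negativity(page_text)  # hypothetical co:here wrapper, 0-1
    with conn.cursor() as cur:
        cur.execute(
            """INSERT INTO scores (user_id, total) VALUES (%s, %s)
               ON CONFLICT (user_id)
               DO UPDATE SET total = scores.total + EXCLUDED.total""",
            (user_id, negativity),
        )
        cur.execute("SELECT total FROM scores WHERE user_id = %s", (user_id,))
        (total,) = cur.fetchone()
    return total >= NEGATIVITY_THRESHOLD
```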
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a CockroachDB node and client and used it to store all of our classification data
4) Added support for multiple users of the extension, leveraging CockroachDB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use CockroachDB to create a database of news articles and topics that supports multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed, with a link to a positive article, every time the user visits a negative page after the threshold has been crossed. The intended fix is to add a column to one of our existing CockroachDB tables as a 'dirty bit' of sorts, tracking whether a notification has already been pushed to a user, since we don't want to notify them multiple times a day. We could then query the table to decide whether to push a notification.
2) We also would like to fine-tune our machine learning more. For example, right now we classify articles by broad topic (such as war, COVID, sports, etc.) and show a related positive article in the same category. Given more time, we would want to suggest positive articles that are more semantically similar to the ones the user is reading. We could use co:here or other large language models to explore that.
|
losing
|
# Cook It!
**Cook It!** is a web service that is personalized to your tastes and for your taste. It uses the Amazon AWS Machine Learning API to learn your food preferences and to recommend recipes that you can make with the ingredients in your fridge. Just enter your ingredient list and select your meal type (from Breakfast, Main Course, and Dessert), and simply choose your dish from the many recipes that Cook It! has to offer.
# Inspiration
Being huge foodies, and very recently overworked college students, making delicacies that could satisfy our palates while being practical at the same time had started becoming increasingly impossible in these last few months. That is when we decided to make Cook It, something that would help us in our food exploration.
# How does it Work
We've collected data from two of the largest recipe sources on the internet, *Yummly* and *Spoonacular*, and ran Amazon AWS's industry-standard regression on it to create an ML model that predicts the correlational success of a given set of ingredients. Moreover, this model evolves over time based on the user's own choices and the recipes they click on. All of this is invisible to the user: all one has to do is enter a list of ingredients on hand and wait for the magic to happen. Using web ratings and past user data, our algorithm creates a sorted list of recipes for the user to choose from, starting from the top left.
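As an illustrative sketch of the ranking step, with `predict_rating` standing in as a hypothetical wrapper around the trained Amazon ML model's real-time prediction endpoint:

```python
def rank_recipes(recipes, pantry, predict_rating):
    """Score every recipe whose ingredients are all on hand, then sort
    best-first for display (top-left of the results grid)."""
    candidates = [r for r in recipes if set(r["ingredients"]) <= set(pantry)]
    scored = [(predict_rating(r["ingredients"]), r) for r in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [recipe for _, recipe in scored]
```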
# Challenges we Faced
Being just freshmen, we found exploring the field of ML especially hard. Applying it to a genre like food, where subjectivity prevails and reliable data is extremely hard to find, meant we had to hand-sort many of our sources and train our model on around 10,000 existing ingredient combinations and their ratings derived from social networks to achieve a reliably consistent prediction model.
Integrating, consolidating and making the different technologies work together was another aspect that gave us a huge challenge.
# Accomplishments that we're Proud Of
Making something that we and our friends are extremely excited to use on a daily basis!
# What's Ahead
While our ML model is reasonably reliable right now, we aim to include a few more datasets and run some more training to make it better.
We are also planning to improve our recipe generation to get better suggestions.
|
## Inspiration
We were originally considering creating an application that would take a large amount of text and summarize it using natural language processing. As well, Shirley Wang felt an awkward obligation to incorporate IBM's Watson into the project. As a result, we came up with the concept of putting in an image and getting a summary from its Wikipedia article.
## What it does
You can input the URL of a picture into the web app, and it will return a brief summary, in bullet-point form, of the Wikipedia article on the object identified in the picture.
## How we built it
We originally considered using Android Studio, but a lot of problems occurred trying to make the software work with it, so we switched over to Google App Engine. We then used Python to build the underlying logic, along with IBM's Watson to identify and classify photos, the Wikipedia API to get information from Wikipedia articles, and Google's Natural Language API to extract only the key sentences and shorten them into bullet points while maintaining the original meaning.
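A condensed sketch of that pipeline, assuming a hypothetical `classify_image` wrapper around Watson's visual recognition (the `wikipedia` package stands in for raw Wikipedia API calls, and the bullet-splitting is simplified compared to the Natural Language step described above):

```python
import wikipedia  # pip install wikipedia

def summarize_from_image(image_url: str) -> list:
    """Identify the main object in an image, then return a short
    bullet-point summary of its Wikipedia article."""
    # Hypothetical wrapper: returns Watson's top class label for the
    # image, e.g. "golden retriever".
    label = classify_image(image_url)
    summary = wikipedia.summary(label, sentences=5)
    return ["- " + s.strip() for s in summary.split(". ") if s.strip()]
```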
## Challenges we ran into
We spent over 2 hours trying to fix a Google account authentication problem, which occurred because we didn't know how to properly write the path to a file, and because running apps from PyCharm behaves differently from running them in its own terminal. We also spent another 2 hours trying to deploy the app, because PyCharm had generated a broken import statement and requirements file that messed a lot of it up.
## Accomplishments that we're proud of
This is our first hackathon and our first time creating a web app, and we're really happy that we managed to actually successfully create something that works.
## What I learned
Sometimes reading the API documentation carefully will save you over half of your debugging time in the long run.
## What's next for Image Summarizer
Maybe we'll be able to make a way for the users to input a photo directly from their camera or their computer saved photos.
|
## Inspiration
Through attending big lectures at Berkeley, we've found that it is difficult for courses to keep track of our attendance. Professors have gone through countless methods of tracking student attendance, such as requiring students to buy pricey hardware.
## What it does
Tracks student attendance **and** attentiveness based on the web traffic students' devices create during class. Professors can keep track of attendance without calling roll, saving precious class time, especially in large lecture halls. Our algorithm can detect when a student leaves class early or arrives late, giving the teacher deeper insight into attendance. There's an easy one-time MAC-address-to-student-name registration, and attendance is subsequently taken automatically so long as the student has Wi-Fi turned on (no connection required!) on one of their devices: phone, tablet, or laptop.
## How it works
We are using Cisco Meraki's Location and Dashboard APIs along with the wifi access points (Meraki MR33) already set up inside Memorial Stadium. These quad-radio access points intercept wifi and bluetooth signals from smartphones and laptops, and we sample the data every few seconds to capture data such as unique MAC addresses, websites visited, timestamps, and duration of stay. The reason why we use MAC addresses as a primary key to identify devices and students is because each device broadcasts a unique, immutable MAC address, allowing us to track them over hours, months, and years. Additionally, by polling data every few seconds for the duration of lecture, our web app can make sure students don't leave right after signing in, which is a huge problem for many attendance methods such as using iClicker or Google Forms. Meraki keeps a constant eye on all devices in the lecture hall and allows our web app to take note as soon as a student leaves lecture.
## The Technology
We use Node-RED to interface with the access points and stream JSON out to a MongoDB database in the cloud. We then load this data into Pandas DataFrames and use Plotly to visualize it. After some filtering and stats, we end up with a simple, clean interface for the teacher to use. We package all of this into a web app hosted on Google Cloud that you can watch update in real time!
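As a rough sketch of the late-arrival/early-departure detection, assuming the Meraki sightings have already been pulled from MongoDB into a DataFrame with `mac` and `timestamp` columns:

```python
import pandas as pd

def attendance_report(sightings: pd.DataFrame,
                      start: pd.Timestamp,
                      end: pd.Timestamp,
                      grace_min: int = 5) -> pd.DataFrame:
    """Flag each device as having arrived late and/or left early,
    based on its first and last sighting during lecture."""
    grace = pd.Timedelta(minutes=grace_min)
    grouped = sightings.groupby("mac")["timestamp"]
    report = pd.DataFrame({"first_seen": grouped.min(),
                           "last_seen": grouped.max()})
    report["arrived_late"] = report["first_seen"] > start + grace
    report["left_early"] = report["last_seen"] < end - grace
    return report
```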
## Challenges we ran into
Meraki access points did not stay on consistently, and we stayed up till 8am trying to get our application to work reliably. We also didn't have time to fully polish the application, so it currently contains just the core functionality.
## Accomplishments that we're proud of
We learned how to use the Meraki API and the Node-RED application on the fly, which was incredibly rewarding because most of us didn't have any experience working with networking. We also enjoyed working with the Meraki representatives and bouncing our ideas off them!
## What we learned
How to integrate various libraries and platforms into a single project! We also learned how to efficiently split up work and play to our strengths.
## What's next for Stay Present
We'd love to test this in a UC Berkeley lecture hall and work closely with professors to implement this attendance tracker. We're planning to refine our data with our new Meraki MR33 APs and see what other data we can extract from unsecured campus web traffic. We believe that this project is just one way that classroom learning can be changed for the better, and hope to see it in use in the future!
|
losing
|
## Inspiration
I'm taking a class called How To Make (Almost) Anything that will go through many aspects of digital fabrication and embedded systems. For the first assignment we had to design a model for our final project trying out different modeling software. As a beginner, I decided to take the opportunity to learn more about Unity through this hackathon.
## What it does
Plays like the classic 15-tile sliding puzzle game.
## How we built it
I used Unity.
## Challenges we ran into
Unity is difficult to navigate; there were a lot of hidden settings that made things not show up or scale properly. Since I'm not familiar with C# or Unity, I spent a lot of time learning about different methods and data structures. Referencing objects across different scripts and attributes is not obvious, and I ran into a lot of issues of that kind.
## Accomplishments that we're proud of
About 60% functional.
## What's next for 15tile puzzle game
Making it 100% functional.
|
## Inspiration
When I was in school, I often had to go back to earlier topics to understand new material, and re-reading everything took a lot of time.
## What it does
It presents important topics in 3D for a better understanding of things.
## How I built it
I built it using:
* Unity
* Android Studio
## Challenges I ran into
The biggest challenge was the size of the molecule models; we reduced it by 90% using a Python script.
## Accomplishments that I'm proud of
I am proud of the small size of the app and, even more, of the impact it can have in third-world countries because of its small size.
## What I learned
We learned Android Studio, but the bigger part was learning to work together with strangers.
## What's next for Dekh\_Bhai
It can be very effective in third-world countries.
|
## Inspiration 🌈
Our team has all experienced the struggle of jumping into a pre-existing codebase and having to process how everything works before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, not helping any future collaborators or even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, allowing users to comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient.
## What it does 💻
* Summarizes commits and tracks individual files in each commit, and suggests more accurate commit messages.
* The app automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data.
* Provides a visual timeline of user activity through commits, across all branches of a repository
* Allows filtering commit data by user, highlighting the contributions of individuals
## How we built it ⚒️
The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js, which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a GitHub application that users must add to their accounts when signing up. This allowed us to authenticate not only as our application on the backend, but also as the end user who grants access to the app.
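The summarization itself runs in their Express backend; purely to illustrate the technique, here is a minimal Python sketch of asking an OpenAI chat model to suggest a commit message from a diff (the model name and prompt are assumptions, not their actual configuration):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_commit_message(diff: str) -> str:
    """Ask the model for a concise, imperative-mood commit message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Suggest a concise, imperative commit message for this diff."},
            {"role": "user", "content": diff[:8000]},  # crude token-limit guard
        ],
    )
    return response.choices[0].message.content.strip()
```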
## Challenges we ran into ☣️☢️⚠️
Originally, we wanted to use an open-source LLM like LLaMA, since we were parsing through a lot of data, but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also learned to use new technologies like D3.js, the GitHub API, and Prisma; honestly, nearly everything was new for some of us.
## Accomplishments that we're proud of 😁
The user interface is so slay, especially the timeline page. The features work!
## What we learned 🧠
Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally
## What's next for GitInsights
* Filter by tags, more advanced filtering and visualizations
* Adding webhooks to the github repository to enable automatic analysis and real time changes
* Implementing CRON background jobs, especially with the analysis the application needs to do when it first signs on an user, possibly done with RabbitMQ
* Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (i.e., package.json, package-lock.json, `__pycache__`).
|
losing
|
## Inspiration
We wanted to truly create a well-rounded platform for learning investing, where transparency and collaboration are of the utmost importance. With the growing influence of social media on the stock market, we wanted to create a tool that auto-generates a list of recommended stocks based on their popularity. This feature is called Stock-R (coz it 'stalks' the social media... get it?)
## What it does
This is an all-in-one platform where a user can find all the necessary stock-market-related resources (websites, videos, articles, podcasts, simulators, etc.) under a single roof. New investors can also learn from more experienced investors on the platform through the chatrooms or public stories. The Stock-R feature uses natural language processing and sentiment analysis to generate a list of the most popular and most talked-about stocks on Twitter and Reddit.
## How we built it
We built this project using the MERN stack. The frontend is created using React; Node.js and Express were used for the server; and the database was hosted in the cloud using MongoDB Atlas. We used various Google Cloud APIs, such as Google authentication, Cloud Natural Language for the sentiment analysis, and App Engine for deployment.
For the stock sentiment analysis, we used the Reddit and Twitter APIs to parse their respective social media platforms for instances where a stock/company was mentioned; each instance was given a sentiment value via the IBM Watson Tone Analyzer.
For Reddit, popular subreddits such as r/wallstreetbets and r/pennystocks were parsed for the top 100 submissions. Each submission's title was compared against a list of 3,600 stock tickers for a mention, and if one was found, the submission's comment section was passed through the Tone Analyzer. Each comment was assigned a sentiment rating, the goal being to garner an average sentiment for the parent stock on a given day.
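A condensed sketch of that Reddit pass using PRAW, with `tone_score` as a hypothetical wrapper around the Watson Tone Analyzer that maps a comment to a number in [-1, 1]:

```python
import praw  # Reddit API wrapper

def daily_stock_sentiment(reddit: praw.Reddit, tickers: set, tone_score) -> dict:
    """Average a per-stock sentiment over the day's top submissions."""
    totals, counts = {}, {}
    for sub in reddit.subreddit("wallstreetbets+pennystocks").top(limit=100):
        mentioned = [t for t in tickers if t in sub.title.upper().split()]
        if not mentioned:
            continue
        sub.comments.replace_more(limit=0)  # drop "load more comments" stubs
        scores = [tone_score(c.body) for c in sub.comments.list()[:50]]
        if not scores:
            continue
        avg = sum(scores) / len(scores)
        for t in mentioned:
            totals[t] = totals.get(t, 0.0) + avg
            counts[t] = counts.get(t, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}
```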
## Challenges we ran into
In terms of the chat application interface, integration between the chat application and the main dashboard hub was a major issue, as it was necessary to carry the user's credentials forward without having them re-login to their account. We resolved this by producing a new chat application that didn't require credentials, just a username for the chatroom. We deployed this chat application independently of the main platform with a microservices architecture.
On the back-end sentiment analysis, we ran into the issue of efficiently storing the comments parsed for each stock, as the program iterated over hundreds of posts and commonly collected further data on an already-parsed stock. We resolved this by locally generating an average sentiment for each post and assigning it to a dictionary key-value pair. If sentiment scores were generated for multiple posts, the averages were added to the existing value.
## Accomplishments that we're proud of
## What we learned
A few of the components that we were able to learn and touch base one were:
* REST APIs
* Reddit API
* React
* NodeJs
* Google-Cloud
* IBM Watson Tone Analyzer
* Web Sockets using Socket.io
* Google App Engine
## What's next for Stockhub
## Registered Domains:
* stockhub.online
* stockitup.online
* REST-api-inpeace.tech
* letslearntogether.online
## Beginner Hackers
This was the first hackathon for 3 of the 4 hackers on our team.
## Demo
The app is fully functional and deployed using the custom domain. Please feel free to try it out and let us know if you have any questions.
<http://www.stockhub.online/>
|
## Inspiration
Students often do not have a financial background and want to begin learning about finance, but the sheer number of resources online makes it difficult to know which articles are good to read. We thought the best way to tackle this problem was to use a machine learning technique known as sentiment analysis to determine the tone of articles, allowing us to recommend more neutral options to users and provide a visual view of the different articles available, so that users can make more informed decisions about what they read.
## What it does
This product is a web-based application that performs sentiment analysis on a large set of articles to help users find biased or unbiased articles. We also offer three data visualizations for each topic: an interactive graph showing the distribution of sentiment scores across articles, a heatmap of the sentiment scores, and a word cloud showing common keywords among the articles.
## How we built it
Around 80 unique articles from 10 different domains were scraped from the web using Scrapy. This data was then processed with the help of Indico's machine learning API, which gave us the tools to perform sentiment analysis on all of our articles, the main feature of our product. We further used the API's summarize feature to create shorter descriptions of each article for our users. The Indico API also powers the other two data visualizations. The heatmap, created in Tableau, uses the sentiment_hq scores to better visualize and compare the differences in sentiment between articles. The word cloud is built on top of Pillow and matplotlib; it takes the keywords generated by the Indico API and displays the most frequent keywords across all articles. The web application is powered by Django with a SQLite database in the backend, Bootstrap for the frontend, and is hosted on a Google Cloud Platform App Engine instance.
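As one concrete piece of that pipeline, the word cloud can be rendered in a few lines with the `wordcloud` package; here `keyword_weights` is an assumed dict of word-to-relevance pairs as returned by the keyword extraction:

```python
import matplotlib.pyplot as plt
from wordcloud import WordCloud

def keyword_cloud(keyword_weights: dict, out_path: str = "cloud.png") -> None:
    """Render keyword relevance weights as a word-cloud image."""
    wc = WordCloud(width=800, height=400, background_color="white")
    wc.generate_from_frequencies(keyword_weights)
    plt.figure(figsize=(10, 5))
    plt.imshow(wc, interpolation="bilinear")
    plt.axis("off")
    plt.savefig(out_path, bbox_inches="tight")
```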
## Challenges we ran into
The project itself was a challenge, since it was our first time building a web application with Django and hosting on a cloud platform. Another challenge arose in data scraping: different domains placed their article titles in different locations and tags, making it difficult to write one scraper that generalized to many websites. On top of that, the data returned by the scraper was not in the right format for us to easily manipulate, so unpacking dictionaries and similar small tasks had to be done along the way. On the data visualization side, no graphics library fit our vision for the interactive graph, so we had to build it on our own!
## Accomplishments that we're proud of
Being able to accomplish the goals that we set out for the project and actually generating useful information in our web application based on the data that we ran through Indico API.
## What we learned
We learned how to build websites using Django, generate word clouds using matplotlib and pandas, host websites on google cloud platform, how to utilize the Indico api and researched various types of data visualization techniques.
## What's next for DataFeels
Lots of improvements could still be made to this project; here are just a few. The scraper required us to manually run the script for every new link, so an automated scraper that builds the correct data structures and pipelines them directly to our website would be much more ideal. Next, we would expand our website to cover not just financial categories but any topic that has articles written about it.
|
## Inspiration
A [paper](https://arxiv.org/pdf/1610.09225.pdf) by Indian Institute of Technology researchers described how stock predictions using sentiment analysis had a higher accuracy rate than those analyzing previous trends. We decided to implement that idea and create a real-time, self-updating web app that could visually show how the public feels about the big-name stocks. What better way, then, than to use the most popular and relatable images on the web: memes?
## What it does
The application retrieves text content from Twitter, performs sentiment analysis on tweets and generates meme images based on the sentiment.
## How we built it
The whole implementation is divided into four parts: scraping data, processing data, analysing data, and visualizing data. For scraping, we planned to use Python data-scraping libraries, targeting websites where users are active and speak their own minds, since we wanted unbiased, representative data for a more accurate result. For processing, since scraping websites yields a lot of noise and we want our data to be concise and fast to feed to our algorithm, we planned to use regular expressions to create a generic template that strips out all the emoticons.
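A minimal sketch of the kind of cleaning template we mean (the exact patterns here are illustrative, not the production regexes):

```python
import re

def clean_tweet(text: str) -> str:
    """Strip URLs, mentions, hashtags, emoji and other non-ASCII symbols
    so the sentiment model sees only plain words."""
    text = re.sub(r"https?://\S+", " ", text)        # links
    text = re.sub(r"[@#]\w+", " ", text)             # mentions and hashtags
    text = text.encode("ascii", "ignore").decode()   # emoji / symbols
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("$TSLA to the moon 🚀🚀 @elonmusk https://t.co/xyz"))
# -> "$TSLA to the moon"
```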
## Challenges we ran into
We encountered some technical, architectural, and timing issues. In terms of technical problems, when we tried to scrape data from Twitter, we ran into noise issues. To clarify, a lot of users use emoticons and uncommon symbols in their tweets, and that information does not help us determine how users actually react to things. To solve this, we came up with the idea of using regular expressions to form a template that scrapes only useful data. However, given the limited time at a hackathon, we increased efficiency by using Twitter's Search API instead. Furthermore, we realized towards the end of the project that the MemeAPI had been discontinued and that it was not possible to generate memes with it.
## Accomplishments that we're proud of
* Designing the project based on the mechanism of multi servers
* Utilizing Google Cloud Platform, Twitter API, MemeAPI
## What we learned
* Google Cloud Platform, especially the Natural Language and Vision APIs
* AWS
* React
## What's next for $MMM
* Getting real time big data probably with Spark
* Including more data visualization method, possibly with D3.js
* Designing a better algorithm to find memes reflecting the sentiment of the public towards the company
* Creating more dank memes
|
partial
|
## Inspiration
Over the past year I'd encountered plenty of Spotify-related websites that would list your stats, but none of them allowed me to compare my taste with my friends', which I find to be the most fun aspect of music. So, for this project I set out to make a website that would let users compare their music tastes with their friends.
## What it does
Syncify will analyze your top tracks and artists and then convert that into a customized image for you to share on social media with your friends.
## How we built it
The main technology is a Node.js server that runs the website and interacts with the Spotify API. The information is then sent to a Python script, which takes your unique Spotify information and generates a personalized image containing it, along with a QR code that encodes further information.
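A condensed sketch of the image-generation step using the `qrcode` and Pillow packages (the layout coordinates and colors here are placeholders, not the real design):

```python
import qrcode
from PIL import Image, ImageDraw, ImageFont

def make_share_card(top_artists: list, profile_url: str, out_path: str = "card.png"):
    """Compose a shareable card: top artists as text plus a QR code
    that encodes the user's profile link."""
    card = Image.new("RGB", (600, 800), "#121212")
    draw = ImageDraw.Draw(card)
    font = ImageFont.load_default()
    for i, artist in enumerate(top_artists[:5]):
        draw.text((40, 60 + 50 * i), f"{i + 1}. {artist}", fill="#1DB954", font=font)
    # get_image() unwraps the PIL image from qrcode's wrapper object.
    qr = qrcode.make(profile_url).get_image().resize((200, 200))
    card.paste(qr, (200, 520))
    card.save(out_path)
```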
## Challenges we ran into
* Installing Node.js took too long, with various compatibility issues
* Getting the Spotify API to work was a major challenge because it didn't integrate smoothly with Node.js
* Generating the QR code, as well as manipulating the image to include personalized text, required multiple Python packages and approaches
* Putting the site online was incredibly difficult because there were so many compatibility and package-installation issues; given my inexperience with hosting sites, I had to learn that from scratch
## Accomplishments that we're proud of
Everything I did today was completely new to me, and I'm proud that I was able to learn the skills I did and not give up, despite how tempting it was. Being able to utilize the APIs, learn Node.js, and develop some web-hosting skills felt really impressive given how much I struggled with them throughout the hackathon.
## What we learned
I learnt a lot about documenting code, how to search for help, what approach to take to my workflow, and of course some of the technical skills.
## What's next for Syncify
I plan on putting Syncify online so it's available for everyone, finishing the feature that lets users determine how compatible their music tastes are, and redesigning the shareable image so the QR code is less obtrusive to the design.
|
## Inspiration
What inspired the idea was terrible gym music and the thought of automatic music selection based on the tastes of people in the vicinity. Our end goal is to sell a hosting service that plays music people in a local area actually want to listen to.
## What it does
The app has two parts. The client side connects to Spotify and allows our app to collect users' tokens, user IDs, emails, and top played songs. These values are stored in a Mongoose database, with the user ID and top songs being the main values needed. The host side can control the location and the radius it wants to cover. The server is then populated with nearby users, and their top songs are added to the host account's playlist. The songs most commonly added to the playlist have a higher chance of being played.
This app could be used at parties to avoid arguments over songs, at retail stores to play songs that cater to specific groups, at weddings, or at all kinds of social events: inherently, an automatic DJ catering to the tastes of people in an area.
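The app itself is built on the MERN stack (see below); purely to illustrate the weighting mechanic, here is a short Python sketch:

```python
import random
from collections import Counter

def pick_next_song(nearby_top_songs: list) -> str:
    """Choose the next track: a song that appears in several nearby
    users' top lists is proportionally more likely to be played."""
    counts = Counter(song for user_songs in nearby_top_songs for song in user_songs)
    songs, weights = zip(*counts.items())
    return random.choices(songs, weights=weights, k=1)[0]

pick_next_song([["Levitating", "Blinding Lights"],
                ["Blinding Lights", "As It Was"]])
```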
## How we built it
We began by planning and fleshing out the app idea; from there, the tasks were split into four sections: location, front end, Spotify, and database. At this point we decided to use React Native for the mobile app and Node.js for the backend. After getting started, the help of the mentors and sponsors was crucial; they showed us the many different JS libraries and APIs available to make life easier. Programming full-stack MERN was a first for everyone on this team. We all hoped to learn something new and create something cool.
## Challenges we ran into
We ran into plenty of problems: many syntax errors and plenty of bugs. At the same time, compatibility between the different APIs and libraries had to be maintained, along with the general stress of finishing on time. In the end, we are happy with the product that we made.
## Accomplishments that we are proud of
Learning something we were not familiar with and making it this far into our project is a feat we are proud of.
## What we learned
Learning the minutiae of JavaScript development was fun. It was thanks to the mentors' assistance that we were able to resolve problems and develop efficiently enough to finish. The versatility of JavaScript was surprising; the number of things it can interact with and the immense catalog of open-source projects were staggering. We definitely learned plenty... now we just need a good sleep.
## What's next for SurroundSound
We hope to add more features and take this application to its full potential. We would make it as autonomous as possible, with seamless location-based switching and database logging. Being able to collect proper user information would be a benefit for businesses. Some features did not make it into the final product, such as voting for the next song on the client side and the ability for both client and host to see the playlist. The host would get more granular control, such as allowing explicit songs, specifying genres, and anything else accessible via the Spotify API, while the client side could be gamified to keep GPS scanning enabled on their devices, for example by collecting points for visiting more areas.
|
## Inspiration
We wanted to make a simple product that sharpens blurry images without a lot of code! This could be used as a preprocessing step for image recognition or a variety of other image processing tasks. It can also be used as a standalone product to enhance old images.
## What it does
Our product takes blurry images and makes them more readable. It also improves IBM Watson's visual recognition functionality. See our powerpoint for more information!
## How we built it
We used Python 3 and the IBM Watson library.
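The write-up doesn't spell out the exact sharpening algorithm; unsharp masking is one standard approach, and a minimal Pillow sketch looks like this:

```python
from PIL import Image, ImageFilter

def sharpen(path_in: str, path_out: str) -> None:
    """Unsharp masking: subtract a blurred copy from the original to
    boost edges, a common way to de-blur images."""
    img = Image.open(path_in)
    sharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
    sharp.save(path_out)
```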
## Challenges we ran into
Processing images takes a lot of time!
## Accomplishments that we're proud of
Our algorithm improves Watson's capabilities by 10% or more!
## What we learned
Sometimes, simple is better :)
## What's next for Pixelator
We could incorporate our product into an optical character recognition system, or try to incorporate our system as a preprocessing step in a pipeline involving e.g. convolutional neural nets to get even greater accuracy with the cost of higher latency.
|
losing
|
## Inspiration
gitpizza was inspired by a late night development push and a bout of hunger. What if you could order a pizza without having to leave the comfort of your terminal?
## What it does
gitpizza is a CLI based on git which allows you to create a number of pizzas (branches), add toppings (files), configure your address and delivery info, and push your order straight to Pizza Hut.
## How I built it
Python is the bread and butter of gitpizza, parsing the provided arguments and using Selenium to automatically navigate the Pizza Hut website.
## Challenges I ran into
Pizza Hut's website is mostly built with Angular, meaning Selenium would retrieve a barebones HTML page that would later be dynamically populated with JavaScript. But Selenium didn't see these changes, so finding elements by IDs and such was impossible. That, along with the generic names and general lack of IDs on the website, meant that my only solution was to physically move the mouse and click on pixel-perfect positions to add toppings and place the user's order.
## Accomplishments that I'm proud of
Just the amount of commands that gitpizza supports. `gitpizza init` to start a new order, `gitpizza checkout -b new-pizza` to create a second pizza, `gitpizza add --left pepperoni` to add pepperoni to only the left half of your pizza, and `gitpizza diff` to see the differences between each side of your pizza. Visit [the repository](https://github.com/Microsquad/gitpizza) for the full list of commands
|
## Inspiration
Our inspiration came from the desire to address the issue of food waste and to help those in need. We decided to create an online platform that connects people with surplus food to those who need it, addressing the problems of food insecurity and food waste, which are significant environmental and economic issues. We also hoped to highlight the importance of community-based solutions, where individuals and organizations can come together to make a positive impact, and we believed in the power of technology to create innovative solutions to social issues.
## What it does
Users can create posts about their surplus perishable food (along with the expiration date and time), and other users can find those posts, contact the poster, and come pick up the food. We think of it as analogous to Facebook Marketplace, but focused on surplus food.
## How we built it
We used React + Vite for the frontend and Express + Node.js for the backend. For infrastructure, we used Cloudflare Pages for the frontend and Microsoft Azure App Service for backend.
## Security Practices
#### Strict repository access permissions
(Some of these were lifted temporarily to quickly make changes while working with the tight deadline in a hackathon environment):
* Pull Request with at least 1 review required for merging to the main branch so that one of our team members' machines getting compromised doesn't affect our service.
* Reviews on pull requests must be after the latest commit is pushed to the branch to avoid making malicious changes after a review
* Status checks (build + successful deployment) must pass before merging to the main branch to avoid erroneous commits in the main branch
* PR branches must be up to date with the main branch to merge to make sure there are no incompatibilities with the latest commit causing issues in the main branch
* All conversations on the PR must be marked as resolved to make sure any concerns (including security) concerns someone may have expressed have been dealt with before merging
* Admins of the repository are not allowed to bypass any of these rules to avoid accidental downtime or malicious commits due to the admin's machine being compromised
#### Infrastructure
* Use Cloudflare's CDN (able to mitigate the largest DDoS attacks in the world) to deploy our static files for the frontend
* Set up SPF, DMARC and DKIM records on our domain so that someone spoofing our domain in emails doesn't work
* Use Microsoft Azure's App Service for CI/CD to have a standard automated procedure for deployments and avoid mistakes as well as avoid the responsibility of having to keep up with OS security updates since Microsoft would do that regularly for us
* We worked on using DNSSEC for our domain to avoid DNS-related attacks but domain.com (the hackathon sponsor) requires contacting their support to enable it. For my other projects, I implement it by adding a DS record on the registrar's end using the nameserver-provided credentials
* Set up logging on Microsoft Azure
#### Other
* Use environment variables to avoid disclosing any secret credentials
* Signed up with Github dependabot alerts to receive updates about any security vulnerabilities in our dependencies
* We were in the process of implementing an Authentication service using an open-source service called Supabase to let users sign in using multiple OAuth methods and implement 2FA with TOTP (instead of SMS)
* For all the password fields required for our database and Azure service, we used Bitwarden password generator to generate 20-character random passwords as well as used 2FA with TOTP to login to all services that support it
* Used SSL for all communication between our resources
## Challenges we ran into
* Getting the Google Maps API to work
* Weird errors deploying on Azure
* Spending too much time trying to make CockroachDB work. It seemed to require certificates for connections even in testing, and it seemed like their docs for using Sequelize with their DB had not been updated since this requirement was put in place.
## Accomplishments that we're proud of
Winning the security award by CSE!
## What we learned
We learned not to underestimate the amount of work required, and to plan better next time.
Meanwhile, maybe go to fewer activities, though they are super fun and engaging! Don't get us wrong, we did not regret doing them! XD
## What's next for Food Share
Food Share is built within a limited time. Some implementations that couldn't be included in time:
* Location of available food on the interactive map
* More filters for the search for available food
* Accounts and authentication method
* Implement Microsoft Azure live chat called Azure Web PubSub
* Cleaner UI
|
## 💡 Inspiration
A common problem among post-secondary students is food. When you leave home for the first time and no longer have your parents to feed you, maintaining a balanced diet and keeping track of freshness can be a chore. Simply scan the barcode on food items you purchase to add their data straight to your device and view them in your "virtual fridge".
In addition, having volunteered for Food for Peace, we noticed that many institutions and individuals have soon-to-expire products that could be used to cook food for those in need! So when you see a product about to expire, notify the charities and food banks around you!
## 🍽️ What it does
* Scan a barcode or upload an image of one to your app, and watch as useful health data is **summarized and visualized** right before your eyes!
* Integrated with the OpenFoodFacts database, PizzaMind will display some **key nutritional facts** in an easy-to-understand way, along with a nutritional score based on the food's overall healthiness.
* Add foods to your shelf to track best before/expiry dates so you can **avoid food waste**.
* **Donate to local food banks** around you if you're unable to finish food!
## ⚙️ How we built it
We initially wrote the backend in Python using the Flask framework, and eventually introduced React for a more polished front end. We integrated our system with Auth0 for user logins and MongoDB for data storage, and set up additional functionality in JavaScript and HTML. We used various Python packages, such as Pillow and pyzbar, to set up the scanning and basic functionality of our system. We maintained the project and our various developments through a GitHub repository.
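A condensed sketch of the scan-and-lookup path, using pyzbar to decode the barcode and the public OpenFoodFacts product endpoint (error handling trimmed; the response parsing is simplified):

```python
import requests
from PIL import Image
from pyzbar.pyzbar import decode

def scan_and_lookup(image_path: str) -> dict:
    """Decode the barcode in an uploaded image, then fetch nutrition
    facts from OpenFoodFacts."""
    codes = decode(Image.open(image_path))
    if not codes:
        raise ValueError("no barcode found")
    ean = codes[0].data.decode("utf-8")
    resp = requests.get(f"https://world.openfoodfacts.org/api/v0/product/{ean}.json")
    resp.raise_for_status()
    return resp.json().get("product", {}).get("nutriments", {})
```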
### ✨ Stack
* React for frontend
* Bootstrap to make it mobile responsive
* Webfontloader library for the gorgeous fonts
* Mapbox API for showing nearby charities and food banks
* Auth0 for user functionality
## 🥊 Challenges we ran into
While we had some familiarity with some of the frameworks and languages used, we had to learn many new skills and software, such as MongoDB and Flask, in order to get our system to be fully functional. Additionally, the implementation of computer vision in order to scan barcodes and extract data was challenging, as despite the existence of packages within Python and JavaScript that could help with this, cleaning up and storing the data properly proved to be a difficult task.
## 🏆 Accomplishments that we're proud of
Having completed the barcode scanning and uploading system and fully integrating it with MongoDB, which none of us were overly familiar with at the beginning, was very satisfying once completed. We are also happy with the idea to expand our product going forward for social justice, as we had the idea to use this project to assist local food banks (more on that in the What's Next section).
## 📖 What we learned
This project allowed us to gain a lot of insight into the development of major web applications. Though ideally our app would be mobile-based, a web app was easier to figure out in 36 hours and easier to demonstrate live. This major project consisted of all three of us alternating between frontend and backend development as needed, so we learned a lot about the full-stack process and how to integrate all the various components of our system. We also learned about various useful, free online services, such as Auth0 and MongoDB, that can be implemented into our system to aid in critical functionality. These services, despite making some of our work easier, still took much time, effort, and trial and error to learn well enough to use in our project. We also learned about designing user-friendly interfaces and branding aesthetics, as coming up with ideas for the frontend led us to really dive into what kind of an app we wanted to create and how we could make it both functional and usable.
## 🌎 What's next for PizzaMind
Approximately **2.3 million tonnes of food went to waste in Ontario due to the general public** in 2020 (Gov't of Canada, 2020). We would like to implement a system where users can donate food near expiry to local food banks that are willing to accept it. This would function like a sort of reverse UberEats system, where the user sends out the request, a food bank responds and can send a representative to the user's location to collect the food item(s). This way, further food waste is prevented in the event that a user realizes they are unable to use up a food item before its expiry; this also allows the food to go towards a good cause.
## Sources Cited:
Government of Canada Publications, Government of Canada, 2020. *National Waste Characterization Report: The Composition of Canadian Residual Municipal Solid Waste*. <https://publications.gc.ca/collections/collection_2020/eccc/en14/En14-405-2020-eng.pdf>.
|
partial
|
## Inspiration
Because of COVID-19 and the holiday season, we are feeling increasingly guilty over the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants.
We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, the raw mass of a carbon footprint is too abstract and means little to average consumers. After calculating our footprints, we would feel guilty about the carbon consumption caused by our lifestyles and maybe, maybe donate once to offset the guilt inside us.
The problem is, climate change cannot be addressed by a single contribution because it is a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term.
## What it does
We built a Google Chrome extension to track the user's Amazon purchases and determine the carbon footprint of each product in real time, using all available variables scraped from the page, including product type, weight, distance, and shipping options. We set up Google Firebase to store users' account information and purchase history, and created a gaming system in the backend to track user progression, achievements, and pet status.
## How we built it
We created the front end using React.js, developed our web scraper in JavaScript to extract the Amazon information, and used Netlify to deploy the website. We developed the back end in Python using Flask, storing our data in Firestore, calculating shipping distance with Google's Distance Matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes and salts to store passwords securely in the cloud.
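A minimal sketch of the salted-hash scheme described above (note: a production system would normally prefer a slow KDF such as bcrypt or argon2 over plain SHA-256):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Hash a password with a random 16-byte salt using SHA-256."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored salt and digest."""
    candidate = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return hmac.compare_digest(candidate, digest)
```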
## Challenges we ran into
This is the first time developing a web app for most of us, since our backgrounds are in Mechatronics Engineering and Computer Engineering.
## Accomplishments that we're proud of
We are very proud that we are able to accomplish an app of this magnitude, as well as its potential impact on social good by reducing Carbon Footprint emission.
## What we learned
We learned about utilizing Google Cloud Platform and integrating the front end and back end to make a complete web app.
## What's next for Purrtector
Our mission is to build tools that gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit, and we would welcome collaboration with third parties to offer additional perks and discounts to users who reduce carbon emissions by unlocking designated achievements with their pet. This would add incentives towards a carbon-neutral lifestyle on top of the emotional attachment to their pet.
## Domain.com Link
<https://purrtector.space> Note: We weren't able to register this via domain.com due to the site errors but Sean said we could have this domain considered.
|
**Recollect** is a $150 robot that scans, digitizes, and analyzes books with zero intervention. Once a book is placed on the stand, Recollect's robotic arm delicately flips the pages while a high-resolution camera captures each page. The images are sent to a website which merges the images into a PDF and creates AI summaries and insights of the document.
## Why build this?
Only 12% of all published books have been digitized. Historical records, ancient manuscripts, rare collections. Family photo albums, old journal entries, science field notes. Without digitization, centuries of accumulated wisdom, cultural treasures, personal narratives, and family histories threaten to be forever lost to time.
Large-scale digitization currently requires highly specialized equipment and physical personnel to manually flip, scan, and process each page. Oftentimes, this is simply not practical, resulting in many books remaining in undigitized form, which necessitates careful, expensive, and unsustainable transportation across various locations for analysis.
## How we built it
*Hardware:*
Recollect was made with easy-to-fabricate materials, including 3D-printed plastic parts, laser-cut acrylic and wood, and cheap off-the-shelf electronics. A book rests at a 160-degree angle, optimal for holding the book naturally open while minimizing distortions. The page presser drops onto the book, flattening it to further minimize distortions. After the photo is taken, the page presser is raised, then a two-degree-of-freedom robotic arm flips the page. A lightly adhesive pad attaches to the page, and then one of the joints rotates the page. The second joint separates the page from the adhesive pad, and the arm returns to rest. The scanner was designed to be adaptable to a wide range of books, up to 400 mm tall with a 250 mm page width, with easy adjustments to the arm joints and range of motion to accommodate a variety of books.
*Software:*
Image processing:
On the backend, we leverage OpenCV to identify page corners, rescale images, and sharpen colors to produce clear images. These images are processed with pre-trained Google Cloud Vision API models to enable optical character recognition of handwriting and unstructured text. The data is saved into a Supabase database so users can access their digital library from anywhere.
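A condensed sketch of that flow (the OpenCV step here is a simple sharpening pass; the real corner detection and rescaling are omitted):

```python
import cv2
from google.cloud import vision

def ocr_page(path: str) -> str:
    """Clean up a captured page with OpenCV, then run Cloud Vision's
    document OCR over it."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Mild unsharp mask before OCR.
    sharp = cv2.addWeighted(gray, 1.5, cv2.GaussianBlur(gray, (0, 0), 3), -0.5, 0)
    _, buf = cv2.imencode(".png", sharp)
    client = vision.ImageAnnotatorClient()
    response = client.document_text_detection(image=vision.Image(content=buf.tobytes()))
    return response.full_text_annotation.text
```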
Webpage and cloud storage:
The front end is a Vercel-deployed web app built with Bun, Typescript, Chakra, Next.js, and React.js.
## Challenges we ran into
We ran into challenges involving getting the perfect angle for the robotic arm to properly stick to the page. To fix this, we had to modify the pivot point of the arm’s base to be in line with the book’s spine and add a calibration step to make it perfectly set up for the book to be scanned. Our first version also used servo motors with linkages to raise the acrylic page presser up and down, but we realized these motors did not have enough torque. As a result, we replaced them with DC motors and a basic string and pulley system which turned out to work surprisingly well.
## Accomplishments that we're proud of
This project was a perfect blend of each team member’s unique skill sets: Lawton, a mechanical engineering major, Scott, an electrical and systems engineer, Kaien, an AI developer, and Jason, a full-stack developer. Being able to combine our skills in this project was amazing, and we were truly impressed by how much we were able to accomplish in just 24 hours. Seeing this idea turn into a physical reality was insane, and we were able to go beyond what we initially planned on building (such as adding summarization, quotation, and word cloud features as post-processing steps on your diary scans). We’re happy to say that we’ve already digitized over 100 pages of our diaries through testing.
## What we learned
We learned how to effectively divide up the project into several tasks and assign it based on area of expertise. We also learned to parallelize our work—while parts were being 3D-printed, we would focus on software, design, and electronics.
## What's next for Recollect
We plan to improve the reliability of our system to work with all types of diaries, books, and notebooks, no matter how stiff or large the pages are. We also want to focus on recreating PDFs from these books in a fully digital format (i.e. not just the images arranged in a PDF document but actual text boxes following the formatting of the original document). We also plan to release all of the specifications and software publicly so that anyone can build their own Recollect scanner at home to scan their own diaries and family books. We will design parts kits to make this process even easier. We will also explore collaborating with Stanford libraries and our close communities (friends and family). Thanks to Recollect, we hope no book is left behind.
|
## Source Code
<https://github.com/khou22/SoFly-Scanner>
## Inspiration
The motivation for our application came when we realized how much our college uses flyers to advertise events. From dance recitals, to scientific talks, events are neatly summarized and hung on campus in visible areas. A huge part of our sense of community comes from these events, and as excursions into Princeton township have shown us, events planned in non-centralized communities rely on flyers and other written media to communicate activities to others.
Both of us have fond memories of attending community events growing up, and we think (through some surveying of our student body) that decreased attendance at such events is due to a few factors. (1) People forget. It's not a flyer they can always take with them, so what seems instantly exciting soon fades from their memory. (2) It is not digital, in a world where everything else is.
## What it does
Our vision is an application that can allow a user to snap a picture of a flyer, and have their phone extract relevant information, and make a calendar event based on that. This will allow users to digitize flyers, and hopefully provide a decentralized mechanism for communities to grow close again.
## How I built it
Our application uses optical character recognition techniques (along with Otsu's method to preprocess the picture and an exposure and alignment adjustment algorithm) to extract a dump of recognized text. This text is error-prone and quite messy, so we use canonical natural language processing algorithms to tokenize it and "learn" which terms are important. The machine learning component of this project is a naive Bayesian classifier, which categorizes and weights these terms for (as of now) internal use. This is combined with a "loose NFA" implementation (we coined the term to describe an overly general regex with multiple matches) whose matches are processed by an algorithm that determines the most probable one. From the flyers we extract the date, time, location, and our best guess at the title of the text. We made a design choice to limit the time our OCR takes, which leads to worse holistic text recognition but still allows us to extract these fields using our NLP methods.
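As an illustration of the preprocessing step, Otsu's method in OpenCV picks a global threshold that best separates ink from paper before the OCR pass (a simplified sketch, not the full exposure/alignment pipeline):

```python
import cv2

def preprocess_flyer(path: str):
    """Binarize a flyer photo with Otsu's method so the OCR stage sees
    high-contrast text on a clean background."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)  # denoise before thresholding
    # Otsu chooses the threshold that minimizes intra-class variance.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```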
## Challenges I ran into
There were a ton of challenges working on this. Optical character recognition, machine learning, and natural language processing are all open fields of research, and our project drew on all of them. The debugging process was brutal, especially since, given their nature, bad inputs would mess up even the best-written code.
## Accomplishments that I'm proud of
This was an ambitious project, and we brought it to the finish line. We are both very proud of that.
## What I learned
Team chemistry is a huge part of success. If my partner and I were any less compatible this would definitely not have worked.
## What's next for SoFly Scanner
We need more time to train our ML algorithm, and want to give it a wider geographic functionality. Additionally, we want to incorporate more complex image processing techniques to give our OCR the best chance of success!
|
partial
|
## Inspiration
In the status quo, the children of our societies face challenges that were never thought of as problems in the first place, whether that be vulnerable kids taken advantage of through the internet, cyberbullying with no repercussions, low self-esteem due to reduced social interaction, or the shortage of good mentors who can help them without judging them at every issue. While the solution everyone talks about is educating the children, it is not as easy or approachable as it seems.
## What it does
**Entertainment through education**, not education through entertainment. Our team strongly believes that learning can be a lifelong motto if it is done in the correct manner. Many of the educational games we see today can be quite frustrating and unimaginative, and some of the lessons children need today can be quite unentertaining to them. I mean, who here thinks that cybersecurity is a great leisure-time topic for kids?
We created a Virtual Reality game that is quite fun to play for the sake of entertainment. On the other positive side, it can teach them many great lessons as they go through the game. The main VR game that we made for this hackathon consists of two parts:
(a) A Wild West showdown where you face a fierce opponent and have to slash through all the ammo aimed at you. It is quite a fast-paced game, and your reaction time is constantly challenged. As players slash through the boxes, hints drop randomly. The hints might not mean much at the time, but they can be very useful for the game that comes next!
For kids with special needs, we considered different difficulty levels to cater to them. Based on how well they perform initially, the opponent gets harder or easier as time progresses. In VR games, perceiving depth can be difficult for many players. We figured this could be solved by making the boxes/shells glow when they are within slashing range, using sounds that get sharper as an object approaches hitting range, or simply slowing the objects down.
(b) The second game is something like the glass-crossing level of Squid Game. Each player has to cross 10 different levels of glass. On each level, they are asked a question with a timer. If they answer the question in time, they move forward to the next glass level. If they fail, the glass breaks and they lose the game. The hints from the previous game can be quite useful for answering the questions at this stage. The interesting part is that the questions asked here mostly teach useful topics. For this hackathon, we focused on cybersecurity and how children can keep themselves safe, but the questions can be swapped out for many different niches.
## How we built it
The Virtual Reality (VR) game is built using the Unity game engine. The backend is programmed through C#. Some of the 3D models were designed using blender.
## Challenges we ran into
(1) The main challenge we faced was the shortage of skilled individuals who were either proficient in or interested in working on the VR side. That being said, three of us strangers managed to come together over the good that this project can bring, and we had a wonderful time learning and implementing different aspects of VR development. The team dedication was really something worthwhile, and I, Azwad, as the team lead, am proud of my team. Whether they knew a concept or not, both of my teammates were eager to contribute and figure out how to get the job done.
(2) Time is of the essence. After coming all the way from Waterloo, I slept a total of two hours over the past two days, and I am craving to do more for the project! Sure, trying to make two well-thought-out VR game scenes is a big task to aim for, but then again, if we don't aim big, can we really go anywhere? I have always loved the term "The Greater Fool." And it is true: this world is built by people who are foolish enough to think they can change it while the whole world thinks there is no hope.
## Accomplishments that we're proud of
## What we learned
## What's next for Untitled
|
## Inspiration
In today's busy world, we recognized a crucial need: leveraging technology to empower people to manage their nutrition effectively and efficiently, especially for homemade meals. Our health is our most valuable asset, yet it's often challenging to keep track of what we eat, particularly when we're cooking at home, eating at a restaurant, or don't have access to nutritional labels.
We set out to solve two common problems faced by many, especially fitness enthusiasts and home cooks:
1. Accurately tracking the nutritional content of homemade meals quickly and conveniently
2. Understanding proper portion sizes without the hassle of weighing every ingredient
Calculating macros and estimating portions for every meal can be time-consuming and impractical. That's where FoodSense comes in—an innovative web app designed for speed and convenience. It scans your food, provides an instant nutritional breakdown, and helps you understand appropriate portion sizes, all without interrupting your routine. What sets FoodSense apart is its convenience. Being a web app, you can simply open your laptop or desktop computer and scan what you're eating without needing to get up or reach for your phone. This is perfect for those working from home or having meals at their desk.
## What it does
FoodSense uses an OpenAI Vision Model and your phone or webcam to scan the food you're preparing, providing real-time feedback on the macronutrients (protein, carbs, fats) and caloric content. It warns you if you're exceeding your macros and helps you stay on track with your fitness goals. After each scan, you receive personalized feedback. The app has the potential to be a complete fitness planner and tracker powered by AI.
## How we built it
We built FoodSense using a Next.js frontend connected to a custom backend through WebSocket technology, prioritizing user experience and performance. The frontend captures a video stream from the user’s device and sends it to the backend. From there, the backend intelligently selects and processes specific frames before passing them to an OpenAI Vision Model for analysis. Once the model processes the frames, it returns a JSON response with detailed information on macronutrients and calories, which is then displayed in real-time on the frontend. To further enhance the user experience, we added a feature where, upon completing a scan, users can receive personalized guidance from their AI coach, providing detailed ingredient information and suggestions based on what they’ve consumed.
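As an illustration of the scan step, a frame selected from the stream can be sent to a vision-capable model roughly as in the sketch below; the model name, prompt, and helper function are our own placeholders, not the project's actual backend code.

```python
import base64
import cv2
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def analyze_frame(frame) -> str:
    """Send one selected video frame to a vision-capable model and
    return its macro estimate as a JSON string."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpeg.tobytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Estimate protein, carbs, fat, and calories for "
                         "the food in this image. Reply as JSON."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```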
## Challenges we ran into
Integrating a performant WebSocket video stream connection was challenging at first. We encountered difficulties with parallel processing and thread management of the video streams, but through persistent problem-solving, we were able to resolve these issues. A major challenge was ensuring the Vision model could accurately recognize various foods and ingredients under different lighting conditions and angles. Additionally, designing a native app-like interface for the web platform across different browsers, while ensuring smooth access to the camera on multiple devices, proved difficult. However, we overcame these obstacles, and the end result is a cross-platform solution that combines the benefits of accessibility with a user-friendly design.
## Accomplishments that we're proud of
We're incredibly proud of getting the app to function smoothly within the tight constraints of the hackathon. The real-time macro detection works with great performance, and we’re excited about how quickly we were able to integrate various technologies to create a seamless user experience. We're also proud of our teamwork—despite the fact that all team members met on Slack without prior personal connections, we quickly adapted to each other’s strengths and work styles, allowing us to collaborate effectively and bring the project to life.
## What we learned
We learned a great deal about building fast, user-friendly apps while improving our understanding of networking, WebSocket technology, and parallel processing. Integrating AI through vision models gave us valuable experience in applying advanced technology to real-world scenarios. We also honed our teamwork and communication skills, quickly adapting to each other's strengths and collaborating efficiently, despite having just met online. Additionally, we focused on balancing functionality with user experience, ensuring the app is both efficient and intuitive. Hosting and deploying real-time applications further enhanced our ability to manage performance across platforms.
## What's next for FoodSense
We believe that FoodSense has the potential to be a game changer. The platform can be greatly expanded to include tracking features and provide valuable insights into how society consumes food. The vast number of images captured by the app can also contribute to improving machine vision models through continuous training. We genuinely believe that with a few more weeks of work, this product is shipable and could have a significant impact on the world, especially in making calorie counting more accurate and accessible.
Given its access to real-time video, FoodSense could even evolve to assist with creating recipes and guiding users through the cooking process with its machine vision capabilities. While we successfully delivered the core functionality during the hackathon, future improvements could include offering personalized meal plans and generating weekly nutrition summaries to help users stay on track with their fitness goals.
|
## Inspiration
The spending behavior of users, especially those aged 15-29, tends toward spending unreasonable amounts on unnecessary things. We want them to have a better financial life, help them understand their expenses, and guide them toward investing that money in stocks instead.
## What it does
It points out the unnecessary expenses of the user and shows what income that money could have generated had it been invested in stocks instead.
So, the app basically shows you two kinds of investment insight:
1. How much money you would have earned by now if you had invested around six months ago.
2. The most favorable companies to invest in at the moment, based on the Warren Buffett model.
## How we built it
We built a Python script that scrapes the web, analyzes the stock market, and suggests to the user the most promising companies to invest in based on the Warren Buffett model.
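A hedged sketch of what such a screen might look like; the metric names and thresholds below are illustrative stand-ins, since the project's actual screening parameters aren't documented here.

```python
def buffett_screen(companies):
    """Shortlist companies using simple value-investing criteria.

    Each element of `companies` is assumed to be a dict of scraped
    fundamentals, e.g. {"ticker": "ABC", "roe": 0.18, ...}.
    """
    picks = []
    for c in companies:
        if (c["roe"] > 0.15                 # consistently high return on equity
                and c["debt_to_equity"] < 0.5   # conservative leverage
                and c["pe_ratio"] < 20):        # not wildly overpriced
            picks.append(c["ticker"])
    return picks
```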
## Challenges we ran into
Initially the web scraping was hard; we tried multiple approaches and different automation software to get the details, but somehow we were not able to incorporate them fully. So we wrote the web scraper code entirely ourselves and set various parameters to shortlist companies for investment.
## Accomplishments that we're proud of
We came up with a good idea for helping people lead a financially better life.
We learned many things on the spot and somehow made them work for satisfactory results, though we think there are many more ways to make this effective.
## What we learned
We learned Firebase, and we learned how to scrape data from sites with complex structures.
Since we are a team of three new members who formed at the hackathon, we had to learn to cooperate with each other.
## What's next for Revenue Now
We can study users' spending behavior and build customized profiles that suit them, guiding them toward the best use of their income and suggesting saving and investment patterns that keep them comfortable.
|
losing
|
## 💡 Inspiration 💡
Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire sheet music within the span of a few seconds! Sometimes, you just want someone to play the piano for you, so we aimed to make a robot that could be your little personal piano player!
This project allows us to bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors.
Ultimately, our project makes music more inclusive and brings people together through shared experiences.
## ❓What it does ❓
Ludwig is your music prodigy. He can read any sheet music you upload, convert it to a MIDI file, turn that into playable notes on the piano scale, and then play each of those notes on the piano with his fingers! You can upload any kind of sheet music and watch the music come to life!
## ⚙️ How we built it ⚙️
For this project, we leveraged OpenCV for computer vision to read the sheet music. The sheet reading goes through a process of image filtering, converting it to binary, classifying the characters, identifying the notes, then exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. In the Raspberry Pi, we developed functions, key mappers, and note mapping systems that allow Ludwig to play the piano effectively.
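The MIDI-to-finger step can be sketched roughly as below; the note-to-channel map, servo angles, and board (a PCA9685-style controller driven through the `adafruit_servokit` library) are assumptions for illustration, not Ludwig's exact wiring.

```python
import time
import mido
from adafruit_servokit import ServoKit  # assumes a PCA9685-style servo board

kit = ServoKit(channels=16)

# Hypothetical mapping from MIDI note numbers to servo channels; the real
# key map covers whichever keys sit under Ludwig's fingers.
NOTE_TO_FINGER = {60: 0, 62: 1, 64: 2, 65: 3}  # C4, D4, E4, F4

def play(midi_path):
    """Walk the MIDI file in real time and press the mapped finger."""
    for msg in mido.MidiFile(midi_path):    # iteration yields real-time delays
        time.sleep(msg.time)
        if msg.type == "note_on" and msg.velocity > 0 \
                and msg.note in NOTE_TO_FINGER:
            ch = NOTE_TO_FINGER[msg.note]
            kit.servo[ch].angle = 30        # press the key
            time.sleep(0.1)
            kit.servo[ch].angle = 90        # lift the finger
```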
## Challenges we ran into ⚔️
We hit a few road bumps along the way. Some major ones included transferring files over SSH, as well as making fingers strong enough to withstand the torque of pressing piano keys. It was also fairly difficult to figure out the OpenCV pipeline for reading sheet music: our model was initially quite slow at reading and converting the notes. However, we were able to learn from the mentors at Hack the North how to speed it up and make it more efficient.
## Accomplishments that we're proud of 🏆
* Got a working robot to read and play piano music!
* File transfer working via SSH
* Conversion from MIDI to key presses mapped to fingers
* Piano melody-playing abilities!
## What we learned 📚
* Working with Raspberry Pi 3 and its libraries for servo motors and additional components
* Working with OpenCV and fine tuning models for reading sheet music
* SSH protocols and just general networking concepts for transferring files
* Parsing MIDI files into useful data through some really cool Python libraries
## What's next for Ludwig 🤔
* MORE OCTAVES! We might add a DC motor with a gearbox, essentially a conveyor belt, which would let the motors move along the piano keyboard to reach more octaves.
* Improved photo recognition for reading accents and BPM
* Realistic fingers via 3D printing
|
# Inspiration
Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO.
# What It Does
To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works:
* **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO.
* **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals.
* **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits.
* **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. The voting is weighted based on the amount lent in a particular DAO.
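The stake-weighted tally itself lives in the Solidity contracts; purely as an illustration of the idea, an off-chain sketch in Python (with names of our own choosing) might look like this:

```python
def tally(votes, stakes):
    """Stake-weighted vote tally.

    votes:  {member_address: True (for) / False (against)}
    stakes: {member_address: USDC amount lent into the DAO}
    Returns (passed, weight_for, weight_against).
    """
    weight_for = sum(stakes[m] for m, v in votes.items() if v)
    weight_against = sum(stakes[m] for m, v in votes.items() if not v)
    return weight_for > weight_against, weight_for, weight_against
```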
# How We Built It
### Backend:
* **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO.
* **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded.
* **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently.
* **Node.js** to handle API integrations and the interface between the blockchain and our frontend.
* **Sepolia** where the smart contracts have been deployed and connected to the web application.
### Frontend:
* **MetaMask** Integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting.
* **React** and **Next.js** for building an intuitive, responsive user interface.
* **TypeScript** for type safety and better maintainability.
* **TailwindCSS** for rapid, visually appealing design.
* **Shadcn UI** for accessible and consistent component design.
# Challenges We Faced, Solutions, and Learning
### Challenge 1 - Creating a Unique Concept:
Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented.
**Solution**:
After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out.
### Challenge 2 - DAO Governance:
Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily.
**Solution**:
We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics.
## Accomplishments that we're proud of
* **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors.
* **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, verified by writing extensive unit and e2e tests.
* **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive.
## What we learned
* **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits.
* **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible.
* **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles.
# What’s Next for DAFP
Looking ahead, we plan to:
* **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups.
* **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve).
* **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance.
|
## Inspiration
We wanted to promote an easy learning system to introduce verbal individuals to the basics of American Sign Language. Often people in the non-verbal community are restricted by the lack of understanding outside of the community. Our team wants to break down these barriers and create a fun, interactive, and visual environment for users. In addition, our team wanted to replicate a 3D model of how to position the hand as videos often do not convey sufficient information.
## What it does
**Step 1** Create a Machine Learning Model To Interpret the Hand Gestures
This step provides the foundation for the project. Using OpenCV, our team created datasets for each of the ASL alphabet hand positions. A video data stream is then started and interpreted using the model trained with TensorFlow and Google Cloud Storage, and the letter is identified.
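A minimal sketch of the live classification loop described above; the model filename, input size, and letter set are assumptions (the static ASL alphabet omits J and Z, which require motion).

```python
import cv2
import numpy as np
import tensorflow as tf

LETTERS = "ABCDEFGHIKLMNOPQRSTUVWXY"  # 24 static ASL letters, no J or Z
model = tf.keras.models.load_model("asl_model.h5")  # hypothetical filename

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize to the model's expected input (assumed 64x64 RGB).
    x = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(x[None, ...], verbose=0)[0]
    print("Predicted letter:", LETTERS[int(np.argmax(probs))])
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```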
**Step 2** 3D Model of the Hand
The Arduino UNO drives a series of servo motors to actuate the 3D hand model. The user can input a desired letter, and the 3D-printed robotic hand interprets it (using the model from step 1) to display the corresponding hand position. Data is transferred over the SPI bus, and the system is powered by a 9V battery for ease of transportation.
## How I built it
Languages: Python, C++
Platforms: TensorFlow, Fusion 360, OpenCV, UiPath
Hardware: 4 servo motors, Arduino UNO
Parts: 3D-printed
## Challenges I ran into
1. The Raspberry Pi camera would overheat and fail to connect, leading us to remove the Telus IoT connectivity from our final project
2. Issues with incompatibilities between Mac and OpenCV/UiPath
3. Issues with lighting and a lack of variety in training data, leading to less accurate results.
## Accomplishments that I'm proud of
* Able to design and integrate the hardware with software and apply it to a mechanical application.
* Create data, train and deploy a working machine learning model
## What I learned
How to integrate simple, low-resource hardware systems with complex machine learning algorithms.
## What's next for ASL Hand Bot
* expand beyond letters into words
* create a more dynamic user interface
* expand the dataset and models to incorporate more
|
winning
|
# Flash Computer Vision®
### Computer Vision for the World
Github: <https://github.com/AidanAbd/MA-3>
Try it Out: <http://flash-cv.com>
## Inspiration
Over the last century, computers have gained superhuman capabilities in computer vision. Unfortunately, these capabilities are not yet empowering everyday people because building an image classifier is still a fairly complicated task.
The easiest tools that currently exist still require a good amount of computing knowledge. A good example is [Google AutoML: Vision](https://cloud.google.com/vision/automl/docs/quickstart) which is regarded as the "simplest" solution and yet requires an extensive knowledge of web skills and some coding ability. We are determined to change that.
We were inspired by talking to farmers in Mexico who wanted to identify ready / diseased crops easily without having to train many workers. Despite the technology existing, their walk of life had not lent them the opportunity to do so. We were also inspired by people in developing countries who want access to the frontier of technology but lack the education to unlock it. While we explored an aspect of computer vision, we are interested in giving individuals the power to use all sorts of ML technologies and believe similar frameworks could be set up for natural language processing as well.
## The product: Flash Computer Vision
### Easy to use Image Classification Builder - The Front-end
Flash Magic is a website with an extremely simple interface. It has one functionality: it takes in a variable number of image folders and gives back an image classification interface. Once the user uploads the image folders, they simply click the Magic Flash™ button. There is a short training process during which the website displays a progress bar. The website then returns an image classifier and a simplistic interface for using it. The user can use this interface (built directly into the website) to upload and classify new images. The user never has to worry about any of the “magic” that goes on in the backend.
### Magic Flash™ - The Backend
The front end’s connection to the backend sets up a training image folder on the server. The name of each folder is the category that the pictures inside of it belong to. The backend takes the folder and transfers it into a CSV file. From this CSV file it creates a [Pytorch.utils.Dataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html) class by inheriting Dataset and overriding three of its methods. When the dataset is ready, the data is augmented using a combination of augmentations: CenterCrop, ColorJitter, RandomAffine, RandomRotation, and Normalize. With these transformations, we ~10x the amount of data that we have for training. Once the data is in a CSV file and has been augmented, we are ready for training.
We import [SqueezeNet](https://pytorch.org/hub/pytorch_vision_squeezenet/) and use transfer learning to adjust it to our categories. What this means is that we erase the last layer of the original net, which was originally trained for ImageNet (1000 categories), and initialize a layer of size equal to the number of categories that the user defined, making sure to accurately match dimensions. We then run back-propagation on the network with all the weights “frozen” in place except for the ones in the last layer. As the model is training, it informs the front-end that progress is being made, allowing us to display a progress bar. Once the model converges, the final model is saved into a file that the API can easily call for inference (classification) when the user asks for prediction on new data.
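The transfer-learning step follows the standard PyTorch pattern for SqueezeNet; the sketch below reflects that pattern rather than our exact training code (newer torchvision versions replace `pretrained=True` with a `weights=` argument).

```python
import torch.nn as nn
from torchvision import models

def build_flash_model(num_classes):
    """Swap SqueezeNet's final conv classifier for one sized to the
    user's categories, freezing everything else."""
    model = models.squeezenet1_1(pretrained=True)
    for p in model.parameters():
        p.requires_grad = False  # freeze the pretrained ImageNet weights
    # SqueezeNet classifies with a 1x1 conv rather than a linear layer,
    # so the new head must match that shape.
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model
```

Only the new head's parameters are passed to the optimizer, so back-propagation updates just the last layer, as described above.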
## How we built it
The website was built using Node.js and JavaScript, and it has three main functionalities: allowing the user to upload pictures into folders, sending the picture folders to the backend, and making API calls to classify new images once the model is ready.
The backend is built in Python and PyTorch. On top of Python we used torch (including torch.nn, torch.optim, and lr\_scheduler), numpy, torchvision (datasets, models, and transforms), matplotlib, and the standard time, os, copy, json, and sys modules.
## Accomplishments that we're proud of
Going into this, we were not sure if 36 hours were enough to build this product. The proudest moment of this project was the successful testing round at 1am on Sunday. While we had some machine learning experience on our team, none of us had experience with transfer learning or this sort of web application. At the beginning of our project, we sat down, learned about each task, and then drew a diagram of our project. We are especially proud of this step because coming into the coding portion with a clear API functionality understanding and delegated tasks saved us a lot of time and helped us integrate the final product.
## Obstacles we overcame and what we learned
Machine learning models are often finicky in their training patterns. Because our application is aimed at users with little experience, we had to come up with a robust training pipeline. Designing this pipeline took a lot of thought and a few failures before we converged on a few data augmentation techniques that do the trick. After this hurdle, integrating a deep learning backend with the website interface was quite challenging, as the training pipeline requires very specific labels and file structure. Iterating the website to reflect the rigid protocol without overcomplicating the interface was thus a challenge. We learned a ton over this weekend. Firstly, getting the transfer learning to work was enlightening, as freezing parts of the network and writing a functional training loop for specific layers required diving deep into the PyTorch API. Secondly, the human-design aspect was a really interesting learning opportunity, as we had to draw the right line between abstraction and functionality. Finally, and perhaps most importantly, constantly meeting and syncing code taught us the importance of keeping a team on the same page at all times.
## What's next for Flash Computer Vision
### Application companion + Machine Learning on the Edge
We want to build a companion app with the same functionality as the website. The companion app would be even more powerful than the website because it would have the ability to quantize models (compression for ml) and to transfer them into TensorFlow Lite so that **models can be stored and used within the device.** This would especially benefit people in developing countries, where they sometimes cannot depend on having a cellular connection.
### Charge to use
We want to make a payment system within the website so that we can scale without worrying about computational cost. We do not want to make a business model out of charging per API call, as we do not believe this will pave a path forward for rapid adoption. **We want users to own their models, this will be our competitive differentiator.** We intentionally used a smaller neural network to reduce hosting and decrease inference time. Once we compress our already small models, this vision can be fully realized as we will not have to host anything, but rather return a mobile application.
|
## Inspiration
The whiteboard or chalkboard is an essential tool in instructional settings - to learn better, students need a way to directly transport code from a non-text medium to a more workable environment.
## What it does
Enables someone to take a picture of handwritten or printed text and converts it directly to code or text in your favorite text editor on your computer.
## How we built it
On the front end, we built an app using Ionic/Cordova so the user could take a picture of their code. Behind the scenes, using JavaScript, our software harnesses the power of the Google Cloud Vision API to perform intelligent character recognition (ICR) of handwritten words. Following that, we applied our own formatting algorithms to prettify the code. Finally, our server sends the formatted code to the desired computer, which opens it with the appropriate file extension in your favorite IDE. In addition, the client handles all scripting of minimization and fileOS.
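The project drives the Vision API from JavaScript; as an illustration of the ICR call itself, the equivalent request looks roughly like this Python sketch (credentials assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS):

```python
from google.cloud import vision

def read_handwriting(image_bytes: bytes) -> str:
    """Run handwritten-text (ICR) detection on a photo of code and
    return the raw recognized text, ready for our formatting pass."""
    client = vision.ImageAnnotatorClient()
    response = client.document_text_detection(
        image=vision.Image(content=image_bytes)
    )
    return response.full_text_annotation.text
```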
## Challenges we ran into
The vision API is trained on text with correct grammar and punctuation. This makes recognition of code quite difficult, especially indentation and camel case. We were able to overcome this issue with some clever algorithms. Also, despite a general lack of JavaScript knowledge, we were able to make good use of documentation to solve our issues.
## Accomplishments that we're proud of
A beautiful spacing algorithm that recursively categorizes lines into indentation levels.
Getting the app to talk to the main server to talk to the target computer.
Scripting the client to display the final result in a matter of seconds.
## What we learned
How to integrate and use the Google Cloud Vision API.
How to build and communicate across servers in JavaScript.
How to interact with native functions of a phone.
## What's next for Codify
It's feasible to increase accuracy by using the Levenshtein distance between words. In addition, we can improve our algorithms to work better with code. Finally, we can add image preprocessing (heightening image contrast, rotating accordingly) to make images more readable to the Vision API.
|
## Inspiration
Many of us, including our peers, struggle with deciding what to cook. We usually have a fridge full of items but are not sure what exactly to make with them. This leads us to eat out or buy even more groceries to follow a recipe.
* We want to be able to use what we have
* Reduce our waste
* Get new and easy ideas
## What it does
The user first takes a picture of the items in their fridge. They can then upload the image to our application. Using computer vision technology, we detect the exact items present in the picture (their fridge). After obtaining a list of the ingredients the user has in their fridge, this data is passed along and processed against a database of 1000 quick and easy recipes.
## How we built it
* We designed the mobile and desktop website using Figma
* The website was developed using JavaScript and node.js
* We use Google Cloud Vision API to detect items in the picture
* This list of items is then processed against a database of recipes
* Best matching recipes are returned to the user
## Challenges we ran into
We ran into a lot of difficulties and challenges while building this web app, most of which we were able to overcome with help from each other and by learning on the fly.
The first challenge we ran into was building and training a machine learning model to apply multi-class object detection to the images the user inputs. This is tricky, as there is no proper dataset containing images of vegetables, fruits, meats, condiments, and other items all together. After various experiments with our own machine learning models built from scratch, we attempted using multiple pre-existing models and tools. We found that the Google Cloud Vision API did the best job of all that was available. Thus, we invested in Google Vision and are currently using their API for our prototype.
The second challenge was getting the correct recipes from the data the AI returned. We use a database of 1000 recipes and set a threshold for the minimum number of matching items (ingredients the user has versus ingredients the recipe requires). Our assumption is that the user already has the basic ingredients such as salt, pepper, butter, oil, etc.
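The matching logic can be sketched like this; the threshold value, staple list, and data shapes are illustrative assumptions:

```python
# Staples assumed to be in every kitchen, so they never block a match.
PANTRY_STAPLES = {"salt", "pepper", "butter", "oil", "water"}

def match_recipes(detected, recipes, threshold=0.8):
    """detected: set of ingredients found in the fridge photo;
    recipes: list of {"name": str, "ingredients": set}.
    Returns recipe names sorted by how much of each recipe is on hand."""
    have = set(detected) | PANTRY_STAPLES
    matches = []
    for r in recipes:
        needed = r["ingredients"]
        coverage = len(needed & have) / len(needed)
        if coverage >= threshold:  # enough of the recipe is already on hand
            matches.append((coverage, r["name"]))
    return [name for _, name in sorted(matches, reverse=True)]
```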
## Accomplishments that we're proud of
* Coming up with an idea that solves a problem every member of our team and many peers we interviewed face
* Using modern artificial intelligence to solve a major part of our problem (detecting ingredients/groceries) from a given image
* Designing a very good-looking and user-friendly UI with an excellent user experience (quick and easy)
## What we learned
Each team member learned a new skill or enhanced a current one during this hackathon, which is what we were here for. We learned to use newer tools, such as Google Cloud and Figma, to streamline our product development.
## What's next for Xcellent Recipes
**We truly believe in our product and its usefulness for customers. We will continue working on Xcellent Recipes with a product launch in the future. The next steps include:**
1. Establishing a backend server
2. Create or obtain our own data for training a ML model for our use case
3. Fine tune recipes
4. Company Launch
|
winning
|
# Inspiration
I recently got attached to Beat Saber, so I thought it'd be fun to build something similar.
# Objective
The objective of the game is to score higher than your opponent. Points are scored if a player triggers a hitbox when a note is in contact with it.
**Points Chart:**
**Green:** Perfect hit! The hitbox was triggered when a note was in full contact, **full points + combo bonus**
**Yellow:** Hitbox was triggered when a note was in partial contact, **partial points**
**Red:** Hitbox was triggered when a note was not in contact, **no points**
**Combo (Bonus Points):**
Combos are achieved when a hitbox is triggered **Green** more than once in a row. Combos add a great amount of bonus to your score and progressively increase in value as the pace of the notes increases.
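A hedged sketch of how a hit might be graded; the distance thresholds and point values here are illustrative, not the game's actual tuning:

```python
def grade_hit(note_y, hitbox_y, radius):
    """Grade a hit by how far the falling note's center is from the
    hitbox center at the moment the key is pressed."""
    d = abs(note_y - hitbox_y)
    if d <= radius * 0.5:
        return "green", 100   # full contact: full points + combo credit
    if d <= radius * 1.5:
        return "yellow", 50   # partial contact: partial points
    return "red", 0           # no contact: no points, combo resets
```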
# Controls & Info
**HitBox:** The blue circles at the bottom of each player's half of the screen
**Notes:** The orange circles that fall from the top of the screen down to the hitboxes
**Player 1 (Left Side):**
Key "A": Triggers the left hitbox
Key "S": Triggers the center hitbox
Key "D": Triggers the right hitbox
**Player 2 (Right Side):**
Key "J: Triggers the left hitbox
Key "K": Triggers the center hitbox
Key "L": Triggers the right hitbox
# What's next for Rhythm Flow
1. Support for tablets. The game is very much playable on the computer, but its mechanics can also be ported to tablets whose touch screens are large enough to use the controls.
2. More game modes. Currently, there is only one game mode, where two people compete directly against each other. I have ideas for other game modes where, instead of competing, two players would have to collaborate to beat the round.
|
## Inspiration
The Riff Off idea comes from the movie series Pitch Perfect. Our game works similarly to the Riff Off in the movie, except players select songs from our song bank and play them to earn points instead of singing.
## What it does
It is a multiplayer mobile application that works on both iOS and Android. It allows players to compete by selecting a song that matches the beat of the previous one to earn points. Players can join the same session using QR codes. The game then requires players to keep switching to songs with a BPM similar to the last one played. The longer a song stays up, the more points that player earns.
## How we built it
We used ionic with an express + mongo backend hosted on an EC2 instance.
## Challenges we ran into
We ran into way too many challenges. One of the major issues we still have is that Android phones have trouble opening the game page; it worked until the last couple of hours. Having multiple devices play the song at the same time was also challenging, and generating scores and syncing them across all players' devices was not easy.
## Accomplishments that we're proud of
* It's pretty
* It doesn't crash like 60% of the time
* As a team of mostly newish hackers we actually finished!!
* Did we mention it's pretty?
## What we learned
For most of our team members, this was our first time using Ionic, which let us learn many new things, like coding in TypeScript.
## What's next for Beat
Get Android to work seamlessly; there remain some minor styling and integration issues. In our initial planning, points were also to be awarded for matching lyrics when coming in, but we did not have enough time to implement that, so our score is currently generated only from time and BPM. The next step would be to include more ways to generate the score for a more accurate point system. A final detail we could add: currently the game does not end. We could set an amount of time for each game, or allow the players to decide.
|
## Inspiration
We aren't musicians. We can't dance. With AirTunes, we can try to do both! Superheroes are also pretty cool.
## What it does
AirTunes recognizes 10 different popular dance moves (at any given moment) and generates a corresponding sound. The sounds can be looped and added at various times to create an original song with simple gestures.
The user can choose to be one of four different superheroes (Hulk, Superman, Batman, Mr. Incredible) and record their piece with their own personal touch.
## How we built it
In our first attempt, we used OpenCV to map the arms and face of the user and measured the angles between body parts to identify a dance move. Although this was successful for a few gestures, more complex gestures like the "shoot" were not ideal for this method. We ended up training a convolutional neural network in TensorFlow with 1000 samples of each gesture, which worked better. The model achieves 98% accuracy on the test dataset.
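For reference, the angle-based first attempt reduces to computing joint angles from tracked keypoints, roughly like this sketch (the function name and inputs are ours):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by keypoints a and c,
    e.g. the elbow angle from shoulder, elbow, and wrist positions."""
    ba = np.asarray(a) - np.asarray(b)
    bc = np.asarray(c) - np.asarray(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```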
We designed the UI using the Kivy library in Python. There, we added record functionality, the ability to choose the music, and the superhero overlay, which was done using dlib and OpenCV to detect facial features and map a static image over them.
## Challenges we ran into
We came in with a completely different idea for the Hack for Resistance Route, and we spent the first day basically working on that until we realized that it was not interesting enough for us to sacrifice our cherished sleep. We abandoned the idea and started experimenting with LeapMotion, which was also unsuccessful because of its limited range. And so, the biggest challenge we faced was time.
It was also tricky to figure out the contour settings and get them 'just right'. To maintain a consistent environment, we even went down to CVS and bought a shower curtain for a plain white background. Afterward, we realized we could have just added a few sliders to adjust the settings based on whatever environment we were in.
## Accomplishments that we're proud of
It was one of our first experiences training an ML model for image recognition and it's a lot more accurate than we had even expected.
## What we learned
All four of us worked with unfamiliar technologies for the majority of the hack, so we each got to learn something new!
## What's next for AirTunes
The biggest feature we see in the future for AirTunes is the ability to add your own gestures. We would also like to create a web app as opposed to a local application and add more customization.
|
partial
|
## Inspiration
Parker was riding his bike down Commonwealth Avenue on his way to work this summer when a car pulled out of nowhere and hit his front tire. Luckily, he wasn't hurt, but he saw his life flash before his eyes in that moment, and it really left an impression on him. (His bike made it out okay as well, other than a bit of tire misalignment!)
As bikes become more and more ubiquitous as a mode of transportation in big cities, with the growth of rental services and bike lanes, bike safety is more important than ever.
## What it does
We designed *Bikeable*, a Boston directions app for bicyclists that uses machine learning to generate directions based on prior bike accidents in police reports. You simply enter your origin and destination, and Bikeable creates a path for you that balances efficiency with safety. While it's comforting to know you're on a safe path, we also incorporated heat maps so you can see the hotspots where bicycle theft and accidents occur and be better informed in the future!
## How we built it
Bikeable is built on Google Cloud Platform's App Engine (GAE) and utilizes the best features of three mapping APIs: Google Maps, HERE, and Leaflet, to deliver directions in one seamless experience. Being built on GAE, Flask served as a solid bridge between a Python backend with machine learning algorithms and an HTML/JS frontend. Domain.com allowed us to get a cool domain name for our site, and GCP allowed us to connect many small features quickly as well as host our database.
## Challenges we ran into
We ran into several challenges.
Right off the bat we were incredibly productive, and got a snappy UI up and running immediately through the accessible Google Maps API. We were off to an incredible start, but soon realized that the only effective way to best account for safety while maintaining maximum efficiency in travel time would be by highlighting clusters to steer waypoints away from. We realized that the Google Maps API would not be ideal for the ML in the back-end, simply because our avoidance algorithm did not work well with how the API is set up. We then decided on the HERE Maps API because of its unique ability to avoid areas in the algorithm. Once the front end for HERE Maps was developed, we soon attempted to deploy to Flask, only to find that JQuery somehow hindered our ability to view the physical map on our website. After hours of working through App Engine and Flask, we found a third map API/JS library called Leaflet that had much of the visual features we wanted. We ended up combining the best components of all three APIs to develop Bikeable over the past two days.
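The hotspot idea can be illustrated with a density-based clustering sketch in Python using Scikit-learn, one of the tools we picked up; the `eps` and `min_samples` values below are illustrative tuning knobs, not the values we shipped:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def hotspot_centers(accident_coords):
    """accident_coords: Nx2 numpy array of (lat, lon) points from police
    reports. Returns the centroid of each dense accident cluster, which
    the router then steers waypoints away from."""
    labels = DBSCAN(eps=0.002, min_samples=5).fit_predict(accident_coords)
    centers = []
    for k in set(labels) - {-1}:           # label -1 marks noise points
        centers.append(accident_coords[labels == k].mean(axis=0))
    return np.array(centers)
```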
The second large challenge we ran into was the never-ending stream of Cross-Origin Resource Sharing (CORS) errors. In the final stretch of the hackathon we were getting ready to link our front and back end with JSON files, but we kept getting blocked by CORS errors. After several hours of troubleshooting we realized our mistake of crossing between localhost and the public domain, and of repeatedly deploying to test rather than running locally through Flask.
## Accomplishments that we're proud of
We are incredibly proud of two things in particular.
Primarily, all of us worked on technologies and languages we had never touched before. This was an insanely productive hackathon, in that we honestly got to experience things that we never would have the confidence to even consider if we were not in such an environment. We're proud that we all stepped out of our comfort zone and developed something worthy of a pin on github.
We also were pretty impressed with what we were able to accomplish in the 36 hours. We set up multiple front ends, developed a full ML model complete with incredible data visualizations, and hosted on multiple different services. We also did not all know each other and the team chemistry that we had off the bat was astounding given that fact!
## What we learned
We learned BigQuery, NumPy, Scikit-learn, Google App Engine, Firebase, and Flask.
## What's next for Bikeable
Stay tuned! Or invest in us that works too :)
**Features that are to be implemented shortly and fairly easily given the current framework:**
* User reported incidents - like Waze for safe biking!
* Bike parking recommendations based on theft reports
* Large altitude increase avoidance to balance comfort with safety and efficiency.
|
## 🌱 Inspiration
Bicycle and e-bike sales have quickly outpaced EV sales in recent years. However, bike infrastructure lags far behind that for motor vehicles. This has led to a high number of accidents among cyclists, who must often share the road with cars. In fact, over 130,000 cyclists are injured in road accidents every year in the US.
If we want to build better cities, we need to make our infrastructure available for everyone—bikes are a great way to incentivize lawmakers and legislators to pursue densification projects & more economically viable alternatives to typical urban sprawl.
## ⚒ What it does
Tandem is a hardware extension for bicycles that aims to improve safety for cyclists in cities. We use a rear-mounted camera to track vehicles and alert riders of potentially unsafe situations via the handlebar-mounted touchscreen display. These include when vehicles are approaching quickly from behind, or if a vehicle is in the cyclist's blind spot.
Further, cyclists are able to report accidents along their route which are then tracked on a publicly-accessible map. This valuable location data can be used by cities to prioritize areas for better cycling and pedestrian infrastructure.
## 📸 How we built it
The camera is a Luxonis OAK-D with an on-chip computer vision processor and two spatial cameras for depth perception. We used a modified open-source model from OpenVINO to recognize vehicles. The camera and model results are sent to the Raspberry Pi, which uses Flask and on-device machine learning to stream it to our web frontend built in Next.js hosted on Vercel.
We developed weighted algorithms based on the size and locations of the bounding boxes to offer blind spot warnings and score the overall safety of the cyclist's current environment. To communicate these warnings to cyclists, we use Flask to send real-time data over an open WebSocket connection to our dashboard. Finally, the accident location database is stored on CockroachDB and is manipulated by an Express server running on Heroku.
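The weighted bounding-box heuristics can be sketched as below; the area and position thresholds are illustrative of the kind of values we tuned by trial and error, not the exact ones:

```python
def assess(detections, frame_w, frame_h):
    """detections: list of (x, y, w, h) boxes for vehicles behind the bike.
    Returns blind-spot warnings plus an overall risk score in [0, 1]."""
    warnings, risk = [], 0.0
    for (x, y, w, h) in detections:
        area = (w * h) / (frame_w * frame_h)  # bigger box => closer vehicle
        cx = (x + w / 2) / frame_w            # horizontal position, 0..1
        risk += area
        # A large box near the frame's edge suggests a vehicle pulling
        # alongside the cyclist, i.e. into the blind spot.
        if area > 0.15 and (cx < 0.25 or cx > 0.75):
            warnings.append("blind spot")
    return warnings, min(risk, 1.0)
```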
## ☕️ Challenges we ran into
One of the greatest and most persistent challenges of the build was the slow processing speeds of the Raspberry Pi 3 board. This was exacerbated by our power source, which was a portable power bank that output slightly under 5V power. Unfortunately, because of this, the Pi behaved inconsistently and lagged for long periods of time while booting, loading the camera, and activating the browser (up to 3 minutes).
At one point, we had also strapped a portable speaker on the bike to enable sound cues based on hazards in the cyclist’s environment. However, we found that the speaker would play sounds repeatedly and would become more of a nuisance than a boon to cyclists. We ultimately ended up scrapping it.
The algorithms we used to determine blind spot presence and risk scores based on bounding boxes were also a challenge to fine tune. In the beginning, there would often be many false positives where a blind spot warning was issued but the detected vehicle was still far behind the cyclist. Or, the algorithm would report an unsafe cycling environment while there were very few vehicles on the road. By trial and error, we managed to find threshold values for these algorithms that give reasonably consistent results.
## Accomplishments that we're proud of
We are very proud of our efficiency and cohesiveness as a team. Each member was responsible for a unique aspect of the build, ranging from hardware integration to CV processing to geolocation, and executed well. We were able to achieve 90%+ success rates of car detection, and decentralized our location & incident reporting facilities using CockroachDB—and even managed to get a few friends in Toronto to try it out using similar hardware sets!
Although many of the technologies we used were unfamiliar, we still managed to complete the project in under 36 hours, complete with CV, location tracking/services, NLP letter generation, classification, post-incident reporting, and dozens of other features.
We also think the bike looks pretty cool.
## What we learned
This was the first hardware hack for every member of our team. From connecting wires without interfering with the bicycle spokes to janky hot glued components, it is a completely different experience from writing software.
We also learned a variety of new Web APIs, including geolocation and WebSockets. Surprisingly, performance was also now much more important, with the limited compute resources of the Pi. Thus, we could not afford to have large bundle sizes or unoptimized code.
## What's next for Tandem
The OAK-D camera also provides two cameras for depth perception and spatial analysis, which could be used to give distances to detected vehicles. We would like to incorporate this data into our safety scoring algorithms to improve accuracy.
Moreover, we plan to expand the accident response map to build safer, more sustainable cities.
|
## Inspiration
With the newfound prevalence of electric vehicles and the urgency of the earth's climate state, it has become more important to increase the accessibility and ease of adoption of environmentally friendly alternatives.
## What it does
This web application finds the user's current location and pinpoints it on an embedded Google Map. Then, using the user's location, the three closest electric vehicle charging locations are displayed.
## How we built it
Our front-end was created with HTML/CSS and JavaScript. This was connected to a Python Flask backend. We also integrated the HTML Geolocation API, the NREL (National Renewable Energy Laboratory) developer network API, and the Google Maps Geocoding API.
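Once the NREL API returns nearby stations, picking the three closest reduces to sorting by great-circle distance; a self-contained sketch (field names assumed):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def three_closest(user, stations):
    """stations: list of {"name": ..., "lat": ..., "lon": ...} records."""
    return sorted(
        stations,
        key=lambda s: haversine_km(user["lat"], user["lon"], s["lat"], s["lon"]),
    )[:3]
```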
## Challenges we ran into
Initially, we encountered the issue of being able to dynamically change the marker on the embedded Google Maps iFrame - we ran into a CORS error when trying to reload the map. We realized that we had to use .replaceChild instead. Next, we had some challenges with CSS styling, specifically using flexbox and being able to actualize the ideas and formatting that we had. Next, we had some limitations with the APIs that were accessible to us - hence, the data we could use in our calculations and product offerings were limited.
## Accomplishments that we're proud of
We're very proud of how we constantly adapted despite the challenges we encountered and the limitations in store for us. We initially intended on developing an application to find the nearest gas station and the prices of the fuel. However, the price information was unavailable on any API due to the constantly changing nature, and we had further API limitations with non-electric vehicle charging stations. So, we adapted and took this as an opportunity for innovation - we discovered a more valuable problem space. We're also proud of fully completing our project and accomplishing what we set off to complete.
## What we learned
We gained exposure and knowledge in the Python Flask web framework, creating a full-stack application, effectively styling using CSS, collaborating and building strong work relationships with new teammates, HTTP requests, and integrating APIs.
## What's next for chargedUp
* [ ] Plotting the three closest electric vehicle charging locations on the embedded Google Map, allowing users to get directions straight to the location
* [ ] Including the prices of charging, and other further information in the three boxes
* [ ] Including more than three locations on the list, and allowing for sorting based on the user's preferred information
* [ ] Further accessibility considerations in terms of UI/UX design, including tooltips, instructions, and other improvements
* [ ] Expanding to offer the same service for gas stations, or other vehicle-related services
|
partial
|
## Inspiration
With many Canadians facing mental health issues, we wanted to create a daily journaling app, which uses machine learning to recommend resources for mental well being.
## What it does
After a user records a 10 second video talking about their day, Menda uses emotion detection and facial recognition to recommend resources. Users can see a daily log of their well being, in which Menda curates personalized suggestions over time.
## How we built it
We used HTML/CSS for the frontend, Firebase and Flask for the backend, and OpenCV/NLTK for machine learning.
## Challenges we ran into
The biggest challenges we faced were connecting the ML models to Flask and building the entire application around Flask.
## Accomplishments that we're proud of
We are proud of our emotion detection function, our sentiment analysis of speech-to-text, as well as our minimalistic UX design.
## What we learned
We all took advantage of opportunities to improve our technical skills. Although we've all participated in hackathons before, we each still picked up new skills. For example, everyone working on the backend had the opportunity to experiment with Flask and Firebase, while those on the frontend were able to enhance their HTML/CSS/JS skills.
## What's next for Menda
First, we want to create a community page, where others can share resources and discuss among peers. We also want to consider what happens as we increase the number of users since we store videos within our database. So, in order to scale, we would have to upgrade our Firebase database to be able to store more data. Lastly, we want to look for partnerships with related mental health organizations as well as apps that we utilize in Menda such as Spotify and Headspace!
|
## Github Repository
<https://github.com/deltahacksiii/deltahacksiii>
## Inspiration
Loaning money can be difficult, especially when interest rates are so high and many loan sharks seem to have ulterior motives when you can't find other means. Some groups face increased difficulty due to their situation; they may be immigrants with a language barrier, refugees without a credit history, or people with a lower income striving for an education. What if there were an app that provided a trustworthy platform for more open-minded lenders and put the focus back on benefiting borrowers as much as possible?
## What it does
Lendr is a reverse-auction loan community where borrowers can post an amount of money they need to borrow, and lenders bid on lower and lower interest rates; the lowest rate takes the deal. Lenders are matched to borrowers in a Tinder-style queue. A lender can swipe right on a loan and bid a lower interest rate if they find the borrower's profile promising and trustworthy, or swipe left if they are not interested. This way, every borrower gets an ideal match with minimal interest charged and a personal connection to a lender. The process just becomes a lot more fun and welcoming.
How can the world of finance benefit from this idea? These loans and money transfers can take place on the platforms of financial institutions and can help future customers build up a sense of responsibility. Banks can take action on the bidding too and earn some extra income. Keeping track of all actions occurring in the community can provide some interesting insights and analytics about the industry and the current state of the economy.
## How we built it
Lendr is a web application built with the Node.js framework, the Express framework, and MySQL. We made use of the Cloud9 IDE for quick setup and collaboration. We also have a fancy landing page made with Wix.
## Challenges we ran into
Sending information from the frontend to our own backend proved to be harder than we expected. We had to look into some hacks/workarounds and ended up settling on an invisible-form method. We were all new to Node.js, so getting started was also a challenge.
## Accomplishments that we're proud of
A working product! We are excited to see how people will use and interact with our project.
## What we'd do differently...
Node.js was rewarding to learn, but we would have worked on a mobile application if we had more time for the setup and learning curve. A full software stack such as MEAN would have made it easier to set up the database and build nicer looking views. We'd also like to reorganize and separate our code and implement a real sign-up/login process rather than having everything wide open.
## What's next for Lendr
A mobile version for users on the go!
Payments done completely through the app/website and/or partnerships with banks!
Integration of algorithmic, real-time bidding!
A media centre with testimonials, articles, and follow-ups from users!
Machine learning to prioritize the loan match queue!
|
## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app analyses your speech, hand gestures, and facial expressions and gives you both real-time feedback and a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend and used OpenCV, TensorFlow, and the Google Cloud Speech-to-Text API to perform all of the background analyses. On the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
## Challenges we ran into
We had some difficulties on the backend integrating video and voice together using multithreading. We also ran into issues populating our dashboard with real-time data so the results displayed correctly.
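Conceptually, the fix was to run the two pipelines on separate threads and join them before merging results; a stripped-down sketch with placeholder analysis functions standing in for the real OpenCV/TensorFlow and Speech-to-Text code:

```python
import threading

results = {}

def analyze_video():
    results["video"] = "gesture/expression scores"  # placeholder analysis

def analyze_audio():
    results["audio"] = "transcript + speaking pace"  # placeholder analysis

threads = [threading.Thread(target=analyze_video),
           threading.Thread(target=analyze_audio)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # both analyses finish before the results are merged
print(results)
```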
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.

Tracks movement of hands and face to provide real-time analysis on expressions and body-language.

|
partial
|