## Inspiration
A deep and unreasonable love of xylophones
## What it does
An air xylophone right in your browser!
Play such classic songs as Twinkle Twinkle Little Star, Baa Baa Rainbow Sheep, and the Alphabet Song, or come up with the next club banger in free play.
We also added an air guitar mode where you can play any classic four-chord song, such as Wonderwall.
## How we built it
We built a static website using React that utilises PoseNet from TensorFlow.js to track the user's hand positions and translate them into specific xylophone keys.
We then extended this by creating Xylophone Hero, a fun game that lets you play your favourite tunes without requiring any physical instruments.
## Challenges we ran into
Fine-tuning the machine learning model to provide a good balance of speed and accuracy.
## Accomplishments that we're proud of
I can get 100% on Never Gonna Give You Up on XylophoneHero (I've practised since the video)
## What we learned
We learnt about fine-tuning neural nets to achieve maximum performance for real-time rendering in the browser.
## What's next for XylophoneHero
We would like to:
* Add further instruments including a ~~guitar~~ and drum set in both freeplay and hero modes
* Allow for dynamic tuning of Posenet based on individual hardware configurations
* Add new and exciting songs to XylophoneHero
* Add a multiplayer jam mode
|
## Inspiration
Have you ever tried learning the piano but never had the time to do so? Do you want to play your favorite songs right away without any prior experience? This is the app for you. We wanted something like this for our own personal use. It helps immensely in building the muscle memory to play the piano for any song you can find directly on YouTube.
## What it does
Use AR to project your favorite digitally-recorded piano video song from Youtube over your piano to learn by following along.
## Category
Best Practicable/Scalable, Music Tech
## How we built it
Using Unity, C# and the EasyAR SDK for Augmented Reality
## Challenges we ran into
* Some YouTube URLs had cipher signatures and could not be loaded asynchronously directly
* The app crashed just before submission; we fixed it barely in time
## Accomplishments that we're proud of
We built a fully functioning and user-friendly application that accurately maps the AR video onto your piano, with precise calibration. It turned out to be a lot better than we expected. We can use it for our own personal use to learn and master the piano.
## What we learned
* Creating an AR app from scratch in Unity with no prior experience
* Handling asynchronous loading of YouTube videos and using YoutubeExplode
* Different material properties, video cipher signatures and various new components!
* Developing a touch gesture based control system for calibration
## What's next for PianoTunesAR
* Tunes List to save the songs you have played before by name into a playlist so that you do not need to copy URLs every time
* Machine learning based AR projection onto piano with automatic calibration and marker-less spatial/world mapping
* All supported YouTube URLs, as well as an upload-your-own-video feature
* AR Cam tutorial UI and icons/UI improvements
* iOS version
# APK Releases
<https://github.com/hamzafarooq009/PianoTunesAR-Wallifornia-Hackathon/releases/tag/APK>
|
# About the Project
Our project focuses on creating a decentralized federated learning system for hospitals, ensuring privacy and data confidentiality while enabling collaborative AI training across different institutions. The system allows hospitals to train AI models on their own medical data (like chest X-rays), and then securely share model parameters with a global server that combines and averages these models without ever accessing the sensitive data itself.
What inspired us to work on this project is the growing need for privacy-preserving machine learning, especially in the healthcare industry. With increasing concerns about data security and strict regulations like HIPAA, it’s critical to find ways to leverage data without violating confidentiality. Federated learning, combined with blockchain and cryptography techniques like zero-knowledge proofs (ZKPs) and CKKS encryption, offers a solution to these challenges.
Through this project, we learned how decentralized systems can foster innovation and collaboration without compromising data security. We integrated technologies such as Ethereum-based smart contracts and privacy-preserving cryptography to ensure model integrity and correctness, even without accessing the data itself.
# How We Built the Project
Data Privacy and Model Training: We designed two isolated hospital environments where each hospital trains a CNN model on its dataset.
Federated Server and Blockchain: Each hospital sends its model’s weights and training metadata (loss function, optimizer, model architecture) to a global server, which uses CKKS encryption to perform privacy-preserving weight averaging.
Zero Knowledge Proofs: To verify the training integrity without revealing sensitive data, ZKPs are employed, allowing the server to confirm the validity of training without accessing the actual data.
Smart Contracts for Verification: The verification process is handled using Ethereum-based smart contracts written in Solidity and built over Rust-based Circom circuits, which ensure the authenticity of the training logs.
Frontend for Transparency: The frontend displays two key components: (1) Information about the global model, including its type, optimizer, and weights, and (2) Decentralized training logs, showing client IDs and training information.
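As a concrete illustration of the encrypted weight-averaging step described above, here is a minimal sketch assuming the TenSEAL CKKS library and illustrative encryption parameters; it is not our exact implementation.

```python
# Illustrative sketch only: encrypted FedAvg over flattened model weights,
# assuming the TenSEAL CKKS library. Parameters are placeholders.
import numpy as np
import tenseal as ts

# CKKS context shared between the hospitals and the aggregation server
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

def encrypt_weights(weights: np.ndarray):
    """Each hospital encrypts its flattened weight vector before upload."""
    return ts.ckks_vector(context, weights.tolist())

def aggregate(encrypted_updates):
    """Server averages ciphertexts without ever seeing plaintext weights."""
    total = encrypted_updates[0]
    for update in encrypted_updates[1:]:
        total = total + update                       # homomorphic addition
    return total * (1.0 / len(encrypted_updates))    # scalar multiplication

# Example: two hospitals, a tiny 4-parameter "model"
h1 = encrypt_weights(np.array([0.1, 0.2, 0.3, 0.4]))
h2 = encrypt_weights(np.array([0.3, 0.2, 0.1, 0.0]))
avg = aggregate([h1, h2])
print(avg.decrypt())  # ~[0.2, 0.2, 0.2, 0.2]; decryption needs the secret key
```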
# Challenges Faced
One major challenge was implementing privacy-preserving technologies like CKKS encryption and ZKPs into the federated learning workflow while maintaining the efficiency of the system. Another was designing the smart contract to handle training log verification and ensuring secure communication between hospitals and the global server. We also had to overcome resource constraints, particularly in handling large model weights in a decentralized and encrypted manner.
|
winning
|
## Inspiration
* Inspired by Tints clothing brand ([www.tintsstreetwear.com](http://www.tintsstreetwear.com)), we saw how personalized video thank-yous and abandoned cart follow-ups significantly boosted conversion rates and lowered CAC.
* Building relationships before sales proved effective but was extremely time-consuming, requiring manual efforts like social media interactions, content reposting, and connecting over shared interests.
* We are on a mission to assist millions of small business owners in accelerating the growth of their companies through AI-powered technology that decreases cost and saves time.
## What it does
* Gesture AI introduces an AI-powered video personalization tool that closes the human-element gap in AI sales tools, improving business follow-ups and nurturing relationships through video automation, facial cloning, and synthetic media integration.
## How we built it
* By harnessing advanced AI technologies, we developed a system that transforms text into speech for video automation and uses lip-sync technology for realistic personalization. From uploading a contact list to deploying personalized videos through a batch API on GCP, our process uses OpenAI for script customization, ElevenLabs for audio generation, and a fine-tuned model for video synthesis, all invoked from the front-end.
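A minimal sketch of the script-then-audio steps, assuming OpenAI's Python SDK and ElevenLabs' public text-to-speech endpoint; model names, the voice ID, and endpoint details are placeholders rather than our production batch pipeline on GCP.

```python
# Illustrative sketch only: personalised script via OpenAI, then audio via the
# ElevenLabs text-to-speech REST API. Values marked as placeholders are assumptions.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def personalised_script(customer_name: str, product: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Write a short, warm thank-you video script for "
                       f"{customer_name}, who just bought {product}.",
        }],
    )
    return resp.choices[0].message.content

def synthesise_audio(script: str, voice_id: str, api_key: str) -> bytes:
    # Endpoint shape assumed from ElevenLabs' public TTS API
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    r = requests.post(url, headers={"xi-api-key": api_key}, json={"text": script})
    r.raise_for_status()
    return r.content  # audio bytes, later fed to the lip-sync video model

audio = synthesise_audio(personalised_script("Alex", "the Tints hoodie"),
                         voice_id="VOICE_ID", api_key="ELEVENLABS_KEY")
open("thanks_alex.mp3", "wb").write(audio)
```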
## Challenges we ran into
* We faced challenges in creating ultra-realistic personalized videos and ensuring the seamless integration of synthetic media for genuine interactions.
## Accomplishments that we're proud of
* We are proud to have expanded our AI tool's capabilities across languages, enabling brands to personalize ads for customers in different languages, such as Spanish.
## What we learned
* We learned about the crucial importance of personalization in the modern digital landscape and AI's potential to revolutionize traditional marketing strategies, fostering authentic customer connections.
## What's next for Gesture AI
* We plan to broaden our market reach in B2B e-commerce and further optimize our platform to make it realistic. Additionally, we plan on expanding our influence beyond the e-commerce space into areas such as advocacy where personalized outreach can create societal change.
|
## What is Heather AI?
Check out our [demo](https://www.loom.com/share/1e7a9a11d2d24a17ae562e51259293ae?sid=a340a830-9e85-4fb0-9cd8-66de2a12204b)!
It is hard to keep a consistent schedule, and it is very easy to lose track of things in your daily life. To provide some stability, Heather AI acts as your secretary, aiming to maximize your productivity and help you in your day-to-day life. You can chat with Heather to ask questions, but Heather's main purpose is to help you schedule your day. By talking to Heather, you can add and remove events seamlessly on your Google Calendar, using a customized heuristic that takes practicality and development time into consideration. With this, we provide users with optimized scheduling: users can provide minimal information and still generate helpful schedules. For example, all a user has to say is “I want to go to the gym 3 times this week” and Heather will automatically block out three times on your calendar to exercise, scheduling around other events. Access it here: [Heather AI](https://www.heatherai.in/)
## Accomplishments that we're proud of
We are very proud of our robust scheduling algorithm and of the voice-to-text conversion. Delving into the technical implementation of the scheduling algorithm, we wanted to schedule events based on the timeframe in which they could be completed, given that the time needed to complete them wasn't equal to that entire interval (i.e. we may have a 3-hour window for a 1-hour task). This is known as the interval scheduling problem with release times and deadlines, a famous NP-hard problem proven to have no constant approximation. However, we applied a combination of linear programming and the earliest-deadline-first heuristic to schedule events quickly in the vast majority of cases and defer to user input for any significant changes.
We also developed a robust way of taking the voice of a user, identifying events, and splitting them up into a machine-readable format for our scheduling algorithm. With this we can identify, modify, and delete events all seamlessly on the user's Google calendar, reducing the barrier of entry to the generation of a nice schedule.
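As a rough illustration of the earliest-deadline-first idea, here is a simplified, hour-granularity sketch; it omits the linear-programming fallback and is not our production algorithm.

```python
# Illustrative sketch only: a greedy earliest-deadline-first pass that places
# each task in the first free slot inside its [release, deadline] window.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    duration: int   # hours needed
    release: int    # earliest start hour
    deadline: int   # must finish by this hour

def edf_schedule(tasks, busy):
    """busy: list of (start, end) hours already blocked on the calendar."""
    schedule, occupied = [], list(busy)
    for t in sorted(tasks, key=lambda task: task.deadline):  # earliest deadline first
        start = t.release
        while start + t.duration <= t.deadline:
            slot = (start, start + t.duration)
            if all(slot[1] <= b[0] or slot[0] >= b[1] for b in occupied):
                schedule.append((t.name, slot))
                occupied.append(slot)
                break
            start += 1
        else:
            schedule.append((t.name, None))  # couldn't fit; defer to the user
    return schedule

# Example: "gym 3 times this week" collapses to three 1-hour tasks around fixed events.
existing = [(9, 12), (14, 16)]                      # meetings already on the calendar
gym = [Task(f"Gym #{i + 1}", 1, 8, 20) for i in range(3)]
print(edf_schedule(gym, existing))
```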
## What's next for Heather AI
We just launched and we already have a growing waitlist (>100 people)! We plan to grow the functionality of our assistant and add integrations and monetization.
|
## What Inspired us
Due to COVID, many students like us have become accustomed to working on their schoolwork, projects, and even hackathons remotely. This led students to use the online resources at their disposal to ease their workload at home. One of the most-used tools is Ctrl+F, which enables the user to quickly locate any text within a document. We realised that no comparably accurate method exists for images, and this led to the birth of our project for this hackathon, titled “PictoDocReader”.
## What you learned
We learned how to implement Dash in order to create a seamless user interface for Python. We also learnt several 2D and 3D pattern-matching algorithms, such as Knuth-Morris-Pratt, Baker-Bird, Rabin-Karp, and Aho-Corasick; however, we only implemented the ones that led to the fastest and most accurate execution of the code.
Furthermore, we learnt how to convert PDFs to images (.png). This led to us learning about the colour profiles of images and how to manipulate the RGB values of any image using the numpy library along with matplotlib. We also learnt how to implement Threading in Python in order to run tasks simultaneously. We also learnt how to use Google Cloud services in order to use Google Cloud Storage to enable users to store their images and documents on the cloud.
## How you built your project
The only dependencies we required to create the project were PIL, matplotlib, numpy, dash and Google Cloud.
**PIL** - Used for converting a PDF file to a list of .png files and manipulating the colour profiles of an image.
**matplotlib** - To plot and convert an image to its corresponding matrix of RGB values.
**numpy** - Used for data manipulation on RGB matrices.
**dash** - Used to create an easy to use and seamless user-interface
**Google Cloud** - Used to enable users to store their images and documents on the cloud.
All the algorithms and techniques to parse and validate pixels were programmed by the team members, allowing us to cover any scenario thanks to complete independence from external libraries.
## Challenges we faced
The first challenge we faced was the inconsistency between the RGB matrices of different documents. While some matrices contained RGB values, others were of the form RGBA, which led to inconsistent results when we were traversing the matrices. We solved the problem by slicing with numpy to drop the extra channel and make every matrix uniform in shape.
Another challenge was researching the best time complexities for 2D and 3D pattern-matching algorithms. Most algorithms are designed for square patterns and square documents, whereas we were working with images and documents of any size. Thus, we had to experiment with and alter the algorithms to ensure they worked best for our application.
When we worked with large PDF files, the program tried to locate the image in each page one by one, so we needed to shorten the time for PDFs to be fully scanned to make sure our application performs its tasks in a viable time period. Hence, we introduced threading into the project to reduce the scanning time for large PDF files, as each page could be scanned concurrently. We have since realised that threading is not ideal, as the degree of parallelism depends greatly on the number of CPU cores of the user's system; in an ideal world we would implement true parallel processing instead of threading.
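A small sketch of the two fixes described above (the helper names are hypothetical): dropping the alpha channel with numpy slicing, and scanning pages concurrently with threads.

```python
# Illustrative sketch only: normalise page matrices to plain RGB and scan pages
# concurrently with threads. The pattern-matching step itself is omitted.
import threading
import numpy as np
from PIL import Image

def to_rgb_matrix(png_path: str) -> np.ndarray:
    """Load a page image and drop the alpha channel if present."""
    arr = np.asarray(Image.open(png_path).convert("RGBA"))
    return arr[:, :, :3]            # RGBA -> RGB via numpy slicing

def scan_page(page_path, pattern, results, index):
    page = to_rgb_matrix(page_path)
    # placeholder check standing in for the 2D pattern-matching routine
    results[index] = pattern.shape[0] <= page.shape[0]

def scan_document(page_paths, pattern):
    results = [None] * len(page_paths)
    threads = [threading.Thread(target=scan_page, args=(p, pattern, results, i))
               for i, p in enumerate(page_paths)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                    # every page is scanned concurrently
    return results
```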
|
losing
|
# Resupplie
(Pronounced 'reh-supp-lee') this is your number one cuisine companion!
Resuppie recommends meal recipies inspired by the contents of your fridge & pantry.
Resuppie prepares for future meals & shopping lists based on your tastes & preferences.
|
## Inspiration
2020 had us indoors more than we'd like to admit and we turned to YouTube cooking videos for solace. From Adam Ragusea to Binging with Babish, these personalities inspired some of us to start learning to cook. The problem with following along with these videos is that you have to keep pausing the video while you cook. Or even worse, you have to watch the entire video and write down the steps if they're not provided in the video description. We wanted an easier way to summarize cooking videos into clear steps.
## What it does
Get In My Belly summarizes YouTube cooking videos into text recipes. You simply give it a YouTube link and the web app generates a list of ingredients and a series of steps for making the dish (with pictures), just like a recipe in a cookbook. No more wondering where they made the lamb sauce. :eyes:
## How we built it
We used React for front-end and Flask for back-end. We used [Youtube-Transcript-API](https://pypi.org/project/youtube-transcript-api/) to convert Youtube videos to transcripts. The transcripts are filtered and parsed into the resulting recipe using Python with the help of the [Natural Language Toolkit](https://www.nltk.org/) library and various text-based, cooking-related datasets that we made by scraping our favourite cooking videos. Further data cleaning and processing was done to ensure the output included quantities and measurements alongside the ingredients. Finally, [OpenCV](https://opencv.org/) was used to extract screenshots based on time-stamps.
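A minimal sketch of the transcript and screenshot steps, assuming the youtube-transcript-api package's `get_transcript` helper; the video ID and file paths are placeholders, and the NLTK parsing is omitted.

```python
# Illustrative sketch only: fetch a cooking video's transcript and grab a frame
# for a given step.
import cv2
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "VIDEO_ID"
transcript = YouTubeTranscriptApi.get_transcript(video_id)
# Each entry looks like {"text": "...", "start": 12.3, "duration": 4.5}
steps = [(seg["text"], seg["start"]) for seg in transcript]

def frame_at(video_path: str, seconds: float, out_path: str) -> None:
    """Save the frame at a transcript timestamp as the step's picture."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, seconds * 1000)
    ok, frame = cap.read()
    if ok:
        cv2.imwrite(out_path, frame)
    cap.release()

text, start = steps[0]
frame_at("downloaded_video.mp4", start, "step_01.png")
```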
## Challenges we ran into
Determining the intent of a sentence is pretty difficult, especially when someone like Binging with Babish says things that range from very simple (`add one cup of water`) to semantically-complex (`to our sauce we add flour; throw in two heaping tablespoons, then add salt to taste`). We converted each line of the transcription into a Trie structure to separate out the ingredients, cooking verbs, and measurements.
## Accomplishments that we're proud of
We really like the simplicity of our web app and how clean it looks. We wanted users to be able to use our system without any instruction and we're proud of achieving this.
## What we learned
This was the first hackathon for two of our three members. We had to quickly learn how to budget our time since it's a 24-hour event. Perhaps most importantly, we gained experience in deciding when a feature was too ambitious to achieve within time constraints. For other members, it was their first exposure to web-dev and learning about Flask and React was mind boggling.
## What's next for Get In My Belly
Future changes to GIMB include a more robust parsing system and refactoring the UI to make it cleaner. We would also like to support other languages and integrate the project with other APIs to get more information about what you're cooking.
|
## Inspiration
Cooking up something delicious has never been easier! Introducing our AI-powered recipe recommendation engine that suggests mouth-watering meals based on the ingredients you have on hand. Say goodbye to the hassle of deciding what to make for dinner or wasting unused ingredients. Simply input your ingredients, and let our system do the rest. Impress your family and friends with your newfound culinary skills and never run out of meal ideas again!
## How we built it
We used Edamam's API, a public API for foodies, to fetch different recipes, their ingredients, and basic nutritional facts. We incorporated it into our backend to generate the right recipes based on the user's choice of ingredients. We also constructed an interactive and user-friendly interface using HTML, CSS, and JavaScript. Finally, we designed a logo for the application.
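A minimal sketch of the recipe lookup, assuming Edamam's Recipe Search v2 endpoint with placeholder credentials; the field names reflect our reading of the API and may differ.

```python
# Illustrative sketch only: querying the Edamam Recipe Search API for recipes
# that use the user's ingredients.
import requests

APP_ID, APP_KEY = "YOUR_APP_ID", "YOUR_APP_KEY"

def find_recipes(ingredients):
    resp = requests.get(
        "https://api.edamam.com/api/recipes/v2",
        params={"type": "public", "q": ",".join(ingredients),
                "app_id": APP_ID, "app_key": APP_KEY},
    )
    resp.raise_for_status()
    return [hit["recipe"] for hit in resp.json().get("hits", [])]

for recipe in find_recipes(["tomato", "basil", "mozzarella"])[:3]:
    print(recipe["label"], "-", round(recipe["calories"]), "kcal")
```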
## Challenges we ran into
Given our rudimentary skills in the stack technologies involved (backend/frontend) and the tight timeframe, we managed to tackle several technical difficulties, such as receiving multiple inputs from the user and fetching the appropriate recipes.
## Accomplishments that we're proud of
We were able to incorporate what we needed in terms of functionality and aesthetics into the web app.
## What's next for Recipizer
Season the web app with better frontend design with more advanced frameworks or tools.
|
partial
|
## Inspiration
One of the biggest socio-economic challenges that the world is encountering and least talked about is the aging population and how the divide between the older and younger generation could potentially disrupt societies (issues like Brexit, 'OK Boomer', increase in elderly crime rate in South Korea, etc.). Unlike in the past, with improved healthcare, the populations in most developed countries enjoy longer life expectancy with healthier lifestyles. As a consequence, the older population have a lot of value to offer to the younger generation. According to a survey conducted in Feb 2017 in the US, 77% of the adults are willing to engage with the younger generation in building meaningful intergenerational relationships.
## What it does
This is an app to connect youth and seniors to build meaningful intergenerational relationships to foster an engaged community. The users can express their preferences and match appropriately with their counterparts. They can use the in-house messaging service and plan meet-ups.
## How we built it
We built the app primarily with Python and Flask, using HTML and CSS for the pages. Azure was used to connect a database to the app to store the user information.
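A minimal sketch of the kind of Flask route we used, assuming pyodbc, a placeholder Azure SQL connection string, and a hypothetical `users` table.

```python
# Illustrative sketch only: a Flask route writing to an Azure SQL database via
# pyodbc. Server, database, and credential values are placeholders.
import pyodbc
from flask import Flask, request

app = Flask(__name__)

CONN_STR = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:yes-app.database.windows.net,1433;"
    "Database=yes_users;Uid=admin_user;Pwd=PASSWORD;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

@app.route("/register", methods=["POST"])
def register():
    data = request.get_json()
    with pyodbc.connect(CONN_STR) as conn:   # commits on successful exit
        conn.execute(
            "INSERT INTO users (name, age_group, interests) VALUES (?, ?, ?)",
            data["name"], data["age_group"], data["interests"],
        )
    return {"status": "ok"}, 201
```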
## Challenges we ran into
* Learning how to use Azure for a SQL database and web app deployment
* Connecting a database to the app
* Troubleshooting and debugging issues related to the ODBC driver
* Incorporating HTML elements with minimal/no background
* Utilising Flask for a simulated user-login journey
* Segregating pages depending on verified authentication
## Accomplishments that we're proud of
We are proud of taking this from ideation to fruition in the past 20 hours, building a basic product that serves a purpose society could benefit from. We learnt a lot not just from this idea, but also from the countless other ideas generated during the six-hour brainstorming session our team endured.
## What's next for YES (Youth Engaging Seniors)
* Improve the app UI, which was primarily limited by our familiarity with HTML and Flask. For a better visualisation of our app, please refer to the App Mock-up in the provided Figma sketch.
* The next step would be to incorporate an incentive based mentorship program which could be sponsored by governments to provide subsidized travel, project grants, tuition fee credits, etc.
* Allow a beta deployment and get some feedback from users.
* Pitch this idea to investors to scale it up.
|
## Inspiration
We live in an era where consumer behavior has an outsized impact on global sustainability. 60% of global greenhouse gas emissions are linked to consumer activities, including producing, consuming, and disposing of goods.
However, despite growing awareness, many consumers struggle to find reliable sustainability information at the point of purchase. They don’t know if the product they’re buying is sustainable or not.
That’s why we came up with EcoLens. We believe that if consumers had better access to sustainability information at the moment of decision-making, they would be more empowered to make eco-conscious choices.
## What it does
EcoLens is a browser extension designed to provide sustainability information directly to shoppers as they browse e-commerce platforms like Amazon. When they go to a product page, a notification will pop up asking them if they want to learn more about the product’s sustainability performance.
Clicking ‘learn more’ will then display the following information:
* Sustainability Ratings: Instantly see a product's sustainability score based on materials, environmental impact, and the company's ESG (Environmental, Social, and Governance) ratings.
* Product Transparency: Learn about the ingredients or materials used in the product and whether they contribute to pollution, climate change, or waste.
* Sustainable Alternatives: If a product isn't sustainable, EcoLens will suggest eco-friendly alternatives with similar functionality.

Our extension also has a gamification feature where each user adopts an ‘Earth pet.’ Each purchase the user feeds it affects the pet's health differently, letting the extension track sustainable versus unsustainable purchases and, therefore, how much more eco-friendly a user's purchasing habits have become while using the extension.
## How we built it
EcoLens is a cutting-edge browser extension developed using a powerful combination of HTML, CSS, and JavaScript, which forms the backbone of our user interface. We employed advanced web scraping techniques to gather comprehensive product information from e-commerce platforms like Amazon, extracting critical data points such as product specifications, materials, and environmental impact assessments.
To analyze and categorize the sustainability of each product, we leveraged Google's Gemini as our AI-driven data processing engine. This engine evaluates products against a multifaceted sustainability framework, considering factors such as the materials used, lifecycle emissions, and company ESG (Environmental, Social, and Governance) ratings. The processed data is then dynamically rendered in the extension's user interface, ensuring that users receive real-time sustainability insights. We also took advantage of AWS to host a database, so users can log into their own accounts to view their sustainability score.
The extension features an interactive pop-up that prompts users with a notification upon landing on a product page. By clicking ‘Learn More,’ users access detailed sustainability ratings, product transparency information, and eco-friendly alternatives—all presented in a user-friendly format.
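A minimal sketch of the Gemini analysis step, assuming the google-generativeai Python SDK and an illustrative prompt; in practice the extension goes through its backend rather than calling Gemini directly.

```python
# Illustrative sketch only: sending scraped product details to Gemini for a
# sustainability assessment. Model name and prompt are assumptions.
import google.generativeai as genai

genai.configure(api_key="GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model

def sustainability_report(title: str, materials: str, description: str) -> str:
    prompt = (
        "Rate the sustainability of this product from 0-100 and explain why, "
        "considering materials, lifecycle emissions, and the company's ESG record.\n"
        f"Title: {title}\nMaterials: {materials}\nDescription: {description}"
    )
    return model.generate_content(prompt).text

print(sustainability_report(
    "Cotton T-Shirt", "100% organic cotton", "Basic crew-neck tee"))
```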
## Challenges we ran into
One major challenge was scraping the product site, getting a response from Gemini, and then displaying the result to the user in real time with minimal delay. When we first started, the whole process took about 5 seconds, which is enough time for the user to lose interest. Towards the end, however, we got the whole process down to less than a second by implementing techniques such as asynchronous data fetching to allow simultaneous operations, caching frequently accessed data to minimize redundant requests, and optimizing our scraping algorithms for faster extraction. We also ran into limitations on web scraping, especially for big companies such as Amazon. To overcome this, we had to develop robust scraping algorithms that could efficiently navigate these obstacles, including implementing headless browsers and delay mechanisms to mimic human behavior.
## Accomplishments that we're proud of
We are really proud of the fact that we were able to reduce that process time from over 5 seconds to under a second. This is very monumental since it not only enhances user satisfaction but also significantly boosts engagement with our browser extension. By achieving this level of efficiency, we can provide users with instantaneous sustainability insights at the critical moment of their purchasing decisions, thereby empowering them to make informed, eco-conscious choices without the frustration of delays. This optimization exemplifies our commitment to delivering a seamless and impactful user experience, which is crucial in driving positive change in consumer behavior towards sustainability.
## What we learned
One major thing we learned was how to make a video in Canva. None of us had experience with editing videos, especially in Canva, so that was a really fun experience. Also, the idea of implementing headless browsers and delay mechanisms to mimic human behavior was completely new and not something we had experience with.
## What's next for Ecolens
* Expanding to more platforms: EcoLens now works primarily on Amazon. However, we're looking to integrate with many other e-commerce platforms as well, such as eBay and Etsy.
* Mobile app: a mobile version of EcoLens for mobile shopping apps.
* Gamification: add badges, points, achievements, and more features.
* Education: offer a mode to help users learn more about sustainability while browsing.
|
## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like i've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know python), so watching the dev work and finding out what kind of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
|
losing
|
## Inspiration
Our inspiration came from the danger of skin-related diseases, along with the rising costs of medical care. DermaFix not only provides a free alternative for those who can't afford to visit a doctor due to financial issues, but also provides real-time diagnosis.
## What it does
Scans and analyzes the user's skin, determining if the user has any sort of skin disease. If anything is detected, possible remedies are provided, with a google map displaying nearby places to get treatment.
## How we built it
We learned to create a Flask application, using HTML, CSS, and JavaScript to develop the front end. We used TensorFlow to train an image-classification model that differentiates between clear skin and 20 skin diseases.
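A minimal sketch of the kind of transfer-learning classifier described above; the base model, image size, and directory layout are assumptions rather than our exact setup.

```python
# Illustrative sketch only: a 21-class image classifier (clear skin + 20
# conditions) built on a pretrained backbone.
import tensorflow as tf

NUM_CLASSES = 21
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep pretrained features, train only the head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # map [0,255] -> [-1,1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```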
## Challenges we ran into
Fine-tuning the image classifying model to be accurate at least 85% of the time.
## Accomplishments that we're proud of
Creating a model that is accurate 95% of the time.
## What we learned
HTML, CSS, Flask, TensorFlow
## What's next for Derma Fix
Using a larger dataset for a much more accurate diagnosis, along with more APIs to contact nearby doctors and automatically set appointments for those who need them.
|
## Inspiration
For years, the field of dermatology has been facing a myriad of challenges, some of which include the accuracy and efficiency of diagnosing skin conditions. Skin disease diagnoses were traditionally based on visual inspections, observations, and measurements. However, the accuracy of these methods will vary between healthcare professionals and dermatologists, depending on their experience and skills. As a result, skin conditions are often misdiagnosed, leading to severe consequences. With the advancement of Machine Learning in recent years, healthcare professionals and dermatologists can use this technology to gain faster and more precise results. Our team was inspired by the challenges that skin conditions pose, and by the immense potential that ML could bring to the field of dermatology.
## What Our Project Does
Our website is capable of classifying 5 different types of skin conditions (blackhead, cyst, whitehead, papule, and pustule) using selfies taken or uploaded by users. From there, we would provide users with tailored insights on how to alleviate the condition.
## How We Built It
We trained an acne classifier from 200 images using TensorFlow and Keras. Then, we built the back-end component using Flask and the front-end using React.
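A minimal sketch of the Flask inference endpoint the React front-end talks to; the model path, class order, and input size are placeholders.

```python
# Illustrative sketch only: a Flask endpoint that classifies an uploaded selfie.
import io
import numpy as np
import tensorflow as tf
from PIL import Image
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("acne_classifier.h5")
CLASSES = ["blackhead", "cyst", "whitehead", "papule", "pustule"]

@app.route("/predict", methods=["POST"])
def predict():
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    img = img.resize((224, 224))                        # must match training size
    x = np.expand_dims(np.asarray(img) / 255.0, axis=0)
    probs = model.predict(x)[0]
    return jsonify({"condition": CLASSES[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(port=5000)
```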
## Challenges We Ran Into
During the project, we encountered several issues:
* For the ML model, we weren't able to find many quality images available to train on.
* At first, we planned to build the website using only React, but ran into a huge obstacle parsing the exported model JSON since some Keras layers were not compatible with TensorFlow.js, so we decided to implement the back-end using Flask to solve the issue.
* Our team didn't have prior experience with back-end development, so we also faced some issues while implementing it.
## Accomplishments that we're proud of
Using only a very limited set of data, we managed to build our first ML model capable of classifying 5 different types of skin conditions, ***achieving an accuracy of more than 60%***
## What We Learned
As for the technical part, we learned about creating a Machine Learning model using TensorFlow and Keras, and about how to create a full-stack web app with Flask and React. Besides, we also acquired a lot of knowledge about skin conditions, especially the 5 types that we trained our model to classify.
## Future Improvements
* Further enhance the accuracy of our model with a better dataset.
* Train our model to classify the severity of skin conditions, skin types, and skin tones.
* Implement the feature to create personalized skincare regimes based on the given information.
* Develop a mobile app for the project.
* Improve the user experience by making it compatible with more devices.
|
## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles with many friends and family members, one response kept coming up: health. Interestingly, one of the biggest health concerns that impacts everyone comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body.
As a result, we decided to create a user-friendly multi-modal model that can discover their skin discomfort through a simple picture. Then, through accessible communication with a dermatologist-like chatbot, they can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or finding the time to go and wait for a doctor, it is an accessible way to understand the blemishes that appear on one's skin immediately.
## What it does
The app is a skin-detection model that detects skin diseases through pictures. Through a multi-modal neural network, we attempt to identify the disease through training on thousands of data entries from actual patients. Then, we provide them with information on their disease, recommendations on how to treat their disease (such as using specific SPF sunscreen or over-the-counter medications), and finally, we provide them with their nearest pharmacies and hospitals.
## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. After finding a diverse dataset covering roughly 2,000 patients with multiple diseases, we implemented a multi-modal neural network. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image data to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we are making strides towards personalized medicine.
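A minimal sketch of the multi-modal architecture described above, combining a ResNet image branch with a feed-forward clinical branch; the layer sizes and number of conditions are placeholders, not our exact model.

```python
# Illustrative sketch only: image + clinical-feature fusion model in Keras.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CONDITIONS = 10          # placeholder number of skin conditions
NUM_CLINICAL_FEATURES = 12   # placeholder: age, symptoms, duration, etc.

# Image branch: pretrained ResNet50 backbone
img_in = layers.Input(shape=(224, 224, 3), name="image")
backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet")
x = tf.keras.applications.resnet50.preprocess_input(img_in)
x = backbone(x)
x = layers.GlobalAveragePooling2D()(x)

# Clinical branch: simple feed-forward network
clin_in = layers.Input(shape=(NUM_CLINICAL_FEATURES,), name="clinical")
c = layers.Dense(64, activation="relu")(clin_in)
c = layers.Dense(32, activation="relu")(c)

# Fuse both modalities and classify
merged = layers.Concatenate()([x, c])
h = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(NUM_CONDITIONS, activation="softmax")(h)

model = tf.keras.Model(inputs=[img_in, clin_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```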
## Challenges we ran into
The first challenge we faced was finding appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We chose the best epoch count by plotting the loss-vs-epoch and accuracy-vs-epoch curves. Another challenge was utilizing the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text and hallucinating in response to specific prompts; we fixed this by grounding its output in the information the user gave.
## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts. This experience has had its fair share of learning moments and pivoting directions. However, through a great deal of discussions and talking about exactly how we can adequately address our issue and support each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about learning quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying.
## What we learned
One of the aspects we learned from this experience was how to use prompt engineering effectively and ground an AI model in user information. We also learned how to feed multi-modal data into a combined convolutional and feed-forward neural network. In general, we got more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience in building a comprehensive model like SkinSkan, we were able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to use our app on our own and several of our friends' skin with a simple smartphone camera to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people.
## What's next for SkinSkan
We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct them to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
|
losing
|
## Inspiration
While caught up in the excitement of coming up with project ideas, we found ourselves forgetting to follow up on action items brought up in the discussion. We felt that it would come in handy to have our own virtual meeting assistant to keep track of our ideas. We moved on to integrate features like automating the process of creating JIRA issues and providing a full transcript for participants to view in retrospect.
## What it does
*Minutes Made* acts as your own personal team assistant during meetings. It takes meeting minutes, creates transcripts, finds key tags and features and automates the process of creating Jira tickets for you.
It works in multiple spoken languages, and uses voice biometrics to identify key speakers.
For security, the data is encrypted locally - and since it is serverless, no sensitive data is exposed.
## How we built it
Minutes Made leverages Azure Cognitive Services to translate between languages, identify speakers from voice patterns, and convert speech to text. It then uses custom natural language processing to parse out key issues. Interactions with Slack and Jira are done through STDLIB.
## Challenges we ran into
We originally used Python libraries to manually perform the natural language processing, but found they didn't quite meet our demands with accuracy and latency. We found that Azure Cognitive services worked better. However, we did end up developing our own natural language processing algorithms to handle some of the functionality as well (e.g. creating Jira issues) since Azure didn't have everything we wanted.
As the speech conversion is done in real-time, it was necessary for our solution to be extremely performant. We needed an efficient way to store and fetch the chat transcripts. This was a difficult demand to meet, but we managed to rectify our issue with a Redis caching layer to fetch the chat transcripts quickly and persist to disk between sessions.
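A minimal sketch of the Redis caching layer for transcript segments; the key names are illustrative, and persistence to disk is handled by the Redis server configuration.

```python
# Illustrative sketch only: cache live transcript segments in Redis so the UI
# can fetch them quickly between sessions.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def append_segment(meeting_id: str, speaker: str, text: str) -> None:
    """Push a newly transcribed segment onto the meeting's transcript list."""
    r.rpush(f"transcript:{meeting_id}",
            json.dumps({"speaker": speaker, "text": text}))

def get_transcript(meeting_id: str) -> list:
    """Fetch the full transcript for display or Jira-ticket extraction."""
    return [json.loads(s) for s in r.lrange(f"transcript:{meeting_id}", 0, -1)]

append_segment("standup-42", "Alice", "Action item: migrate the auth service.")
print(get_transcript("standup-42"))
```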
## Accomplishments that we're proud of
This was the first time that we all worked together, and we're glad that we were able to get a solution that actually worked and that we would actually use in real life. We became proficient with technology that we've never seen before and used it to build a nice product and an experience we're all grateful for.
## What we learned
This was a great learning experience for understanding cloud biometrics, and speech recognition technologies. We familiarised ourselves with STDLIB, and working with Jira and Slack APIs. Basically, we learned a lot about the technology we used and a lot about each other ❤️!
## What's next for Minutes Made
Next we plan to add more integrations to translate more languages and creating Github issues, Salesforce tickets, etc. We could also improve the natural language processing to handle more functions and edge cases. As we're using fairly new tech, there's a lot of room for improvement in the future.
|
## Inspiration
Prolonged COVID restrictions have caused immense damage to the economy and local markets alike. Shifts in this economic landscape have led many individuals to seek alternate sources of income to account for the losses imparted by a lack of work or general opportunity. One major sector that has seen a boom, despite local market downturns, is investment in the stock market. While stock market trends, at first glance, seem logical and fluid, they're in fact the opposite. Beat earnings expectations? New products on the market? *It doesn't matter!* At the end of the day, a stock's value is inflated by speculation and **hype**. Many see the allure of rapidly rising ticker charts and booming social media trends, and hear the talk of the town about how someone made millions in a matter of a day *cough* **GameStop** *cough*, but more often than not, individual investors lose money when market trends spiral. It is *nearly* impossible to time the market. Our team saw these challenges and wanted to create a platform that accounts for social media trends that may be indicative of early market changes, so that small-time investors can make smart decisions ahead of the curve.
## What it does
McTavish St. Bets is a platform that aims to help small-time investors gain insight on when to buy, sell, or hold a particular stock on the DOW 30 index. The platform uses the recent history of stock data, along with tweets from the same time period, to estimate the future value of the stock. We assume there is a correlation between tweet sentiment towards a company and its future valuation.
## How we built it
The platform was built using a client-server architecture and is hosted on a remote computer made available to the team. The front-end was developed using React.js and Bootstrap for quick and efficient styling, while the backend was written in Python with Flask. The dataset was constructed by the team using a mix of tweets and article headers. The public Twitter API was used to scrape tweets according to popularity, and tweets were ranked against one another using an engagement scoring function. Tweets were processed using a natural language processing module with BERT embeddings, trained for sentiment analysis. Time series prediction was accomplished through a neural stochastic differential equation that also incorporates text information. To incorporate this text data, the latent representations were combined based on the aforementioned scoring function; this combined representation is then fed directly to the network at each timepoint in the series estimation to guide model predictions.
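A minimal sketch of the engagement-weighted averaging of tweet embeddings; the scoring function shown is an assumption, not our exact formula.

```python
# Illustrative sketch only: combine per-tweet BERT embeddings into one daily
# representation, weighted by an engagement score.
import numpy as np

def engagement_score(retweets: int, likes: int) -> float:
    # Retweets weighted more heavily than likes; log damping avoids outliers.
    return np.log1p(2 * retweets + likes)

def daily_representation(embeddings: np.ndarray, retweets, likes) -> np.ndarray:
    """embeddings: (n_tweets, dim) BERT vectors for one trading day."""
    weights = np.array([engagement_score(r, l) for r, l in zip(retweets, likes)])
    weights = weights / weights.sum()
    return weights @ embeddings      # weighted average fed to the neural SDE

day_embs = np.random.randn(3, 768)   # stand-in for BERT [CLS] embeddings
print(daily_representation(day_embs, retweets=[10, 500, 3], likes=[40, 2000, 9]).shape)
```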
## Challenges we ran into
Obtaining data to train the neural SDE proved difficult. The free Twitter API only provides high engagement tweets for the last seven days. Obtaining older tweets requires an enterprise account costing thousands of dollars per month. Unfortunately, we didn’t feel that we had the data to train an end-to-end model to learn a single representation for each day’s tweets. Instead, we use a weighted average tweet representation, weighing each tweet by its importance computed as a function of its retweets and likes. This lack of data extends to the validation side too, with us only able to validate our model’s buy/sell/hold prediction on this Friday's stock price.
Finally, without more historical data, we can only model the characteristics of the market this week, which has been fairly uncharacteristic of normal market conditions. Adding additional data for the trajectory modeling would have been invaluable.
## Accomplishments that we're proud of
* We used several APIs to put together a dataset, trained a model, and deployed it within a web application.
* We put together several animations introduced in the latest CSS revision.
* We commissioned a McGill-themed banner in keeping with the /r/wallstreetbets culture. Credit to Jillian Cardinell for the help!
* Some jank nlp
## What we learned
We learned to use several new APIs, including Twitter's, and how to build web scrapers.
## What's next for McTavish St. Bets
Obtaining much more historical data by building up a dataset over several months (using Twitter's 7-day API). We would also have liked to make the framework reinforcement-learning based, which is data-hungry.
|
## Inspiration
We live in a diverse country with people from all ethnic backgrounds. Not everybody speaks or understands English perfectly. For people, whose first language is not English, information on government websites can get complicated and confusing. People waste a lot of time trying to access the information they need. We also realized that there is no universal Q&A tool that can detect and provide answers to questions in any language for both voice and text. This inspired us to come up with a platform that can help users access information in their preferred language and mode of communication (voice/text).
## What it does
Artemis is a user-friendly Q & A chatbot integrated with the ca.gov website to help users with any questions that they have in the language they are comfortable with. It automatically detects the language your phone/system is configured to and responds to both voice/text-based questions. Currently, it can detect over 24 languages. Artemis has a simple and intuitive interface making it super easy to use and can be used across platforms (both web and mobile).
## How we built it
First, we mapped some of the ca.gov websites' FAQs to the QnA Maker database and created a chatbot with the Microsoft Bot Framework, which we then linked to the database we made. We then added the web chat to the ca.gov page, created the cognitive Speech service in Azure, linked it to the web chat client bot, connected the speech-to-text/text-to-speech API, added the translation middleware, and deployed everything to GitHub Pages and the Azure cloud. Meanwhile, the Sketch assets were designed, developed, and integrated into the main Microsoft HTML file.
Built with Visual Studio Code and designed in Sketch.
## Challenges we ran into
We faced an Azure bot deployment error; everything crashed. We also faced errors in API calls and unauthorized headers and tokens. Another challenge was integrating the CSS code with the main Microsoft HTML file so the backend matched the created designs. Eventually, it all worked out!
## Accomplishments that we are proud of
We are very proud that our project can help the millions of people who do not speak English as their first language access information on government websites. The added voice functionality makes it super easy to use for everyone, including older generations and people who are differently-abled.
## What's next for Artemis
Over time we believe the technology can be adapted by many more enterprises and organizations for their knowledge base and customer support making information more accessible for all!
|
winning
|
## Inspiration
Shohruz, the president of Hunter’s first CS Club, created this club to provide introductory-level educational content to all students interested in CS at Hunter College. Sumayia, the secretary of the club, noticed students struggling in their courses on the Discord server and began providing academic sources in the club’s email newsletters. Sumayia also has a younger sister who is currently experiencing the effects of the pandemic on her education. Thus, it propelled the team to develop an educational tool that can hopefully be integrated into classroom environments.
During the pandemic, Ynalois performed poorly in her college-level science classes. She lost the support that teachers offered in person. She searched for it online, in hopes that the internet could replace that connection. That’s when she realized the value of assistance.
## What it does
Users are able to enter an academic subject into the input box to generate a practice question based on that subject. If the user requires assistance, they can request a hint. If the user wants the answer, they can reveal the solution by clicking a button. By default, a user only gets a maximum of 5 free tries; once they run out, they have to purchase the premium plan to unlock unlimited tries.
## How we built it
We used React for the frontend, Node and Express for our backend, and OpenAI's API to generate the practice questions, hints, and answers.
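For illustration, here is a minimal sketch of the question-generation call, written in Python for brevity even though our backend is Node and Express; the model name and prompt are placeholders.

```python
# Illustrative sketch only (the real backend is Node/Express): ask OpenAI for a
# practice question, three hints, and the answer on a given subject.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def practice_question(subject: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": f"Write one practice question about {subject}, "
                       "then three hints (each more specific than the last), "
                       "then the answer. Label them QUESTION, HINT 1-3, ANSWER.",
        }],
    )
    return resp.choices[0].message.content

print(practice_question("photosynthesis"))
```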
## Challenges we ran into
We wanted to implement Stripe's payment gateway system to collect payments from users for our premium plan. Due to time constraints, we weren't able to implement it.
We also wanted to display all 3 hints in succession after each click, with each hint getting more specific than the previous one. This would help the user gain confidence in their problem-solving skills for that specific subject.
## Accomplishments that we're proud of
We're proud that we were able to successfully set up and integrate OpenAI's API as our MVP to generate practice questions, and then after, be able to generate hints and the answer to the AI-generated practice question.
Moreover, we are also proud of deriving a witty compound word for our website: HoneyDo. Honey is a sweet substance, and dew is condensed moisture, which induces relaxation. We aim to deliver education in a sweet and condensed manner.
## What we learned
Shohruz - I learned how to use OpenAI's chat completion API and furthered my understanding of backend development.
Ynalois - I learned backend development for the first time using Node, Express, and OpenAI's API.
Sumayia - I learned frontend development and React for the first time to create dynamic components.
In a broader context, the pandemic had detrimental effects on students across the globe. In Boston, "60 percent of students at some high-poverty schools have been identified as at high risk for reading problems— twice the number of students as before the pandemic, according to Tiffany P. Hogan, director of the Speech and Language Literacy Lab at the MGH Institute of Health Professions in Boston." If this disparity persists, "poor readers are more likely to drop out of high school, earn less money as adults and become involved in the criminal justice system." The pandemic didn't solely affect low-income groups, but "children in every demographic group."
Billions of federal stimulus dollars are flowing to districts for tutoring and other support, but their effect may be limited if schools cannot find quality staff members to hire. This is where AI comes into play! Our educational tool implements AI to assist students on any subject of their choosing with an option of receiving 3 hints.
## What's next for HoneyDo
To ensure profitability and scalability, we plan to implement Stripe's payment gateway to be able to handle transactions.
Discords-
codez\_
ynabanina
sumayia04
|
## Inspiration 🧑🎓
Imagine: your friend tests you on a quiz, right before it starts. All of a sudden, you're asked questions you never even thought of, studying alone.
Oftentimes, having an outside perspective helps prevent acute tunnel vision when studying for exams. However, students might not always have a trusty pal on hand to help them revise for tests. With **sessions.ai**, we set out to give every student a study buddy, no matter the time or place.
## What it does 📖
Like a study buddy that studies the same thing you do, **sessions.ai** watches over your shoulder and keeps you accountable while you study- all the while quizzing you and filling the gaps in your knowledge.
It does this through **active recall**, a studying technique which involves taking a topic the student wishes to learn, creating questions based on that topic, and then repeatedly testing the student on those questions.
The student is thrust into 20-30 minute **sessions**, each followed by short and succinct questions that simulate an exam scenario, testing their knowledge of what they have learned and retained from the topic studied in that session.
## How we built it 🔨
The building process of sessions.ai can be broadly broken down into three sections:
1. Frontend, UX
2. Creating the **sessions.ai question engine**
3. Engine/frontend connections
**1. Frontend, UX**
Our goal in creating the frontend was a playful, straightforward experience making it easy for anyone to use the platform. The web application is built using NextJs App Router, along with TailwindCSS for a refined user interface and Zustand to manage global state. This combination allowed for quick development, iteration, and state management, letting us fine-tune our product every step of the way.
**2. Creating the sessions.ai question engine**
Our primary objective in creating the question engine that powers sessions.ai was to generate questions relevant to the study material. On opening the app, a PDF of the syllabus is passed to the backend server, where it is parsed and sent to Cohere to serve as context for the subsequent queries.
↓
As for capturing text off the screen, the method we landed on was utilizing PIL's ImageGrab function to continuously stream the user's desktop by capturing a series of images. Next, with the help of OCR software (Tesseract OCR), all words are extracted from the student's study material of choice (PDF, slideshow, textbook, etc.). These aid in the creation of curated questions for the student.
↓
After some intermediate parsing and cleaning up of the OCR text, the notes are sent over the wire to the backend server which processes and feeds them into Cohere, which in turn returns a list of relevant multiple-choice, short, and long answer questions/answers to study with.
↓
These questions are then displayed to the user!
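A minimal sketch of the capture, OCR, and question-generation flow; the prompt wording, timing, and model settings are assumptions, not the engine's exact implementation.

```python
# Illustrative sketch only of the capture -> OCR -> Cohere flow.
import time
import cohere
import pytesseract
from PIL import ImageGrab

co = cohere.Client("COHERE_API_KEY")

def capture_notes(seconds: int = 5, interval: float = 1.0) -> str:
    """Grab the screen a few times and OCR the visible study material."""
    texts = []
    for _ in range(int(seconds / interval)):
        frame = ImageGrab.grab()                  # screenshot of the desktop
        texts.append(pytesseract.image_to_string(frame))
        time.sleep(interval)
    return "\n".join(texts)

def generate_questions(notes: str, syllabus: str) -> str:
    prompt = (f"Syllabus context:\n{syllabus}\n\n"
              f"Student's on-screen notes:\n{notes}\n\n"
              "Write 3 multiple-choice and 2 short-answer questions with answers.")
    return co.chat(message=prompt).text

print(generate_questions(capture_notes(), syllabus="Parsed syllabus PDF text here"))
```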
**3. Engine/frontend connections**
* The frontend and backend were connected via an API layer between the OCR engine and the web interface.
## Challenges we ran into 🏃♂️
OpenCV, the software we used for screen recording was often tricky to work with. Thankfully, ImageGrab from PIL came into the picture and made the screen recording aspect of sessions.ai far simpler.
Our project plan seemed easy at first, but in practice, implementing a native screen reader that closely interacts with a web application is tricky. We overcame this problem using a locally hosted web app. We could then achieve a close-to-native feel for our desktop application with the flexibility and ease of iteration that comes with a web app.
Prompt engineering is difficult! Getting Cohere to process our data in a satisfactory manner and return an object that could in turn be parsed by our backend was difficult, especially considering that there were a few bugs in the Cohere API which made JSON validation difficult to implement server-side. Thankfully, we were able to mitigate this on our end. Phew!
Ensuring API consistency between the frontend and backend required a lot of communication between all of us, especially considering the nested nature of the format (and the often cryptic ways these errors can show up, especially in weakly typed languages **cough cough js**)
We had a few native components (most notably a border that indicates screen recording is active), and getting the app to look well enough on macOS (our target platform for now) required quite a bit of tinkering. We eventually had to go a bit low-level and interact with the native Objective-C Cocoa/AppKit framework to achieve the results we wanted.
These were just a few of the challenges we faced; if we were to elaborate on all of them we would likely run out of space 😅 Suffice it to say there were definitely hardships, but these pale in comparison to the satisfaction of finishing our project and getting it to a well-functioning state.
## Accomplishments that we're proud of 🏆
Creating sessions.ai was quite difficult, especially considering how we were tying together so many different frameworks and services (many of which we only became comfortable with in the past 36 hours). They say necessity breeds innovation; considering the many umm, *exotic* ways in which we managed to tie things together, we can attest to this.
Towards the end, we made sure to gather feedback from mentors, volunteers, and people external to the team. Their suggestions and help made **sessions.ai** even better, and we are very, very grateful!
Outside of those, being able to create a product that will make a change in the way students study is exciting. We thrive on being able to make change, and we're super excited to see what students will do with sessions.ai.
## What we learned
Again, so so much; if we were to elaborate on everything we learned, we would probably run out of space! But nonetheless, we all learned quite a bit about the numerous languages, frameworks, and APIs we used: Cohere, Next.js, Tailwind, Python, Flask, etc, and how to bring them all together to create a coherent (get the pun) product.
## What's next for sessions.ai
**Validation, validation, validation!** (In terms of the questions and answers). We were about to implement this; unfortunately, we ran out of time :( But this is of utmost importance, and is definitely something that will be added in the near future.
We also plan on growing its capabilities even further with LaTeX parsing. The Tesseract OCR software had slight trouble with mathematics-based problems, but we expect that with a LaTeX-parsing integration, these troubles will no longer exist.
We would also implement user auth and create a native app in Tauri/Electron.
Here's a fun reel we made about the experience: <https://youtu.be/5O64DHUfKjE>
Thank you for reading!
|
## Inspiration
The inspiration for our Auto-Teach project stemmed from the growing need to empower both educators and learners with a **self-directed and adaptive** learning environment. We were inspired by the potential to merge technology with education to create a platform that fosters **personalized learning experiences**, allowing students to actively **engage with the material while offering educators tools to efficiently evaluate and guide individual progress**.
## What it does
Auto-Teach is an innovative platform that facilitates **self-directed learning**. It allows instructors to **create problem sets and grading criteria** while enabling students to articulate their problem-solving methods and responses through text input or file uploads (a future feature). The software leverages AI models to assess student responses, offering **constructive feedback**, **pinpointing inaccuracies**, and **identifying areas for improvement**. It features automated grading capabilities that can evaluate a wide range of responses, from simple numerical answers to comprehensive essays, with precision.
## How we built it
Our deliverable for Auto-Teach is a full-stack web app. Our front end uses **ReactJS** as our framework and manages data using **Convex**. It also leverages editor components from **TinyMCE** to give students a better experience editing their inputs. We built back-end APIs using **FastAPI** and the **Together.ai API** to power the AI evaluation feature.
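A minimal sketch of what a grading endpoint like ours could look like with FastAPI follows; the route name, request schema, and prompt are illustrative assumptions, not the exact Auto-Teach implementation:

```python
# Minimal sketch of a grading endpoint; the route, request schema, and prompt
# are illustrative assumptions, not the exact Auto-Teach implementation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Submission(BaseModel):
    question: str
    rubric: str
    answer: str

def call_llm(prompt: str) -> str:
    # Placeholder for the Together.ai completion call.
    raise NotImplementedError

@app.post("/evaluate")
def evaluate(sub: Submission) -> dict:
    prompt = (
        f"Question: {sub.question}\n"
        f"Grading criteria: {sub.rubric}\n"
        f"Student answer: {sub.answer}\n"
        "Give constructive feedback, point out inaccuracies, "
        "and identify areas for improvement."
    )
    return {"feedback": call_llm(prompt)}
```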
## Challenges we ran into
We had trouble incorporating Vectara's REST API and MindsDB into our project because we were not very familiar with their structure and implementation. We eventually figured out how to use them, but struggled with the time constraint. We also faced the challenge of crafting the most effective prompt for the chatbot so that it generates the best response to student submissions.
## Accomplishments that we're proud of
Despite the challenges, we're proud to have successfully developed a functional prototype of Auto-Teach. Achieving an effective system for automated assessment, providing personalized feedback, and ensuring a user-friendly interface were significant accomplishments. We are also proud that, in the end, we effectively incorporated many technologies like Convex and TinyMCE into our project.
## What we learned
We learned how to work with backend APIs and how to generate effective prompts for the chatbot. We also got introduced to AI-incorporated databases such as MindsDB and were fascinated by what they can accomplish (such as generating predictions based on data arriving on a streaming basis and getting regular updates on information passed into the database).
## What's next for Auto-Teach
* Divide the program into **two mode**: **instructor** mode and **student** mode
* **Convert Handwritten** Answers into Text (OCR API)
* **Incorporate OpenAI** tools along with Together.ai when generating feedback
* **Build a database** storing all relevant information about each student (ex. grade, weakness, strength) and enabling automated AI workflow powered by MindsDB
* **Complete analysis** of each student's performance on different types of questions, allowing teachers to learn about the student's weaknesses.
* **Fine-tuned grading model** using tools from Together.ai to calibrate the model to better provide feedback.
* **Notify** students instantly about their performance (could set up notifications using MindsDB and get notified every day about any poor performance)
* **Upgrade security** to protect against any unauthorized access
|
losing
|
## Inspiration
Our inspiration came from hearing industry colleagues say that they often struggle to keep up with the new research occurring in their domain. Research materials are often spread across multiple different sites and require users to actively search for information.
## What it does
We have created a web application that scrapes RSS feeds from the web and consolidates the information.
## How we built it
For our backend, we use Google Cloud Postgres storage along with Python FastAPI; our front end uses React and a CSS framework called Bulma.
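As a rough sketch of the consolidation idea (assuming the feedparser library; the feed URL and field names below are illustrative, not our actual configuration):

```python
# Sketch of the RSS-consolidation step: pull entries from a list of feeds
# into a uniform structure that the backend can store and the frontend can
# display. Feed URLs here are hypothetical.
import feedparser

FEEDS = [
    "https://example.org/research.rss",  # hypothetical feed URL
]

def collect_entries() -> list:
    entries = []
    for url in FEEDS:
        feed = feedparser.parse(url)
        for item in feed.entries:
            entries.append({
                "title": item.get("title", ""),
                "link": item.get("link", ""),
                "published": item.get("published", ""),
            })
    return entries
```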
## Challenges we ran into
We ran into some challenges integrating with the database.
## Accomplishments that we're proud of
We are proud of the various moving pieces coming together from the backend to the front end.
## What we learned
We learned about various tools that help with development, as well as the value of working in a team to debug and troubleshoot issues.
## What's next for nuResearch
For the future, we also plan to use Twilio SendGrid to send out email notifications to subscribers.
|
## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question the user asks. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, which communicates with a Flask backend that runs our Python scripts involving API calls and such. We make API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, and to OpenAI's Whisper speech-to-text model, plus several modules for getting an mp4 from a YouTube link, text from a PDF, and so on.
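A rough sketch of the semantic-search step follows; it assumes the page and question embeddings have already been fetched (from OpenAI's embedding endpoint in our case) and simply ranks pages by cosine similarity with NumPy:

```python
# Rank pages by cosine similarity to a question embedding.
# How the embeddings are obtained is abstracted away here.
import numpy as np

def top_pages(question_vec: np.ndarray, page_vecs: np.ndarray, k: int = 3):
    # Normalize, then cosine similarity reduces to a dot product.
    q = question_vec / np.linalg.norm(question_vec)
    p = page_vecs / np.linalg.norm(page_vecs, axis=1, keepdims=True)
    scores = p @ q
    return np.argsort(scores)[::-1][:k]  # indices of the k best-matching pages
```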
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 on the extracted information to formulate the answer, we believe we likely equal the state of the art in quickly analyzing text and answering complex questions about it, and the ease of use across many different file formats makes us proud that this project and website can be useful to so many people, so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness).
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and therefore sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper.
|
## Inspiration
We were inspired by the need for alternatives to the long and costly process of getting bank loans today. Over 30 million Americans visit pawn shops every year, but are often offered much lower prices than they could be because pawn shop owners are often not able to sell their items to the best buyer. Pawnhub connects people looking for low interest rate loans with people interested in alternative investments in an exciting and lucrative new financial area.
## What it does
Pawnhub lets people easily get loans at lower rates than the current alternatives by pawning their physical items to others on the platform.
Instead of paying high rates for payday loans or other financial industry alternatives, users can quickly get access to cash at the low rates that come with a collateral backed loan. We also allow people with money to spare to invest that in a new and innovative way, earning higher interest rates than they could get from other investment options.
Pawnhub connects these two groups by acting as the third-party escrow that borrowers send their items to.
## How we built it
We built Pawnhub using PHP, Javascript, and Swift for our iOS application. The website is fully responsive and accessible from almost any device.
## Challenges we ran into
Coming into the hackathon, most of our team had little experience in programming, particularly with web and mobile applications.
We also had trouble coming up with ideas in the fintech space. Because fintech tends to be highly dependent on regulation, it was a challenge to come up with a hack that was feasible given the financial regulatory environment. We went through a number of ideas before we arrived on PawnHub.
## Accomplishments that we're proud of
We are proud of everything that we've learned and the way that we took an idea and turned it into something that can improve people's lives.
## What we learned
We all greatly improved our web development skills, particularly with regards to marketplaces and apps where users interact with each other.
## What's next for Pawnhub
Partnering with appraisers and other experts on common items that are pawned such as jewellery, watches, and electronics. We would also like to partner with a fulfillment and storage company that could store the collateral items for us while the loans are in progress and be able to quickly send them back to our users once the loans are paid.
|
partial
|
## What it does
Tabs allows you and your friends to easily keep track of your borrowed money, without needing to transfer any funds.
## Challenges we ran into
We originally wanted to do a hardware project, but didn't have all the supplies we needed so we struggled a little bit with coming up with a new idea.
## Accomplishments that we're proud of
We think the app looks really cute!
## What we learned
None of us had made an app before and this was our first time using React Native, so we all got to learn the basics of the framework.
|
## Inspiration
With the cost of living increasing yearly and inflation at an all-time high, people need financial control more than ever. The problem is that the investment field is not beginner friendly, especially with its confusing vocabulary and abundance of concepts, which create an environment detrimental to learning. We felt the need to make a clear, well-explained environment for learning about investing and money management, so we created StockPile.
## What it does
StockPile provides a simulation environment of the stock market, allowing users to create virtual portfolios in real time. With relevant information and explanations built into the UX, the complex world of investments is explained in simple words, one step at a time. Users can set up multiple portfolios to try different strategies, learn vocabulary by seeing exactly where the terms apply, and access articles tailored to their actions in the simulator using AI-based recommendation engines.
## How we built it
Before starting any code, we planned and prototyped the application using Figma and also fully planned a backend architecture. We started our project using React Native for a mobile app, but due to connection and network issues while collaborating, we moved to a web app that runs on the phone using React.
## Challenges we ran into
Some challenges we faced was creating a minimalist interface without the loss of necessary information, and incorporating both learning and interaction simultaneously. We also realized that we would not be able to finish much of our project in time, so we had to single out what to focus on to make our idea presentable.
## Accomplishments that we're proud of
We are proud of our interface, the depth of which we fleshed out our starter concept, and the ease of access of our program.
## What we learned
We learned about
* Refining complex ideas into presentable products
* Creating simple and intuitive UI/UX
* How to use React Native
* Finding stock data from APIs
* Planning backend architecture for an application
## What's next for StockPile
Next up for StockPile would be to actually finish coding the app, preferably in a mobile version over a web version. We would also like to add the more complicated views, such as explanations for candle charts, market volume charts, etc. in our app.
## How StockPile approaches its challenges:
#### Best Education Hack
Our entire project is based around encouraging, simplifying and personalizing the learning process. We believe that everyone should have access to a learning resource that adapts to them while providing them with a gentle yet complete introduction to investing.
#### MLH Best Use of Google Cloud
Our project uses several Google services at its core.
- GCP App Engine - We can use app engine to host our react frontend and some of our backend.
- GCP Cloud Functions - We can use Cloud Functions to quickly create microservices for different services, such as a backend for fetching stock chart data from Finnhub.
- GCP Compute Engine - To host a CMS for the learn page content, and to host instance of CockroachDB
- GCP Firebase Authentication to authenticate users securely.
- GCP Recommendations AI - Used with other statistical operations to analyze a user's portfolio and present them with articles/tutorials best suited for them in the learn section.
#### MLH Best Use of CockroachDB
CockroachDB is a distributed SQL database - one that can scale. We understand that buying/selling stocks is transactional in nature, and there is no better solution than using a SQL database. Additionally, we can use CockroachDB as a time-series database - this allows us to effectively cache stock price data so we can optimize the cost of new requests to our stock quote API.
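As a small illustration of why transactional SQL matters here, a simulated buy must debit virtual cash and record the position atomically. The sketch below uses psycopg2 (CockroachDB speaks the PostgreSQL wire protocol); the table and column names are assumptions, not our actual schema:

```python
# Sketch: a simulated trade as a single transaction, so debiting cash and
# recording the holding succeed or fail together.
import psycopg2  # CockroachDB is PostgreSQL wire compatible

def buy_stock(conn, portfolio_id: int, symbol: str, qty: int, price: float):
    cost = qty * price
    with conn:  # commits on success, rolls back on any exception
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE portfolios SET cash = cash - %s "
                "WHERE id = %s AND cash >= %s",
                (cost, portfolio_id, cost),
            )
            if cur.rowcount != 1:
                raise ValueError("insufficient virtual funds")
            cur.execute(
                "INSERT INTO holdings (portfolio_id, symbol, qty, price) "
                "VALUES (%s, %s, %s, %s)",
                (portfolio_id, symbol, qty, price),
            )
```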
|
## 🤯 Inspiration
As busy and broke college students, we’re usually missing semi-essential items. Most of us just suffer a little and just go without, but what if there was an alternative? Say you need a vacuum. More often than not, someone living in your hall has one they aren’t opposed to sharing! Building upon this principle, our app aims to **connect** “haves” with “have-nots” and create a closer community along the way.
## 🧐 What it does
Our app provides an easy-to-use platform for students to share favors with each other; two clear use cases are borrowing items and running convenience store errands. In addition, the application encourages tighter communities and helps reduce consumerist waste (not everyone in a dorm hall needs their own of everything!).
## 🥸 How we built it
* **Frontend**: built in React Native with Expo, run on Xcode simulator
* **Backend**: authentication with Firebase; TypeScript, TypeORM, and GraphQL used to power a Node server, with the Apollo editor, that communicates with CockroachDB
* **Design and UI**: Figma and Google Slides
* **Pitching**: Loom and Adobe Premiere
## 😅 Challenges we ran into
* We were unable to find a UI/UX designer for our team and initially struggled with getting the project off the ground. Heather dedicated most of her time filling that role by learning how to operate Figma and tried her very best to make an aesthetically pretty mock-up and final pitch.
* It was also difficult to work across many time zones and keep track of all members; we lost a backend person at the last minute, so Hung stepped up to the challenge and learned GraphQL, CockroachDB, and TypeORM in a really short time.
* And, of course, scope.
## 😊 Accomplishments that we're proud of
* Heather is super proud of surviving her first hackathon and having her idea finally somewhat come to life! She also now realizes how much there is left to learn and is excited to explore more into UI/UX design and what goes into developing a mobile app.
* Hung somehow managed to implement a React Native app with Expo and a GraphQL & Node server in less than 24 hours
## 🤔 What we learned
* We learned that having a reliable designer is super important, and how time moves super fast when you are having fun!
* Having a high bar is good but also terrifying :^(
## 😤 What's next for Favor App
We built a relatively functional minimum featured project over the past two days; however, we would like to implement GPS reliability and optimization algorithms in order to increase the amount of favors completed and make fulfilling favors easier. The ultimate goal is to tailor favor requests so fulfilling them doesn’t deviate from the helpers’ normal daily routines. We would also like to include more game-like features and other incentives. We could see ourselves using and relying on something like this a lot, so this hackathon will hopefully not be the end!
|
partial
|
## Inspiration
According to the CDC, cavities are one of the most prevalent chronic diseases of childhood in the US. About 1 out of 5 children aged 5 to 11 years have at least one untreated decayed tooth. Untreated cavities can cause pain and infections that affect every facet of children’s life, from eating, speaking, to playing and learning. Brushing teeth regularly has been proven to be the easiest and most effective way to help children develop strong healthy teeth. However, through many surveys conducted, a substantial number of children do not follow a regular routine of brushing their teeth and don’t know how to brush their teeth correctly. This particular problem can be attributed to the fact that children don’t find enjoyment in brushing their teeth and aren’t properly educated on how to brush teeth correctly.
Realizing those particular issues with children’s tooth brushing habits and inspired by Dr. Jesus Del Valle’s vision of “gamifying” healthcare, we decided to build Brushy. Our web application provides a fun and interactive learning environment to help children find enjoyment in the process of brushing their teeth.
## What it does
Our web application uses a trained machine learning model to check whether children are brushing their teeth correctly.
## How we built it
We used machine learning to train on different tooth-brushing motions. We used React.js for building the web app, MongoDB for storing scores, GCP serverless functions for setting up a REST API, and TensorFlow.js for the ML model.
## Challenges we ran into
It was hard to synchronize our work as we live in different parts of the US.
## Accomplishments that we're proud of
We had a functional ML model.
## What we learned
## What's next for Brushy
Improve the user interface experience
|
## Purpose:
Food waste is an extensive issue affecting people all across the globe; in fact, according to the UN Environment Programme, approximately ⅓ of food produced for human consumption globally is lost or wasted annually (Made in CA). After learning this, we were inspired to create a website that provides numerous zero-waste recipes from just a scan of your grocery receipt. Our innovative website not only addresses the pressing concern of food waste, but also empowers individuals to make a meaningful impact in their own kitchens!
## General Information:
Interactive website that offers clients vast and meaningful alternatives to unsustainable cooking.
Benefits range from the reduction of food waste to enhancing and simplifying meal planning.
Our model is unique because it incorporates fast and easy-to-use technology (i.e. receipt scanning) that provides users with recipes within seconds, in comparison to traditional, tedious websites on the market that require users to manually input each ingredient, unnecessarily prolonging their stay.
## How we built it:
The frontend of our project was created with HTML and CSS, whereas the backend was created with Flask. Image recognition services were implemented using Google Cloud API.
We chose HTML because it is lightweight and fast to load, ensuring a splendid user experience. We chose CSS because it is time-saving due to its simple-to-use nature, as well as its striking ability to offer flexible positioning of design elements. As first-time hackers without much experience, we chose Flask for its simplicity and features, such as a built-in development server and quick debugger. Google Cloud API was pivotal in extracting the information provided by the user because of the text recognition feature in its OCR tool, allowing us to center our model around grocery receipts.
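A minimal sketch of the receipt-scanning step, assuming Google Cloud Vision's text detection; the ingredient matching is heavily simplified and the ingredient set is an illustrative placeholder:

```python
# Sketch only: OCR a receipt photo with Cloud Vision, then keep the words
# that match a known ingredient list. The ingredient set is hypothetical.
from google.cloud import vision

KNOWN_INGREDIENTS = {"tomato", "basil", "pasta", "onion"}  # placeholder set

def ingredients_from_receipt(path: str) -> set:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    full_text = response.text_annotations[0].description if response.text_annotations else ""
    words = {w.strip(".,").lower() for w in full_text.split()}
    return words & KNOWN_INGREDIENTS
```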
## Challenges we ran into:
Learning HTML and CSS - 2 of our group members were relatively new to coding and had no experience in frontend web dev whatsoever!
Delegating tasks effectively between team members spending the night vs. going home - Constant collaboration over Discord was crucial!
Learning Google Cloud API - All of our group members were new to Google Cloud API, so simultaneously learning + implementing it within 36 hours was definitely challenging.
## Accomplishments that we're proud of:
As a complete beginner team, we are extremely ecstatic about our large-scale efforts and progress at Hack the 6ix! From learning web dev from scratch to experimenting with completely new frameworks to creating our personal logo to making and editing a video in under an hour, our experience has been nothing short of a rollercoaster ride. Although new to the field, we made sure to bravely tackle each challenge presented to us and give our best efforts throughout the hacking period, which can be exemplified by our choices to work long hours past midnight, ask mentors for advice when needed, and constantly improvise our front and backend for a more complete user interface experience!
## What’s next for Eco Eats:
YES, we’re not done just yet! Here are a few things that we think we can consolidate with our current idea of Eco Eats to make it a cut above!
A feature to take a photo of your receipt on the website
A feature to let users see other recipes with similar ingredients that can be substituted with the ones they have
Expand our idea to PC parts, so that we can offer clients possible ideas for custom PCs to assemble with their old receipts
|
team 17
## Inspiration
The idea for this project came from a desire to reduce the inefficiency of dog adoption. Out of 30,000 dogs taken in by shelters each year, only 50% get adopted and 15% end up euthanized. By streamlining the pet/owner matching process, we hope to increase both the number of happy dogs and happy families.
## What it does
#### Shelter side
Shelters create a profile for each dog. They add a picture and specify its name, its breed, its age, its gender, its activity level, and whether it is safe for kids to be around.
#### User side
Users specify their preferred breeds, activity level, age range, gender, and whether they want a kid-safe dog. Our algorithm then starts showing them appropriate dogs from shelters close to them. If they swipe right on a compatible dog, it's a match! They can then schedule a visit with the shelter.
## How we built it
We imagined our brand identity and designed the UI with Figma. The front end was implemented using React. We coded the back end with Express.js and Node.js; it connects to a Firestore database. The backend is hosted as a Docker container on a Google Cloud Run instance, and the frontend is hosted on GitHub Pages. We registered the domain adoptagoodboy.online as part of the domain.com challenge. The descriptiveness of our domain name should help with SEO and user acquisition.
## Challenges we ran into
We ran into issues trying to update documents in our Firestore database. Some methods in the Firebase SDK did not behave as expected, and we could not solve the issue in time. We were still able to implement a workaround on the front end for demo purposes.
## Accomplishments that we are proud of
This was our first time working on a full web app for a hackathon and transforming our ideas and designs into a real product felt very rewarding. We are proud that we were able to build a working prototype in the short time given.
## What we learned
This project gave us the opportunity to learn a lot about web development. Starting from a UI prototype and working from there. Since we coded the front end, the back end and refined the design all at the same time, effective and efficient cooperation was needed.
## What's next for Goodboy
This is the big question. The logical next step would be to form a relationship with dog shelters since they are the backbone of our project. This would also give us some visibility with their clients. We could also work on extending Goodboy to other pets such as cats and turtles.
|
losing
|
## Inspiration
As college students who are on a budget when traveling from school to the airport, or from campus to a different city, we found it difficult to coordinate rides with other students. The Facebook and GroupMe group chats are always flooded with students scrambling to find people to carpool with at the last minute to save money.
## What it does
Ride Along finds and pre-schedules passengers who are headed between the same start and final location as each driver.
## How we built it
Built using the Bubble.io framework, utilizing the Google Maps API.
## Challenges we ran into
Certain annoyances when using Bubble and figuring out how to use it. Had style issues with alignment, and certain functionalities were confusing at first and required debugging.
## Accomplishments that we're proud of
Using the Bubble framework properly and their built in backend data feature. Getting buttons and priority features implemented well, and having a decent MVP to present.
## What we learned
There are a lot of challenges when integrating multiple features together. Getting a proper workflow is tricky and takes lots of debugging and time.
## What's next for Ride Along
We want to get a Google Maps API key to properly be able to deploy the web app and be able to functionally use it. There are other features we wanted to implement, such as creating messages between users, etc.
|
## Inspiration
Parker was riding his bike down Commonwealth Avenue on his way to work this summer when a car pulled out of nowhere and hit his front tire. Luckily, he wasn't hurt, but he saw his life flash before his eyes in that moment, and it really left an impression on him. (His bike made it out okay as well, other than a bit of tire misalignment!)
As bikes become more and more ubiquitous as a mode of transportation in big cities with the growth of rental services and bike lanes, bike safety is more important than ever.
## What it does
We designed *Bikeable*, a Boston directions app for bicyclists that uses machine learning to generate directions for users based on bike accidents recorded in police reports. You simply enter your origin and destination, and Bikeable creates a path for you to follow that balances efficiency with safety. While it's comforting to know that you're on a safe path, we also incorporated heat maps, so you can see the hotspots where bicycle theft and accidents occur and be more well-informed in the future!
## How we built it
Bikeable is built in Google Cloud Platform's App Engine (GAE) and utilizes the best features of three mapping APIs: Google Maps, HERE.com, and Leaflet, to deliver directions in one seamless experience. Being built on GAE, Flask served as a solid bridge between a Python backend with machine learning algorithms and an HTML/JS frontend. Domain.com allowed us to get a cool domain name for our site, and GCP allowed us to connect many small features quickly as well as host our database.
## Challenges we ran into
We ran into several challenges.
Right off the bat we were incredibly productive, and got a snappy UI up and running immediately through the accessible Google Maps API. We were off to an incredible start, but soon realized that the only effective way to best account for safety while maintaining maximum efficiency in travel time would be by highlighting clusters to steer waypoints away from. We realized that the Google Maps API would not be ideal for the ML in the back-end, simply because our avoidance algorithm did not work well with how the API is set up. We then decided on the HERE Maps API because of its unique ability to avoid areas in the algorithm. Once the front end for HERE Maps was developed, we soon attempted to deploy to Flask, only to find that JQuery somehow hindered our ability to view the physical map on our website. After hours of working through App Engine and Flask, we found a third map API/JS library called Leaflet that had much of the visual features we wanted. We ended up combining the best components of all three APIs to develop Bikeable over the past two days.
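As a rough sketch of that clustering step (assuming scikit-learn's DBSCAN; the parameters and data layout are illustrative, not our trained model):

```python
# Sketch: group reported bike-accident coordinates into hotspots whose
# bounding boxes can be passed to the routing API as areas to avoid.
import numpy as np
from sklearn.cluster import DBSCAN

def hotspot_boxes(accident_latlngs: np.ndarray, eps=0.002, min_samples=5):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(accident_latlngs)
    boxes = []
    for label in set(labels) - {-1}:  # -1 marks noise points
        pts = accident_latlngs[labels == label]
        boxes.append((pts.min(axis=0), pts.max(axis=0)))  # (south-west, north-east)
    return boxes
```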
The second large challenge we ran into was the Cross-Origin Resource Sharing (CORS) errors that seemed to never end. In the final stretch of the hackathon, we were getting ready to link our front and back end with JSON files, but we kept getting blocked by CORS errors. After several hours of troubleshooting, we realized our mistake of crossing between localhost and the public domain: we kept deploying to test rather than running locally through Flask.
## Accomplishments that we're proud of
We are incredibly proud of two things in particular.
Primarily, all of us worked on technologies and languages we had never touched before. This was an insanely productive hackathon, in that we honestly got to experience things that we never would have the confidence to even consider if we were not in such an environment. We're proud that we all stepped out of our comfort zone and developed something worthy of a pin on github.
We also were pretty impressed with what we were able to accomplish in the 36 hours. We set up multiple front ends, developed a full ML model complete with incredible data visualizations, and hosted on multiple different services. We also did not all know each other and the team chemistry that we had off the bat was astounding given that fact!
## What we learned
We learned BigQuery, NumPy, Scikit-learn, Google App Engine, Firebase, and Flask.
## What's next for Bikeable
Stay tuned! Or invest in us that works too :)
**Features that are to be implemented shortly and fairly easily given the current framework:**
* User reported incidents - like Waze for safe biking!
* Bike parking recommendations based on theft reports
* Large altitude increase avoidance to balance comfort with safety and efficiency.
|
## Inspiration
Flyers are everywhere around our campuses advertising various kinds of meetups and events. It's hard to keep track of them and often it's impossible to copy them down when in a rush. We provide a solution to that problem while also using weather and transportation data to supplement the flyer information so that you can stay informed and up to date on all the happenings around you!
## What it does
Our solution is a mobile app that allows users to take a picture of a flyer and automatically sync the event information with their calendar, eventually integrating with other conveniences of modern life. Our app looks at the event location and suggests a Lyft ride if appropriate. It also notifies you of the weather forecast so that you can plan ahead.
## How we built it
We built the backend of our app using Flask, and the frontend using iOS and React. We built the backend as an API so it would stay agnostic to the frontend as Will and Michael continued to develop the mobile applications.
## Challenges we ran into
Over the weekend, we ran into two primary challenges: Platform integration and image segmentation. Although leveraging the Google Cloud Vision API made character detection much simpler, it was still difficult to associate each word with event fields it represents. We tried a few approaches while solving this and decided to use a combination of wit.ai, parsing, and geocoding techniques. We also had issues sharing images between mobile and server applications with low latency.
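As one illustrative piece of that field-association puzzle, a fuzzy date parse can pull a likely event time out of the raw OCR words; this is a simplification of the wit.ai + parsing + geocoding combination we actually used:

```python
# Sketch: guess an event date/time from OCR'd flyer text with dateutil.
from dateutil import parser

def guess_event_datetime(ocr_text: str):
    try:
        # fuzzy=True skips over words that are not part of a date/time
        return parser.parse(ocr_text, fuzzy=True)
    except (ValueError, OverflowError):
        return None

# Hypothetical flyer text for illustration
print(guess_event_datetime("ACM Club Meetup  Oct 12 6:30 PM  Gates Hall"))
```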
## Accomplishments that we're proud of
It works! Just like we'd imagined, we can use our phones to take pictures of flyers and add them to our calendars. Not only was it fulfilling to build, but we're also happy to go home with a nifty new utility.
## What we learned:
We learned how to debug POST/GET requests between our front-end and back-end as a team. In addition, we learned how to use Expo with React-Native as our development environment, as well as the ES6 syntax for async processes. We learned how to make calendar events for iOS. Finally, we learned how to use the Google Cloud Vision API to process our images to find important and relevant information.
|
partial
|
# The Ultimate Water Heater
February 2018
## Authors
This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan.
## About
Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource.
Through the accruement of numerous APIs and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater, giving a low-cost, easy-to-install device that works in many different situations. We also empower the user to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future.
Some key features we have:
* 90% energy efficiency
* An average energy consumption rate of roughly 10 kW
* Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure
* Clean and easily understood UI for typical household users
* Incorporation of the Internet of Things for convenience of use and versatility of application
* Saving, on average, 5 gallons per shower, or over **100 million gallons of water daily**, in CA alone. \*\*\*
* Cheap cost of installation and immediate returns on investment
## Inspiration
By exploring the RhoAI data dump of 2015 Californian home appliance usage with R scripts, it became clear that water heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends led to important conclusions: many water heaters are large consumers of gas and yet are frequently neglected, most likely due to the trouble of attaining successful installations and repairs.
So we set our eyes on a safe, cheap, and easily accessed water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process that is replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water heaters, and would eventually prove to be even better.
## How It Works
Our project essentially operates in several core parts running simultaneously:
* Arduino (101)
* Heating Mechanism
* Mobile Device Bluetooth User Interface
* Servers connecting to the IoT (and servicing via Alexa)
Repeat all processes simultaneously
The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity.
The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors. Designing the heating device involved heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but had to be accurate for performance reasons--Wolfram Mathematica provided inhuman assistance here. ;)
The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at an abstract level, taking away from the complexity of energy analysis and power grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed in an effective and aesthetically pleasing UI. It also analyzes the current and future projections of energy consumption via the data provided by California ISO to most optimally time the heating process at the swipe of a finger.
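A simplified sketch of that timing decision: given hourly demand projections from the ISO feed, the app simply favors the least-loaded upcoming hour. The forecast numbers below are placeholders, not real ISO data:

```python
# Sketch: pick the least-loaded upcoming hour to run the heater.
def best_heating_hour(hourly_forecast_mw: dict) -> int:
    # hour-of-day -> forecast grid demand in MW; lower demand = better time to heat
    return min(hourly_forecast_mw, key=hourly_forecast_mw.get)

forecast = {6: 22000.0, 7: 25000.0, 13: 21000.0, 18: 29000.0}  # hypothetical
print(best_heating_hour(forecast))  # -> 13
```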
The Internet of Things provides even more versatility to the convenience of the application in Smart Homes and with other smart devices. The implementation of Alexa encourages the water heater as a front-leader in an evolutionary revolution for the modern age.
## Built With:
(In no particular order of importance...)
* RhoAI
* R
* Balsamiq
* C++ (Arduino 101)
* Node.js
* Tears
* HTML
* Alexa API
* Swift, Xcode
* BLE
* Buckets and Water
* Java
* RXTX (Serial Communication Library)
* Mathematica
* MatLab (assistance)
* Red Bull, Soylent
* Tetrix (for support)
* Home Depot
* Electronics Express
* Breadboard, resistors, capacitors, jumper cables
* Arduino Digital Temperature Sensor (DS18B20)
* Electric Tape, Duct Tape
* Funnel, for testing
* Excel
* Javascript
* jQuery
* Intense Sleep Deprivation
* The wonderful support of the people around us, and TreeHacks as a whole. Thank you all!
\*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2>
Special thanks to our awesome friends Michelle and Darren for providing moral support in person!
|
## Inspiration 🔥
While on the way to CalHacks, we drove past a fire in Oakland Hills that had started just a few hours prior, meters away from I-580. Over the weekend, the fire quickly spread and ended up burning an area of 15 acres, damaging 2 homes and prompting 500 households to evacuate. This served as a harsh reminder that wildfires can and will start anywhere as long as few environmental conditions are met, and can have devastating effects on lives, property, and the environment.
*The following statistics are from the year 2020[1].*
**People:** Wildfires killed over 30 people in our home state of California. The pollution is set to shave off a year of life expectancy of CA residents in our most polluted counties if the trend continues.
**Property:** We sustained $19b in economic losses due to property damage.
**Environment:** Wildfires have made a significant impact on climate change. It was estimated that the smoke from CA wildfires made up 30% of the state’s greenhouse gas emissions. UChicago also found that “a single year of wildfire emissions is close to double emissions reductions achieved over 16 years.”
Right now (as of 10/20, 9:00AM): According to Cal Fire, there are 7 active wildfires that have scorched a total of approx. 120,000 acres.
[[1] - news.chicago.edu](https://news.uchicago.edu/story/wildfires-are-erasing-californias-climate-gains-research-shows)
## Our Solution: Canary 🐦🚨
Canary is an early wildfire detection system powered by an extensible, low-power, low-cost, low-maintenance sensor network solution. Each sensor in the network is placed in strategic locations in remote forest areas and records environmental data such as temperature and air quality, both of which can be used to detect fires. This data is forwarded through a WiFi link to a centrally-located satellite gateway computer. The gateway computer leverages a Monogoto Satellite NTN (graciously provided by Skylo) and receives all of the incoming sensor data from its local network, which is then relayed to a geostationary satellite. Back on Earth, we have a ground station dashboard that would be used by forest rangers and fire departments that receives the real-time sensor feed. Based on the locations and density of the sensors, we can effectively detect and localize a fire before it gets out of control.
## What Sets Canary Apart 💡
Current satellite-based solutions include Google’s FireSat and NASA’s GOES satellite network. These systems rely on high-quality **imagery** to localize the fires, quite literally a ‘top-down’ approach. Google claims it can detect a fire the size of a classroom and notify emergency services in 20 minutes on average, while GOES reports a latency of 3 hours or more. We believe these existing solutions are not effective enough to prevent the disasters that constantly disrupt the lives of California residents as the fires get too big or the latency is too high before we are able to do anything about it. To address these concerns, we propose our ‘bottom-up’ approach, where we can deploy sensor networks on a single forest or area level and then extend them with more sensors and gateway computers as needed.
## Technology Details 🖥️
Each node in the network is equipped with an Arduino 101 that reads from a Grove temperature sensor. This is wired to an ESP8266 that has a WiFi module to forward the sensor data to the central gateway computer wirelessly. The gateway computer, using the Monogoto board, relays all of the sensor data to the geostationary satellite. On the ground, we have a UDP server running in Google Cloud that receives packets from the satellite and is hooked up to a Streamlit dashboard for data visualization.
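For illustration, the ground-station side can be as simple as a UDP listener like the sketch below; the port and packet format shown are assumptions, not our exact deployment:

```python
# Sketch of a ground-station UDP listener that receives sensor packets
# relayed through the satellite link.
import socket

def run_ground_station(host: str = "0.0.0.0", port: int = 5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, addr = sock.recvfrom(1024)
        # e.g. b"node_id=3,temp_c=41.2,air_quality=187"  (hypothetical format)
        print(f"packet from {addr}: {data.decode(errors='replace')}")
```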
## Challenges and Lessons 🗻
There were two main challenges to this project.
**Hardware limitations:** Our team as a whole is not very experienced with hardware, and setting everything up and getting the different components to talk to each other was difficult. We went through 3 Raspberry Pis, a couple Arduinos, different types of sensors, and even had to fashion our own voltage divider before arriving at the final product. Although it was disheartening at times to deal with these constant failures, knowing that we persevered and stepped out of our comfort zones is fulfilling.
**Satellite communications:** The communication proved to be tricky due to inconsistent timing between sending and receiving packets. We went through various socket IDs and ports to see if there were any patterns to the delays. Through thorough documentation of the steps taken, we were eventually able to recognize a pattern in when the packets were being sent and modify our code accordingly.
## What’s Next for Canary 🛰️
As we get access to better sensors and gain more experience working with hardware components (especially PCB design), the reliability of our systems will improve. We ran into a fair amount of obstacles with the Monogoto board in particular, but as it was announced as a development kit only a week ago, we have full faith that it will only get better in the future. Our vision is to see Canary used by park services and fire departments in the most remote areas of our beautiful forest landscapes in which our satellite-powered sensor network can overcome the limitations of cellular communication and existing fire detection solutions.
|
## Inspiration
Our inspiration for Smart Sprout came from our passion for both technology and gardening. We wanted to create a solution that not only makes plant care more convenient but also promotes sustainability by efficiently using water resources.
## What it does
Smart Sprout is an innovative self-watering plant system. It constantly monitors the moisture level in the soil and uses this data to intelligently dispense water to your plants. It ensures that your plants receive the right amount of water, preventing overwatering or underwatering. Additionally, it provides real-time moisture data, enabling you to track the health of your plants remotely.
## How we built it
We built Smart Sprout using a combination of hardware and software. The hardware includes sensors to measure soil moisture, an Arduino microcontroller to process data, and a motorized water dispenser to regulate watering. The software utilizes custom code to interface with the hardware, analyze moisture data, and provide a user-friendly interface for monitoring and control.
## Challenges we ran into
During the development of Smart Sprout, we encountered several challenges. One significant challenge was optimizing the water dispensing mechanism to ensure precise and efficient watering. The parts required by our team, such as a water pump, were not available. We also had to fine-tune the sensor calibration to provide accurate moisture readings, which took much more time than expected. Additionally, integrating the hardware with a user-friendly software interface posed its own set of challenges.
## Accomplishments that we're proud of
The rotating bottle, and mounting it. It has to be rotated so that the holes are on the top or bottom, as necessary, but the only motor we could find was barely powerful enough to turn it. We reduced friction on the other end by using a polygonal 3D-printed block and mounted the motor opposite to it. Overall, finding an alternative to a water pump was something we are proud of.
## What we learned
As is often the case, the moving parts were the most complicated, but we are also using the Arduino for two things at the same time: driving the motor and writing to the display. Multitasking is a major component of modern operating systems, and it was interesting to work on it in this case.
## What's next for Smart Sprout
The watering system could be improved. There are valves designed to be operated electronically, or a manually designed valve could be driven by a servo, which would allow us to link the system to a municipal water supply.
|
winning
|
## Inspiration:
Many people may find it difficult to understand the stock market, a complex system where ordinary people can take part in a company's success. This applies to those newly entering the adult world, as well as many others that haven't had the opportunity to learn. We want not only to introduce people to the importance of the stock market, but also to teach them the importance of saving money. When we heard that 44% of Americans have fewer than $400 in emergency savings, we felt compelled to take this mission to heart, with the increasing volatility of the world and of our environment today.
## What it does
With Prophet Profit, ordinary people can invest easily in the stock market. There's only one step - to input the amount of money you wish to invest. Using data and rankings provided by Goldman Sachs, we automatically invest the user's money for them. Users can track their investments in relation to market indicators such as the S&P 500, as well as see their progress toward different goals with physical value, such as being able to purchase an electric generator for times of emergency need.
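A simplified sketch of that allocation step follows; the ranking values and tickers are placeholders for illustration, not real Marquee data:

```python
# Sketch: split a user's single dollar input across stocks in proportion
# to their factor-profile rankings.
def allocate(amount: float, rankings: dict) -> dict:
    total = sum(rankings.values())
    return {ticker: amount * score / total for ticker, score in rankings.items()}

print(allocate(100.0, {"AAPL": 0.8, "MSFT": 0.6, "KO": 0.4}))
# -> roughly {'AAPL': 44.44, 'MSFT': 33.33, 'KO': 22.22}
```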
## How we built it
Our front end is entirely built on HTML and CSS. This is a neat one-page scroller that allows the user to navigate by simply scrolling or using the navigation bar at the top. Our back end is written in JavaScript, integrating many APIs and services.
APIs that we used:
* Goldman Sachs Marquee
* IEX Cloud
Additional Resources:
* Yahoo Finance
## Challenges we ran into
The biggest challenge was the limited scope of the Goldman Sachs Marquee GIR Factor Profile Percentiles Mini API that we wanted to use. Although the data provided was high quality and useful, we had difficulty trying to put together a portfolio with the small amount of data provided. For many of us, it was also our first time using many of the tools and technologies that we employed in our project.
## Accomplishments that we're proud of
We're really, really proud that we were able to finish on time to the best of our abilities!
## What we learned
Through exploring financial APIs deeply, we not only learned about using the APIs, but also more about the financial world as a whole. We're glad to have had this opportunity to learn skills and gain knowledge outside the fields we typically work in.
## What's next for Prophet Profit
We'd love to use data for the entire stock market with present-day numbers instead of the historical data that we were limited to. This would improve our analyses and allow us to make suggestions to users in real time. If this product were realized, we'd also need the ability to handle and trade with large amounts of money.
|
## Inspiration
We wanted to truly create a well-rounded platform for learning investing where transparency and collaboration are of utmost importance. With the growing influence of social media on the stock market, we wanted to create a tool that auto-generates a list of recommended stocks based on their popularity. This feature is called Stock-R (coz it 'Stalks' the social media....get it?)
## What it does
This is an all in one platform where a user can find all the necessary stock market related resources (websites, videos, articles, podcasts, simulators etc) under a single roof. New investors can also learn from other more experienced investors in the platform through the use of the chatrooms or public stories. The Stock-R feature uses Natural Language processing and sentiment analysis to generate a list of popular and most talked about stocks on twitter and reddit.
## How we built it
We built this project using the MERN stack. The frontend is created using React. Node.js and Express were used for the server, and the database was hosted in the cloud using MongoDB Atlas. We used various Google Cloud APIs such as Google authentication, Cloud Natural Language for sentiment analysis, and App Engine for deployment.
For the stock sentiment analysis, we used the Reddit and Twitter APIs to parse their respective social media platforms for instances where a stock/company was mentioned; each instance was given a sentiment value via the IBM Watson Tone Analyzer.
For Reddit, popular subreddits such as r/wallstreetbets and r/pennystocks were parsed for the top 100 submissions. Each submission's title was compared to a list of 3600 stock tickers for a mention, and if found, then the submission's comment section was passed through the Tone Analyzer. Each comment was assigned a sentiment rating, the goal being to garner an average sentiment for the parent stock on a given day.
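A rough sketch of that Reddit pass is shown below, assuming the PRAW library; the credentials, ticker list, and sentiment call are placeholders rather than our production code:

```python
# Sketch: pull top posts from a subreddit, check titles against a ticker
# list, and hand matching comment threads to a sentiment scorer.
import praw

TICKERS = {"GME", "AMC", "TSLA"}  # in practice, a list of ~3600 symbols

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="stock-r")

def score_sentiment(text: str) -> float:
    # Placeholder for the IBM Watson Tone Analyzer call.
    return 0.0

def scan_subreddit(name: str = "wallstreetbets", limit: int = 100) -> dict:
    mentions = {}
    for post in reddit.subreddit(name).top(limit=limit):
        hits = {word.upper().strip("$") for word in post.title.split()} & TICKERS
        if not hits:
            continue
        post.comments.replace_more(limit=0)   # drop "load more" stubs
        scores = [score_sentiment(c.body) for c in post.comments.list()[:50]]
        for ticker in hits:
            mentions.setdefault(ticker, []).extend(scores)
    # average sentiment per ticker for the day
    return {t: sum(s) / len(s) for t, s in mentions.items() if s}
```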
## Challenges we ran into
In terms of the chat application interface, the integration between this application and the main dashboard hub was a major issue, as it was necessary to carry forward the user's credentials without having them re-log into their account. This issue was resolved by producing a new chat application that didn't require credentials, just a username for the chatroom. We deployed this chat application independently of the main platform with a microservices architecture.
On the back-end sentiment analysis, we ran into the issue of efficiently storing the comments parsed for each stock as the program iterated over hundreds of posts, commonly collecting further data on an already-parsed stock. This issue was resolved by locally generating an average sentiment for each post and assigning that to a dictionary key-value pair. If a sentiment score was generated for multiple posts, the averages were added to the existing value.
## Accomplishments that we're proud of
## What we learned
A few of the components that we were able to learn and touch on were:
* REST APIs
* Reddit API
* React
* NodeJs
* Google-Cloud
* IBM Watson Tone Analyzer
* Web Sockets using Socket.io
* Google App Engine
## What's next for Stockhub
## Registered Domains:
* stockhub.online
* stockitup.online
* REST-api-inpeace.tech
* letslearntogether.online
## Beginner Hackers
This was the first Hackathon for 3/4 Hackers in our team
## Demo
The app is fully functional and deployed using the custom domain. Please feel free to try it out and let us know if you have any questions.
<http://www.stockhub.online/>
|
## Inspiration
We have a problem! We have a new generation of broke philanthropists.
The majority of students do not have a lot of spare cash so it can be challenging for them to choose between investing in their own future or the causes that they believe in to build a better future for others.
On the other hand, large companies have the capital needed to make sizeable donations but many of these acts go unnoticed or quickly forgotten.
## What it does
What if I told you that there is a way to support your favourite charities while also saving money? Students no longer need to choose between investing and donating!
Giving tree changes how we think about investing. Giving tree focuses on a charity driven investment model providing the ability to indulge in philanthropy while still supporting your future financially.
We created a platform that connects students to companies that make donations to the charities that they are interested in. Students will be able to support charities they believe in by investing in companies that are driven to make donations to such causes.
Our mission is to encourage students to invest in companies that financially support the same causes they believe in. Students will be able to not only learn more about financial planning but also help support various charities and services.
## How we built it
### Backend
The backend of this application was built using python. In the backend, we were able to overcome one of our largest obstacles, that this concept has never been done before! We really struggled finding a database or API that would provide us with information on what companies were donating to which charities.
So, how did we overcome this? We wanted to avoid having to manually input the data we needed as this was not a sustainable solution. Additionally, we needed a way to get data dynamically. As time passes, companies will continue to donate and we needed recent and topical data.
Giving Tree overcomes these obstacles using a 4 step process:
1. Using a google search API, search for articles about companies donating to a specified category or charity.
2. Identify all the nouns in the header of the search result.
3. Using the nouns, look for companies with data in Yahoo Finance that have a strong likeness to the noun.
4. Get the financial data of the company mentioned in the article and return the financial data to the user.
This was one of our greatest accomplishments of this project. We were able to overcome an obstacle that almost made us want to do a different project. Although the algorithm can occasionally produce false positives, it works more often than not and allows us to have a self-sustaining platform to build off of.
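As a rough illustration of the four-step idea, here is a minimal sketch; `get_search_headlines` and `extract_nouns` are hypothetical helpers supplied by the caller, the similarity cutoff is illustrative, and `yfinance` is assumed as one way to pull Yahoo Finance data:

```python
from difflib import SequenceMatcher
import yfinance as yf  # assumed Yahoo Finance client

def companies_donating_to(cause, tickers, get_search_headlines, extract_nouns):
    """tickers maps company names to stock symbols; returns matched financial data."""
    results = []
    for headline in get_search_headlines(f"company donates to {cause}"):   # step 1
        for noun in extract_nouns(headline):                               # step 2
            for name, symbol in tickers.items():                           # step 3
                if SequenceMatcher(None, noun.lower(), name.lower()).ratio() > 0.8:
                    results.append(yf.Ticker(symbol).info)                 # step 4
    return results
```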
### Flask
```shell script
$ touch application.py
```

```python
from flask import Flask

application = Flask(__name__)


@application.route('/')
def hello_world():
    return 'Hello World'
```
```shell script
$ export FLASK_APP="application.py"
$ flask run
```
Now runs locally:
<http://127.0.0.1:5000/>
### AWS Elastic Beanstalk
Create a Web Server Environment:
```shell script
AWS -> Services -> Elastic beanstalk
Create New Application called hack-western-8 using Python
Create New Environment called hack-western-8-env using Web Server Environment
```
### AWS CodePipeline
Link to Github for Continuous Deployment:
```shell script
Services -> Developer Tools -> CodePipeline
Create Pipeline called hack-western-8
GitHub Version 2 -> Connect to Github
Connection Name -> Install a New App -> Choose Repo Name -> Skip Build Stage -> Deploy to AWS Elastic Beanstalk
```
This link is no longer local:
<http://hack-western-8-env.eba-a5injkhs.us-east-1.elasticbeanstalk.com/>
### AWS Route 53
Register a Domain:
```shell script
Route 53 -> Registered Domains -> Register Domain -> hack-western-8.com -> Check
Route 53 -> Hosted zones -> Create Record -> Route Traffic to IPv4 Address -> Alias -> Elastic Beanstalk -> hack-western-8-env -> Create Records
Create another record but with alias www.
```
Now we can load the website using:<br/>
[hack-western-8.com](http://hack-western-8.com)<br/>
www.hack-western-8.com<br/>
http://hack-western-8.com<br/>
http://www.hack-western-8.com<br/>
Note that it says "Not Secure" beside the link<br/>
### AWS Certificate Manager
Add SSL to use HTTPS:
```shell script
AWS Certificate Manager -> Request a Public Certificate -> Domain Name "hack-western-8.com" and "*.hack-western-8.com" -> DNS validation -> Request
$ dig +short CNAME -> No Output? -> Certificate -> Domains -> Create Records in Route 53
Elastic Beanstalk -> Environments -> Configuration -> Capacity -> Enable Load Balancing
Load balancer -> Add listener -> Port 443 -> Protocol HTTPS -> SSL certificate -> Save -> Apply
```
Now we can load the website using:
<https://hack-western-8.com>
<https://www.hack-western-8.com>
Note that there is a lock icon beside the link to indicate that we are using an SSL certificate, so the connection is secure.
## Challenges we ran into
The most challenging part of the project was connecting the charities to the companies. We allowed the user to either type the charity name or choose a category that they would like to support. Once we knew what charity they are interested in, we could use this query to scrape information concerning donations from various companies and then display the stock information related to those companies. We were able to successfully complete this query and we can display the donations made by various companies in the command line, however further work would need to be done in order to display all of this information on the website. Despite these challenges, the current website is a great prototype and proof of concept!
## Accomplishments that we're proud of
We were able to successfully use the charity name or category to scrape information concerning donations from various companies. We not only tested our code locally, but also deployed this website on AWS using Elastic Beanstalk. We created a unique domain for the website and we made it secure through a SSL certificate.
## What we learned
We learned how to connect Flask to AWS, how to design an eye-catching website, how to create a logo using Photoshop and how to scrape information using APIs.
We also learned about thinking outside the box. To find the data we needed we approached the problem from several different angles. We looked for ways to see what companies were giving to charities, where charities were receiving their money, how to minimize false positives in our search algorithm, and how to overcome seemingly impossible obstacles.
## What's next for Giving Tree
Currently, students have 6 categories they can choose from, in the future we would be able to divide them into more specific sub-categories in order to get a better query and find charities that more closely align with their interests.
Health
- Medical Research
- Mental Health
- Physical Health
- Infectious Diseases
Environment
- Ocean Conservation
- Disaster Relief
- Natural Resources
- Rainforest Sustainability
- Global Warming
Human Rights
- Women's Rights
- Children
Community Development
- Housing
- Poverty
- Water
- Sanitation
- Hunger
Education
- Literacy
- After School Programs
- Scholarships
Animals
- Animal Cruelty
- Animal Health
- Wildlife Habitats
We would also want to connect the front and back end.
|
partial
|
## Inspiration
The number of new pharmaceutical drugs approved by the FDA has been declining steadily whilst the cost and timeframe required to deliver new drugs to market have exponentially increased. In response to the increasingly difficult task of discovering new drugs for life-threatening diseases, we propose an online platform—novogen.ai—that allows individuals to query and devise combinations of unique molecules to serve as a basis for generating novel molecules with desired chemical descriptors.
## What it does
Novogen.ai is a web platform that empowers scientists with tools to generate novel compounds with desired chemical descriptors.
## How we built it
With great difficulty. Our team split into our divisions, frontend, backend and A.I and individually built the components, then later worked closely together to join all our relevant components.
## Challenges we ran into
Chronic hallucinations induced by the absence of sleep. Installing our machine learning dependencies on a Google cloud VM (literally spent four hours typing pip install over and over again hoping for different results)! and lastly the challenging task of bringing together our individual components and making them work together.
## Accomplishments that we're proud of
One of our tasks involved developing our own search engine for our platform. We had to come up with creative ways to tackle this problem, and we're proud of the outcome.
## What we learned
We learned a lot working together as a team, much more about installing dependencies on a Google cloud VM and just how tricky it can be to tie an ML algorithm to a front and back end and host it online within 36 hours.
## What's next for novogen.ai
Novogen.ai will focus on refining its ML and building out the tools of its platform.
|
## Inspiration
According to the WHO, drug donation today encounters serious waste in terms of both labor and medical products due to disorganized, massive amounts of charity. 80% of donated drugs arrive unsolicited, unexpected, and most notably, unsorted. 62% come with labels in foreign languages that locals cannot decipher. Meanwhile, in other areas, drug cleanup fails to be completed appropriately and causes serious threats to the environment. Dr.Pill was then devised to simplify the medicinal identification process so people, ranging from those simply curious about the pills lying around in the cabinet to those in dire need of quick medical assortment, could utilize current medical resources more efficiently and sustainably.
## What it does
Dr.Pill uses image recognition to instantly identify medicinal pills, from over-the-counter to prescription counterparts. Functional and consumption information was carefully selected from the DrugBank database after industry research, so contrary to traditional pill identifiers, Dr.Pill can provide much clearer and easier-to-follow insights. It also offers translation for international adoption, especially concerning the barrier that language can create amidst drug donation.
## How I built it
We created a server using Node.js and separately wrote Python scripts to use the machine learning APIs (Google OCR, IBM Watson, Translator); we then executed the scripts within the Node.js server and rendered the results on frontend templates.
## Challenges I ran into
Fetching data from DrugBank and drugs.com was a challenging experience. Certain important drug-related information, such as storage methods, was inaccessible as well.
## Accomplishments that I'm proud of
We are proud to have given an attempt towards the life-sciences field for the first time. It was quite different and the topic was very rewarding.
## What I learned
We fortified our experience on using computer vision APIs.
## What's next for Dr.Pill
We hope to add advanced details such as storage details, actual dosages, and voice-interaction functionalities.
Improvement in UX would be another viable integration with better readability of details.
|
## Inspiration
We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms for use in low resource robotic or embedded devices by leveraging cloud computing.
## What it does
CloudChaser (or "Chase" for short) allows the user to input custom objects for chase to track. To do this Chase will first rotate counter-clockwise until the object comes into the field of view of the front-facing camera, then it will continue in the direction of the object, continually updating its orientation.
## How we built it
"Chase" was built with four continuous rotation servo motors mounted onto our custom modeled 3D-printed chassis. Chase's front facing camera was built using a raspberry pi 3 camera mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by the raspberry pi 3B which streams video to and receives driving instructions from our cloud GPU server through TCP sockets. We interpret this cloud data using YOLO (our object recognition library) which is connected through another TCP socket to our cloud-based parser script, which interprets the data and tells the robot which direction to move.
## Challenges we ran into
The first challenge we ran into was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to make sure we had the perfect dimensions on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print.
The next challenge was setting up the TCP socket connections and developing our software such that it could accept data from multiple different sources in real time. We ended up solving the connection timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to not have to queue up the data on our server.
The biggest challenge by far, however, was dealing with the camera latency. We wanted the camera feed to be as close to live as possible, so we moved all possible processing to the cloud and none onto the Pi, but since the Raspbian operating system would frequently context switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi.
## Accomplishments that we're proud of
We're proud of the fact that we were able to model and design a robot that is relatively sturdy in such a short time. We're also really proud of the fact that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had done an Alexa skill before. However, by far, the accomplishment that we are the most proud of is the fact that our video stream latency from the raspberry pi to the cloud is low enough that we can reliably navigate the robot with that data.
## What we learned
Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming connections in real time with minimal latency.
## What's next for CloudChaser
In the future we would ideally like for Chase to be able to compress a higher quality video stream and have separate PWM drivers for the servo motors to enable higher precision turning. We also want to try to make Chase aware of his position in a 3D environment and track his distance away from objects, allowing him to "tail" objects instead of just chase them.
## CloudChaser in the news!
<https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad>
<https://penntechreview.com/read/cloudchaser>
|
partial
|
## Inspiration
After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real-world.
## What it does
Our game is supported on every device that has a modern web browser, absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together.
Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to bid on enhancements to their location in the game-world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location.
By offloading the cost of the game onto businesses instead of players we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing.
## How We built it
The stack for our game is built entirely around the Node.js ecosystem: express, socket.io, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down.
To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps to keep websites safe using traffic filtering and other techniques.
Finally, our public facing website makes use of Mithril MVC, an incredibly fast and light one-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
|
## Inspiration
Philadelphia, like many urban cities, is grappling with rising temperatures due to climate change, industrialization, and the urban heat island effect. We noticed that extreme heat is making it unsafe for many communities, especially during summer months. Chilladelphia was inspired by the need to provide residents with real-time resources and actionable insights to help them stay cool and safe.
## What it does
Help cool down Philly! The main page features a heat map that visually highlights the hottest and coolest areas around Philadelphia. By entering your address, you can instantly see how “chill” your neighborhood is. Using our computer vision algorithm, we analyze the ratio of greenery in your area, giving you a personalized chill rating. This rating helps you understand the immediate state of your environment. Chilladelphia goes beyond just information—it provides actionable suggestions like planting trees, painting rooftops lighter, and other eco-friendly tips to actively cool down your community. Plus, you can easily find nearby cooling centers, water stations, and shaded areas to help you beat the heat on the go
## How we built it
We built Chilladelphia with a strong focus on user experience and seamless access to location based data. For user authentication, we integrated **Propel Auth**, which provided a quick and scalable solution for user sign-ups and logins. This allowed us to securely manage user sessions, ensuring that personal data, like location preferences, is handled safely.
On the frontend, we used **React** to create a dynamic and responsive user interface. This enabled smooth interactions, from entering an address to viewing real-time temperature and air quality updates. To style the app, we utilized **Tailwind CSS**, which allowed us to rapidly prototype and design components with minimal code. **Axios** was implemented for handling API requests, efficiently fetching environmental data and user-specific suggestions. The frontend also leverages **React Router** to manage navigation, making it easy for users to explore different parts of the app.
For the backend, we set up a **Node.js** server with **Express** to handle API requests and data routing. The core of our data storage is **MongoDB**, where we store geospatial information like cooling center locations and tree-planting sites. MongoDB’s flexibility allowed us to efficiently store and query data based on the user’s location. We also integrated external APIs to get coordinates and map data. To manage authentication securely across both the backend and frontend, we utilized **Propel Auth** to handle user session tokens and login states.
For the data generation, we used Python to compile images of University City by downloading sections of it from satellite imagery. We then used DetecTrees, a Python library that uses a pre-trained model to identify tree pixels in aerial images. We were then able to calculate what percentage of the image was green space to give users an idea of how green the area around them is.
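A minimal sketch of that last step, assuming the tree-detection library returns a binary tree/non-tree pixel mask for each satellite tile; how the mask is produced is left to the classifier and is an assumption here:

```python
import numpy as np

def green_space_ratio(mask: np.ndarray) -> float:
    """Percentage of pixels classified as tree canopy in one tile.

    `mask` is a binary array where 1 marks a tree pixel and 0 anything else,
    e.g. the output of a pre-trained tree-detection classifier run on the tile.
    """
    return 100.0 * np.count_nonzero(mask) / mask.size
```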
## Challenges we ran into
One of the biggest challenges was getting high-resolution satellite imagery that would work well for our purposes. After testing out over 5 different APIs, we ended up having to wrap a Google Maps scraper, which worked best for our needs.
## Accomplishments that we're proud of
We’re proud of creating a solution that can have real impact in our neighboring Philly communities. The recent heat waves in the northeast have been dangerous and put our peers and community at risk, and we are excited to take steps in the right direction to mitigate the issue.
## What we learned
We've expanded our tech stack -- several of us used MongoDB, Express.js, PropelAuth, and many other tools for the first time this weekend.
## What's next for Chilladelphia
Next, we plan to scale Chilladelphia by integrating more data - we had limited storage in our database and weren't able to cover as much of Philly as we wanted to, but we hope to do more in the future! We also want to partner with local governments and environmental organizations to further expand the app's resource database and promote city-wide efforts in cooling down Philadelphia.
|
## Inspiration
Reflecting on 2020, we were challenged with a lot of new experiences, such as online school. Hearing a lot of stories from our friends, as well as our own experiences, doing everything from home can be very distracting. Looking at a computer screen for such a long period of time can be difficult for many as well, and ultimately it's hard to maintain a consistent level of motivation. We wanted to create an application that helped to increase productivity through incentives.
## What it does
Our project is a functional to-do list application that also serves as a 5v5 multiplayer game. Players create a todo list of their own, and each completed task grants "todo points" that they can allocate towards their attributes (physical attack, physical defense, special attack, special defense, speed). However, tasks that are not completed serve as a punishment by reducing todo points.
Once everyone is ready, the team of 5 will be matched up against another team of 5 with a preview of everyone's stats. Clicking "Start Game" will run the stats through our algorithm that will determine a winner based on whichever team does more damage as a whole. While the game is extremely simple, it is effective in that players aren't distracted by the game itself because they would only need to spend a few minutes on the application. Furthermore, a team-based situation also provides incentive as you don't want to be the "slacker".
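As a rough illustration only, here is a minimal sketch of how such a damage comparison could be written; the formula and attribute names are illustrative assumptions, not the algorithm actually used in the game:

```python
def team_damage(attackers, defenders):
    """Sum the damage one team deals, pairing attack stats against defense stats."""
    total = 0
    for a, d in zip(attackers, defenders):
        physical = max(a["phys_atk"] - d["phys_def"], 0)
        special = max(a["spec_atk"] - d["spec_def"], 0)
        total += (physical + special) * (1 + a["speed"] / 100)  # speed as a multiplier
    return total

def winner(team_a, team_b):
    """The team that deals more total damage wins."""
    return "Team A" if team_damage(team_a, team_b) > team_damage(team_b, team_a) else "Team B"
```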
## How we built it
We used the Django framework, as it is our second time using it and we wanted to gain some additional practice. Therefore, the languages we used were Python for the backend, HTML and CSS for the frontend, as well as some SCSS.
## Challenges we ran into
As we all worked on different parts of the app, it was a challenge linking everything together. We also wanted to add many things to the game, such as additional in-game rewards, but unfortunately didn't have enough time to implement those.
## Accomplishments that we're proud of
As it is only our second hackathon, we're proud that we could create something fully functioning that connects many different parts together. We spent a good amount of time on the UI as well, so we're pretty proud of that. Finally, creating a game is something that was all outside of our comfort zone, so while our game is extremely simple, we're glad to see that it works.
## What we learned
We learned that game design is hard. It's hard to create an algorithm that is truly balanced (there's probably a way to figure out in our game which stat is by far the best to invest in), and we had doubts about how our application would do if we actually released it, if people would be inclined to play it or not.
## What's next for Battle To-Do
Firstly, we would look to create the registration functionality, so that player data can be generated. After that, we would look at improving the overall styling of the application. Finally, we would revisit game design - looking at how to improve the algorithm to make it more balanced, adding in-game rewards for more incentive for players to play, and looking at ways to add complexity. For example, we would look at implementing a feature where tasks that are not completed within a certain time frame leads to a reduction of todo points.
|
winning
|
## Inspiration
Our spark to tackle this project was ignited by a teammate's immersive internship at a prestigious cardiovascular research society, where they served as a dedicated data engineer. Their firsthand encounters with the intricacies of healthcare data management and the pressing need for innovative solutions led us to the product we present to you here.
Additionally, our team members drew motivation from a collective passion for pushing the boundaries of generative AI and natural language processing. As technology enthusiasts, we were collectively driven to harness the power of AI to revolutionize the healthcare sector, ensuring that our work would have a lasting impact on improving patient care and research.
With these varied sources of inspiration fueling our project, we embarked on a mission to develop a cutting-edge application that seamlessly integrates AI and healthcare data, ultimately paving the way for advancements in data analysis and processing with generative AI in the healthcare sector.
## What it does
Fluxus is an end-to-end workspace for data processing and analytics for healthcare workers. We leverage LLMs to translate text to SQL. The model is preprocessed to specifically handle InterSystems IRIS SQL syntax. We chose InterSystems as our database for storing electronic health records (EHRs) because this enabled us to leverage their IntegratedML queries. Not only can healthcare workers generate fully functional SQL queries for their datasets with simple text prompts, they can now also perform instantaneous predictive analysis on datasets with no effort. The power of AI is incredible, isn't it?
For example, a user can simply type in "Calculate the average BMI for children and youth from the Body Measures table." and our app will output
"SELECT AVG(BMXBMI) FROM P\_BMX WHERE BMDSTATS = '1';"
and you can simply run it on the built in intersystems database. With Intersystems IntegratedML, with the simple input of "create a model named DemographicsPrediction to predict the language of ACASI Interview based on age and marital status from the Demographics table.", our app will output
"CREATE MODEL DemographicsPrediction PREDICTING (AIALANGA) FROM P\_DEMO TRAIN MODEL DemographicsPrediction VALIDATE MODEL DemographicsPrediction FROM P\_DEMO SELECT \* FROM INFORMATION\_SCHEMA.ML\_VALIDATION\_METRICS;"
to instantly create train and validate an ML model that you can perform predictive analysis on with integratedML's "PREDICT" command. It's THAT simple!
Researchers and medical professionals working with big data now don't need to worry about the intricacies of SQL syntax, the obscurity of healthcare record formatting - column names and table names that do not give much information, and the need to manually dive into large datasets to find what they're looking for. With simple text prompts data processing becomes a no effort task, and predictive modelling with ML models becomes equally as effortless. See how tables come together without having to browse through large datasets with our DAG visualizations of connected tables/schemas.
## How we built it
Our project incorporated a multitude of components that went into the development. It was both overwhelming, but also satisfying seeing so many parts come together.
Frontend: The frontend was developed in Vue.js and utilized many modern component libraries to provide a friendly UI. We also incorporated a visualization tool using third-party graph libraries to draw directed acyclic graph (DAG) workflows between tables, showing the connection from one table to another that has been developed after querying the original table. To show this workflow in real time, we implemented a SQL parser API (node-sql-parser) to get a list of source tables used in the LLM-generated query and used the DAGs to visually represent the list of source tables in connection to the newly modified/created table.
Backend: We used Flask for the backend of our web service, handling multiple API endpoints from our data sources and LLM/prompt engineering functionality.
Intersystems: We connected an IRIS InterSystems database to our application and loaded it with healthcare data, leveraging InterSystems' Python connector libraries.
LLMs: We originally started looking into OpenAI's Codex models and their integration, but ultimately worked with GPT-3.5 Turbo, which made it easy to fine-tune on our data (to a certain degree) so our LLM could interpret prompts and generate syntactically accurate queries with a high degree of accuracy. We wrapped the LLM and the prompt-engineering preprocessing as an API endpoint to integrate with our backend.
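A minimal sketch of the text-to-SQL call, assuming the openai Python client's chat completion interface; the schema excerpt, model name, and system prompt are illustrative placeholders rather than the exact ones used in Fluxus:

```python
import openai

IRIS_SCHEMA = "P_BMX(BMXBMI, BMDSTATS), P_DEMO(AIALANGA, RIDAGEYR, DMDMARTL)"  # assumed excerpt

def text_to_iris_sql(prompt: str) -> str:
    """Translate a plain-English request into InterSystems IRIS / IntegratedML SQL."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Translate the request into InterSystems IRIS SQL, "
                        "including IntegratedML syntax when a model is requested. "
                        "Tables: " + IRIS_SCHEMA},
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"].strip()
```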
## Challenges we ran into
* LLMs are not as magical as they look. There was nothing for us to train on for the kind of datasets that are used in healthcare. We had to manually push entire database schemas for our LLM to recognize and attempt to fine-tune on in order to get queries that were accurate. This was intensive manual labour, with a lot of frustrating failures trying to fine-tune both current and legacy LLM models provided by OpenAI. Ultimately we reached a promising result that delivered a solid degree of accuracy with some fine-tuning.
* Integrating everything together - putting together countless API endpoints (honestly felt like writing production code at a certain point), hosting to our frontend, wrapping the LLM as an API endpoint. Ultimately there's definitely pain points that still need to be addressed, and we plan to make this a long term project that will help us identify bottlenecks that we didn't have time to address within these 24 hours, while simultaneously expanding on our application.
## Accomplishments that we're proud of
We were all aware of how much we aimed to get done in a mere span of 24 hours. It seemed near impossible. But we were all on a mission and had the drive to bring a whole new experience to data analytics and processing in the healthcare industry by leveraging the incredible power of generative AI. We felt real satisfaction seeing our LLM work after fine-tuning on manually configured data hundreds of lines long, having it accurately give us queries for IRIS (including IntegratedML queries), watching the frontend come to life, getting the countless API endpoints to work, and integrating all our services into an application with high levels of functionality. Our team came together from different parts of the globe for this hackathon, but we were warriors who instantly clicked as a team and made the most of these past 24 hours by powering through day and night to deliver this product.
## What we learned
Just how insane AI honestly is.
A lot about SQL syntax, working with Intersystems, the highs and lows of generative AI, about all there is to know about current natural language to SQL processes leveraging generative AI thanks to like 5+ research papers.
## What's next for Fluxus
* Develop an admin platform so users can put in their own datasets
* Fine-tune the LLM for larger schemas and more prompts
* buying a hard drive
|
## Inspiration
When we joined the hackathon, we began brainstorming about problems in our lives. After discussing some constant struggles in their lives with many friends and family, one response was ultimately shared: health. Interestingly, one of the biggest health concerns that impacts everyone in their lives comes from their *skin*. Even though the skin is the biggest organ in the body and is the first thing everyone notices, it is the most neglected part of the body.
As a result, we decided to create a user-friendly multi-modal model that can discover their skin discomfort through a simple picture. Then, through accessible communication with a dermatologist-like chatbot, they can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance money or finding the time to go and wait for a doctor, it is an accessible way to understand the blemishes that appear on one's skin immediately.
## What it does
The app is a skin-detection model that detects skin diseases through pictures. Through a multi-modal neural network, we attempt to identify the disease through training on thousands of data entries from actual patients. Then, we provide them with information on their disease, recommendations on how to treat their disease (such as using specific SPF sunscreen or over-the-counter medications), and finally, we provide them with their nearest pharmacies and hospitals.
## How we built it
Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset of roughly 2,000 patients with multiple diseases. Through a combination of convolutional neural networks, ResNet, and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we make strides in making personalized medicine a reality.
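A minimal sketch of the multi-modal idea, assuming a ResNet image branch fused with a feed-forward branch over tabular clinical features; the specific backbone, layer sizes, and feature counts are illustrative, not the exact architecture we trained:

```python
import torch
import torch.nn as nn
from torchvision import models

class SkinConditionNet(nn.Module):
    def __init__(self, n_clinical_features: int, n_conditions: int):
        super().__init__()
        self.image_branch = models.resnet18()   # CNN/ResNet encoder for the photo
        self.image_branch.fc = nn.Identity()    # expose the 512-d image features
        self.clinical_branch = nn.Sequential(   # feed-forward encoder for clinical data
            nn.Linear(n_clinical_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(512 + 32, n_conditions)

    def forward(self, image, clinical):
        fused = torch.cat([self.image_branch(image), self.clinical_branch(clinical)], dim=1)
        return self.classifier(fused)  # logits over possible skin conditions
```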
## Challenges we ran into
The first challenge we faced was finding the appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We finally chose the ideal number of epochs by plotting the loss vs. epoch and accuracy vs. epoch graphs. Another challenge was utilizing the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text that tended to hallucinate based on specific responses. We fixed this by grounding its output in the information that the user gave.
## Accomplishments that we're proud of
We are all proud of the model we trained and put together, as this project had many moving parts. This experience has had its fair share of learning moments and pivoting directions. However, through a great deal of discussions and talking about exactly how we can adequately address our issue and support each other, we came up with a solution. Additionally, in the past 24 hours, we've learned a lot about learning quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've all seen each other struggle and grow; this experience has just been gratifying.
## What we learned
One of the aspects we learned from this experience was how to use prompt engineering effectively and ground an AI model in user information. We also learned how to incorporate multi-modal data to be fed into a generalized convolutional and feed-forward neural network. In general, we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience in building a comprehensive model like SkinSkan, we were able to solve a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to skincare recommendations, we were able to use our app on our own and several of our friends' skin using a simple smartphone camera to validate the performance of the model. It's so gratifying to see the work that we've built being put into use and benefiting people.
## What's next for SkinSkan
We are incredibly excited for the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could be a viable tool that hospitals around the world could use to direct them to the right treatment plan. Lastly, in the future, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
|
## Inspiration
Our inspiration came from the frustration of managing medical data from multiple doctors. One of our teammates has a Primary Care Physician at MIT and another back home. When she attempted to compare lab results and diagnoses from both doctors, she found herself logging into separate portals and sifting through reports to locate the necessary information. We aimed to simplify this process and provide individuals with a unified solution for better healthcare decisions.
## What it does
Our solution streamlines medical data management, unifies advice from different healthcare providers, and empowers individuals to make informed decisions about their health.
## How we built it
We built our solution using a combination of technologies, including ReactJS, ChartJS, ChakraUI for the frontend, and Python (and Python packages) for data parsing. We were also in the process of training an NLP model to analyze multiple doctor reports for comprehensive insights.
## Challenges we ran into
On the technical side, creating a user-friendly dashboard and handling diverse data formats required careful consideration. Additionally, while working on the NLP models, we found the task quite intricate and not easily achievable. On the personal side, we were combatting health issues that developed during the day, which affected the productivity of the team members.
## Accomplishments that we're proud of
We're proud of designing a user-friendly, comprehensive healthcare solution that simplifies medical data management. We also implemented a payment feature. Our team's dedication to addressing complex challenges and working on a valuable product is a significant accomplishment.
## What we learned
Throughout the development process, we gained insights into the importance of personalizing healthcare solutions. We understood better how to use React.js and others effectively. We also improved our technical skills in data parsing and NLP.
## What's next for Fusion
In the future, we plan to provide more extensive analytics, including vital signs and predictive health modeling. Our goal is to continue enhancing our solution to empower individuals with even more comprehensive healthcare insights.
|
winning
|
## Inspiration
The bitalino system is a great new advance in affordable, do-it-yourself biosignals technology. Using this technology, we want to make an application that provides an educational tool to exploring how the human body works.
## What it does
Currently, it uses the ServerBIT architecture to get ECG signals from a connected bitalino and draw them in an HTML file in real time using JavaScript. In this hack, the smoothie.js library was used instead of jQuery Flot to provide smoother plotting.
## How I built it
I built the Lubdub Club using Hugo Silva's ServerBIT architecture. From that, the ECG data was drawn using smoothie.js. A lot of work was put in to make a good and accurate ECG display, which is why smoothie was used instead of flot. Other work involved adjusting for the correct ECG units, and optimizing the scroll speed and scale of the plot.
## Challenges I ran into
The biggest challenge we ran into was getting the Python API to work. There are a lot more dependencies for it than are written in the documentation, but that may be because I was using a regular Python installation on Windows. I installed WinPython to make sure most of the math libraries (pylab, numpy) were installed, and installed everything else afterwards. In addition, there is a problem with the server where the TCP listener will not close properly, which caused a lot of trouble in testing.
Apart from that, getting a good ECG signal was very challenging, as testing was done using electrode leads on the hands, which admittedly give a signal that is quite susceptible to interference (both from surrounding electronics and movements). Although we never got an ECG signal close to the ones in the demos online, we did end up with a signal that was definitely an ECG and had recognizable PQRS phases.
## Accomplishments that I'm proud of
I am proud that we were able to get the Python API working with the bitalino, as it seems that many others at Hack Western 2 were unable to. In addition, I am happy with the way the smoothie.js plot came out, and I think it is a great improvement over the original flot plot.
Although we did not have time to set up a demo site, I am quite proud of the name our team came up with (lubdub.club).
## What I learned
I learned a lot of Javascript, jQuery, Python, and getting ECG signals from less than optimal electrode configurations.
## What's next for Lubdub Club
What's next is to implement some form of wave-signal analysis to clean up the ECG waveform, and to perform calculations to find values like heart rate. Also, I would like to make the Python API / ServerBIT easier to use (maybe rewrite from scratch or at least collect all dependencies in an installer). Other things include adding more features to the HTML site, like changing colour to match heartrate, music, and more educational content. I would like to set up lubdub.club, and maybe find a way to have the data from the bitalino sent to the cloud and then displayed on the webpage.
|
## Inspiration
Navigating an ever-evolving world of technology is challenging for many seniors. As many health applications and communications move online, seniors may feel that they are losing touch with their personal health. Blood Pressure Buddy helps seniors to be independent and proactive about their cardiovascular health.
## What it does
Blood Pressure Buddy is a blood pressure tracker targeted toward senior users. Through voice dictation, it records blood pressure readings from an external blood pressure monitor. The date and time of the readings are saved into a table. Readings are interpreted, and users are notified if their blood pressure is good or bad. Users can view weekly and monthly overall trends through graphs of their blood pressure on the Statistics page. The Learn More page features concise, useful advice on how to improve cardiovascular health.
Tailored to senior citizens, Blood Pressure Buddy has key features that ease the use of technology. Users can easily access the website through a chrome extension instead of having to look up the URL each time. Our website features large, sans serif fonts for optimum readability. As many seniors are not proficient at typing on a keyboard, and many suffer from arthritis which reduces their dexterity, the voice-to-text feature allows them to record their blood pressure hands-free! They can select their most familiar language for the voice-to-text.
## How we built it
**Front-end:** The website and chrome extension are built in HTML, CSS, and JavaScript. We created an HTML form complete with csrf tokens for the login page, which sent a POST request to the backend with user information. We implemented the voice dictation feature through a Web Speech voice-to-text API to interpret English (very well) and (quite poorly). The graphs are built in JavaScript and styled in CSS using the CanvasJS library.
**Back-end:** The back-end is hosted on a DigitalOcean droplet running Nginx and Gunicorn. Backend user authentication is achieved using Django, and an SQLite database is used to store users' data.
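A minimal sketch of how readings could be modelled on the Django side; the field names, thresholds, and model name are illustrative assumptions, not the app's actual schema:

```python
from django.db import models
from django.contrib.auth.models import User

class BloodPressureReading(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE)
    systolic = models.PositiveSmallIntegerField()    # mmHg
    diastolic = models.PositiveSmallIntegerField()   # mmHg
    recorded_at = models.DateTimeField(auto_now_add=True)

    def is_good(self) -> bool:
        """Rough interpretation used to tell the user if a reading looks good or bad."""
        return self.systolic < 130 and self.diastolic < 80
```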
## Challenges we ran into
We ran into difficulties with linking the Add Data and Statistics page to the navigation bar. We also struggled with configuring Gunicorn to communicate with the Nginx server as it was new for everyone. As some of us are new to GitHub, we spent some time troubleshooting errors with pushing and pulling changes.
## Accomplishments that we're proud of
We are proud of the website's senior-friendly design. Our favourite part is the Add Data page and the voice-to-text feature. We are also quite proud of creating an actual server and connecting the back and front end together, considering all of us are quite new to full stack development!
## What we learned
Our team learned a lot in the process of making Blood Pressure Buddy. Kirsten is a beginner to coding, so she learned a lot about making graphs in JavaScript. This was Andrea's first time working with HTML/CSS/JavaScript, so she learned a lot of syntax and how to style web pages. Kyle gained more confidence with configuring a backend server using Django. Lavan learned about Git commands, jQuery, and JavaScript functions.
## What's next for Blood Pressure Buddy
We want the graphs in Statistics to correspond to the user's inputted data by connecting the data input to the backend database and displaying said data in the graphs. We will also change the API used for voice-to-text because the voice detection for languages other than English did not work well with the API we used. Blood Pressure Buddy also needs a registration page.
|
*("Heart Tempo", not "Hear Tempo", fyi)*
## Inspiration
David had an internship at the National Institute of Health over the summer, where he researched the effect of auditory stimulus such as music on microcirculation (particularly the myogenic and endothelial bands), using Laser Doppler Flowmetry (LDF) to do so. This experiment all stems from the known fact that the human body often matches its heartrate with the tempo of a song that is playing.
Though that side of things was heavily researched, the opposite wasn't. And for that reason, David developed the idea of making the tempo of the song change relative to the heartrate of the person listening to the song, rather than vice-versa.
## What it does
This Android app will connect to your Android Wear device (with a heartbeat sensor) and send this heart rate to a server which modifies the tempo of any song to regulate your heart rate at normal levels.
By regulating your heart rate, it will reduce anxiety and stress, allowing you to relax and not worry about the pressures of life. The best part? You only need a smartwatch and smartphone, no fancy equipment.
(This is where the name comes from, if you haven't figured that out already)
## How we built it
We had to figure out how to get the heart rate from an Android Wear watch, which didn't take that long. The hard part was actually figuring out how to send the data from the watch to the phone then to the server. We ended up using the native `WearableListenerService` to send the data to the phone which then sent the data with `OkHttp` to our Node.js server. This server will connect with any MIDI-enabled application on your machine to change the BPM.
## Challenges we ran into
It took a damn while to figure out how to send data between the watch and the phone, why does Google make this so hard?!
## Accomplishments that we're proud of
When we first got the phone to **actually send data** to the server, we were very happy and wanted to do more with the project.
## What we learned
We learned more about Android development, an area we both wanted to get into. It was slightly difficult since we both come from web development backgrounds and the concepts are very different.
## What's next for HearTempo
Some sort of logging and data analysis, definitely. We want to prove that this works, so we will perhaps implement a log of your average heart beat over a course of a week or two after you start using it.
|
partial
|
## Inspiration
Have you ever wondered about or experienced a world without colour? What is life like being unable to see the surroundings, the people, and the entertainment around you? In this case, hearing becomes an essential sense, which provides people with vision disabilities a unique way to experience the world. In addition, with the growing smartphone penetration rate and the forced stay-at-home caused by the pandemic, more people have gradually become isolated from society. As a team, we genuinely believe that social interaction is a crucial factor in achieving better health. Sonux is here to help build both online and offline connectivity, especially for those experiencing blindness and/or visual impairment. Through playing this game, players will not only gain a satisfying acoustic sensory experience; the game will also encourage more people to stay away from the screen and appreciate the scenery of different communities, diverse cultures and mother nature, leading to better health outcomes, mentally and physically.
## What it does
Sonux is a mobile application designed for anyone looking to relax through screen breaks while gaining a whole experience of auditory sensation. The application is user-friendly, in which people with blindness and/or visual impairment will achieve the same experience as other players. Once players launch the game, the phone’s screen will be in a darkened mode, and players can barely see any displays. The only way to navigate and select game options is through swiping, knocking, or simply waving their phones. As players explore around, Sonux will sense their surrounding environment and search for sounds. At distinct locations, Sonux is able to capture different soundtracks; it can be a lyric of a song, a piece of classical music, the sound of nature or a message from someone unknown. Players are, therefore, able to store all the sounds that they encountered during their journey. As users explore more places, travel to further locations, they will discover diverse sounds. The game enables players to expand their maps to other parts of the world and unlock more soundtracks. Furthermore, players are encouraged to leave warm and positive messages that other users will collect. With multiple sounds collected, users could always return to their sound archive, re-listen to all the sounds, and make various combinations of harmonies.
## How we built it
Unity Engine
Sonux is developed with the Unity engine to enable the best 3-D acoustic experience. We started by creating a 3-D space with natural objects modelled inside. Each element has its unique sound and lighting effects, guiding the player's journey. Unlike traditional games where navigation is achieved through screen touches, Sonux utilizes hand gestures, device movement and haptic feedback. By creating a 3-D environment for players to explore, the Doppler effect is achieved virtually by varying sound channels and volume levels, maximizing Sonux's acoustic experience.
Natural Language Processing
We employed Unity's voice recognition API with a sentiment analysis model to analyze the appropriateness of players' message input. After translating the player's voice message into text, we process the text message using TextBlob, a pre-trained NLP model, to determine a sentiment score for the message. To factor in the potential inconsistency of the model, messages indicating harmful content will be filtered and further reviewed by our staff.
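A minimal sketch of that filtering step, assuming TextBlob's built-in sentiment model; the polarity cutoff is an illustrative assumption, not the value Sonux actually uses:

```python
from textblob import TextBlob

def needs_review(message: str, cutoff: float = -0.3) -> bool:
    """Flag transcribed messages whose sentiment suggests possibly harmful content."""
    polarity = TextBlob(message).sentiment.polarity  # ranges from -1.0 (negative) to 1.0 (positive)
    return polarity < cutoff
```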
## Challenges we ran into
One of the biggest challenges we ran into was in our application of Unity. Unity is the most suitable real-time game development platform in our case to build the audio game for mobile devices. However, our group has encountered several debugging problems as we lack previous experience and skill set with this tool. We have gone through multiple tutorials, navigated the platform thoroughly, and self-taught the fundamental skills needed to develop Sonux. Nevertheless, when coding for the game to be able to listen and react to voice instructions, we failed the first few trials. It resulted from Mac laptops not supporting certain sound features. Thus, we switched to a PC laptop and adjusted our code to successfully make the voice recognition function. Recognizing this bug was one of the big moments for the project.
## Accomplishments that we're proud of
The biggest accomplishment, we believe, is that we worked well together as a team. With some members from a tech background and another from a media information major, we had very thoughtful and insightful discussions in the project brainstorming stage, in which all members identified and agreed upon the need for mobile games to be more inclusive of this special player group and more considerate of players' health. By working cohesively, we turned our idea from a sketch into an actual presentable model that includes all the features we had drafted.
## What we learned
We learned how to use Unity, how the platform works, and the foundation to make mobile games. Members with a tech background have gained a better understanding of the coding process for game development and how to navigate game engines. For the member who does not have previous tech/science knowledge, it was a fascinating experience to see how the games played throughout daily lives come into being. In addition, all members have gained well-rounded perspectives by seeing how different people think and approach problem-solving differently.
## What's next for SONUX
Sonux comes alive after evaluating both the practical components and considering the ethical and social responsibility that a mobile game company should take on to achieve long-term sustainability. Sonux hopes to raise more attention and care to the visual disability population while also creating a more healthy lifestyle for the ever-increasing mobile game players. With these goals in mind, Sonux will continue to build a more connected, long-lasting and inclusive player community.
|
## Inspiration
Over **15% of American adults**, over **37 million** people, are either **deaf** or have trouble hearing according to the National Institutes of Health. One in eight people have hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 - 500,000 people in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themselves to the world, and so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world.
## What it does
Our app connects a deaf person who speaks American Sign Language into their device's camera to a non-deaf person who then listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation.
We believe real-time communication is the key to having a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing out on any parts of the conversation in the chat box.
## How we built it
For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives:
1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model.
2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users.
* Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience.
* Backend Powered by Flask: For the robust foundation of our API and backend framework, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data.
* Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the webkitSpeechRecognition library. This technology forms the backbone of our speech recognition system, facilitating communication with our app.
* NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible by sign language users.
* Translating Hand Motions to Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into a visual form. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences.
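For illustration, a minimal transfer-learning sketch of this kind of classifier, assuming a Keras workflow and a hypothetical directory of labeled ASL frames (not the exact training script used for Sign Sync), might look like this:

```python
# Minimal sketch of a MobileNetV2-based ASL character classifier.
# Paths, hyperparameters, and data layout are assumptions for illustration.
import tensorflow as tf

NUM_CLASSES = 29  # 26 letters + 3 punctuation signs

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumes frames are organized as asl_frames/<label>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_frames", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```

At inference time, each video frame would be resized to 224x224 and passed through `model.predict`, and the highest-probability class appended to the running sentence.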
## Challenges we ran into
Since we used multiple AI models, it was tough to integrate them seamlessly with our Vue frontend. Because we also access the webcam through the website, it was a massive challenge to stream video footage, run real-time object detection and classification on it, and show the results on the webpage simultaneously. We also had to find as many open-source ASL datasets as possible, which was definitely a challenge: with a limited budget and little time we could not cover every word in ASL, so we had to resort to spelling words out letter by letter. We also had trouble figuring out how to run real-time computer vision on a stream of ASL hand gestures.
## Accomplishments that we're proud of
We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are:
* Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community.
* Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement.
* User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority.
* Speech Recognition: Our success in implementing speech recognition technology is a source of pride.
* Multiple AI Models: We also loved merging natural language processing and computer vision in the same application.
## What we learned
We learned a lot about how accessibility works for individuals that are from the deaf community. Our research led us to a lot of new information and we found ways to include that into our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNN's. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project.
## What's next?
We have a ton of ideas planned for Sign Sync next!
* Translate between languages other than English
* Translate between other sign languages, not just ASL
* Native mobile app with no internet access required for more seamless usage
* Usage of more sophisticated datasets that can recognize words and not just letters
* Use a video image to demonstrate the sign language component, instead of static images
|
## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations and getting the blocks to move and generate took some trouble. Cutting back on how many details we add into the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldy gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to how Unity is used to render 3D environments, how assets and textures are edited in Blender, and what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as you're switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibilities are endless
|
partial
|
## Inspiration
Social networks are fascinating because of their rapid growth and their ability to provide openly accessible data.
## What it does
The Beaker notebook examines the relationship between stock markets and the average emotional score (positive is 1, negative is -1).
## How we built it
We collected and compared data for 8 companies such as Amazon, Google, and Netflix, drawing both from the NASDAQ API and from tweets that mentioned these names. Some statistical tests were conducted to assess the correlation levels.
## Challenges we ran into
The Twitter API has some serious limitations for this kind of task (e.g., timed queries)
## Accomplishments that we're proud of
Although not new, the idea of using social networks as an approximation of real industries can help researchers quantify the confidence with which they rely on alternative sources of information.
## What we learned
In many cases, phone applications can be useful for specific tasks. However, as this project has shown us, web apps and their APIs tend to be faster, lighter, and easier to access for developers and/or data scientists.
## What's next for Feeling Big Data:sentimental analysis of stock markets
Although only a proof of concept for now, the stock market sentiment analysis could be taken further by fitting a Hidden Markov Model, often used in time series analysis and forecasting, and assessing its performance on the Twitter data.
|
## Inspiration
We wanted to create a truly well-rounded platform for learning investing where transparency and collaboration are of utmost importance. With the growing influence of social media on the stock market, we wanted to build a tool that auto-generates a list of recommended stocks based on their popularity. This feature is called Stock-R (coz it 'Stalks' the social media....get it?)
## What it does
This is an all-in-one platform where a user can find all the necessary stock market resources (websites, videos, articles, podcasts, simulators, etc.) under a single roof. New investors can also learn from more experienced investors on the platform through the chatrooms or public stories. The Stock-R feature uses Natural Language Processing and sentiment analysis to generate a list of the most popular and most talked-about stocks on Twitter and Reddit.
## How we built it
We built this project using the MERN stack. The frontend is created using React. Node.js and Express were used for the server, and the database was hosted on the cloud using MongoDB Atlas. We used various Google Cloud APIs such as Google authentication, Cloud Natural Language for the sentiment analysis, and App Engine for deployment.
For the stock sentiment analysis, we used the Reddit and Twitter APIs to parse their respective platforms for instances where a stock/company was mentioned; each instance was given a sentiment value via the IBM Watson Tone Analyzer.
For Reddit, popular subreddits such as r/wallstreetbets and r/pennystocks were parsed for the top 100 submissions. Each submission's title was compared against a list of 3600 stock tickers, and if a mention was found, the submission's comment section was passed through the Tone Analyzer. Each comment was assigned a sentiment rating, the goal being to garner an average sentiment for the parent stock on a given day.
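As a rough sketch of that pipeline (PRAW credentials, the ticker list, and the `sentiment()` helper standing in for the Tone Analyzer call are all placeholders, not the actual implementation), the Reddit side might look like:

```python
# Sketch of the Reddit parsing step with a running per-ticker sentiment average.
import praw

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="stockhub")
TICKERS = {"AMZN", "GOOG", "NFLX"}  # in practice, a list of ~3600 symbols

def sentiment(text):
    """Placeholder for the IBM Watson Tone Analyzer call; returns a score."""
    return 0.0

scores = {}  # ticker -> running average sentiment
counts = {}

for submission in reddit.subreddit("wallstreetbets").top(limit=100):
    mentioned = [t for t in TICKERS if t in submission.title.upper()]
    if not mentioned:
        continue
    submission.comments.replace_more(limit=0)
    comments = submission.comments.list()
    if not comments:
        continue
    avg = sum(sentiment(c.body) for c in comments) / len(comments)
    for t in mentioned:
        counts[t] = counts.get(t, 0) + 1
        # incremental update of the running average for this ticker
        scores[t] = scores.get(t, 0.0) + (avg - scores.get(t, 0.0)) / counts[t]
```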
## Challenges we ran into
In terms of the chat application interface, integrating it with the main dashboard hub was a major issue, as it was necessary to carry the user's credentials forward without having them re-login to their account. This issue was resolved by producing a new chat application which didn't require credentials, just a username for the chatroom. We deployed this chat application independently of the main platform with a microservices architecture.
On the back-end sentiment analysis, we ran into the issue of efficiently storing the comments parsed for each stock, as the program iterated over hundreds of posts and commonly collected further data on an already-parsed stock. This issue was resolved by locally generating an average sentiment for each post and assigning it to a dictionary key-value pair. If sentiment scores were generated across multiple posts, the averages were folded into the existing value.
## Accomplishments that we're proud of
## What we learned
A few of the components that we were able to learn and touch base one were:
* REST APIs
* Reddit API
* React
* NodeJs
* Google-Cloud
* IBM Watson Tone Analyzer
* Web Sockets using Socket.io
* Google App Engine
## What's next for Stockhub
## Registered Domains:
* stockhub.online
* stockitup.online
* REST-api-inpeace.tech
* letslearntogether.online
## Beginner Hackers
This was the first hackathon for 3 of the 4 hackers on our team
## Demo
The app is fully functional and deployed using a custom domain. Please feel free to try it out and let us know if you have any questions.
<http://www.stockhub.online/>
|
## Inspiration
As college students, we can all relate to having a teacher who was not engaging enough during lectures or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback on how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor’s lesson audio in order to differentiate between various emotions portrayed through his speech. We then use an original algorithm to grade the lecture. Similarly we record and score the professor’s body language throughout the lesson using motion detection/analyzing software. We then store everything on a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added the pages we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them for our purposes instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single-lecture summary display. This is based on our backend database setup. There is also room for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it lets the developer get an application running quickly, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a phpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset to increase the accuracy of my results and normalized the data so it could be visualized with a pie chart, providing an easy integration with the database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus including fixtures, sensors, and materials. Our team had to consider how this device would be tracking the teacher’s movements and hearing the volume while not disturbing the flow of class. Currently the main sensors being utilized in this product are a microphone (to detect volume for recording and data), nfc sensor (for card tapping), front camera, and tilt sensor (for vertical tilting and tracking professor). The device also has a magnetic connector on the bottom to allow itself to change from stationary position to mobility position. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is to scan in using their school ID, then either check his lecture data or start the lecture. Overall, the professor is able to see if the device is tracking his movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced was learning how to model the product in a way that would feel simple for the user to understand through Solidworks and Figma (using it for the first time). I had to do a lot of research through Amazon videos and see how they created their amazon echo model and looking back in my UI/UX notes in the Google Coursera Certification course that I’m taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. Some problems couldn't be solved this way, as the logic was specific to our software. Fortunately, those problems just needed time and a lot of debugging with some help from peers and existing resources, and since React is JavaScript-based, I was able to draw on past experience with JS and Django despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in a dependency hell, and had to rethink the architecture of the whole project to not over engineer it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data to train my model and finding a way to quantify the fluctuations that correspond to different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to smoothly operate. For example, Riki is a mechanical engineering major with little coding experience, but we were able to allow his strengths in that area to create us a visual model of our product and create a UI design interface using Figma. Sovannratana is a freshman that had his first hackathon experience and was able to utilize his experience to create a website for the first time. Braulio and Gisueppe were the most experienced in the team but we were all able to help each other not just in the coding aspect, with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, add motion tracking data feedback to the professor to get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal and gather data on student performance to help the teacher better understand where students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms.
|
partial
|
## Inspiration
Having struggled with depression in the past, we wanted to build a tool that could help people in that situation detect it early and give them the tools they need to get healthy again.
## What it does
Our Chrome extension uses Lexalytics' Semantria API to detect when our users have a bad day, and bombard them with cuteness when they do. Additionally, we can detect the early signs of depression and direct our users to our website, which features a variety of resources to help them.
## How we built it
We used a Chrome extension to track messages and web searches from a user, which sends data to the Semantria API for lexical analysis. The returned sentiment value is recorded and pooled over the course of the day/week/month to detect a person's negativity.
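A minimal sketch of how that daily pooling could look with PyMongo, assuming a `sentiments` collection with `user`, `timestamp`, and `score` fields (the real schema and threshold may differ):

```python
# Hypothetical daily-average query over recorded sentiment scores.
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["lemonaid"]

def daily_negativity(user_id):
    since = datetime.utcnow() - timedelta(days=1)
    pipeline = [
        {"$match": {"user": user_id, "timestamp": {"$gte": since}}},
        {"$group": {"_id": "$user", "avg_score": {"$avg": "$score"}}},
    ]
    result = list(db.sentiments.aggregate(pipeline))
    return result[0]["avg_score"] if result else 0.0

# Flag the user for a dose of cuteness if the day trends negative.
if daily_negativity("alice") < -0.4:  # threshold is illustrative
    print("Time for some puppies!")
```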
## Challenges we ran into
Having never worked with PyMongo before, connecting to MongoDB and figuring out the queries was challenging. We had a hard time figuring out the logic behind compressing and filtering the raw data to predict a person's mood. We also had challenges integrating the Semantria API, and in the end we were only able to successfully install it on one of our computers. Luckily, that was enough for us to integrate it with our server and build the project successfully!
## Accomplishments that we're proud of
This was the first time for all of us building a Chrome extension and using Python/Flask as a back-end, so we're proud to have built something that actually runs smoothly!
## What we learned
We learned just how powerful the Semantria API actually is when it comes to sentiment analysis, giving us a sentiment score precise to the hundredth of a unit. We also learned a lot about building a Python back-end and connecting it to a Mongo database.
## What's next for LemonAid
Given the resources, we plan on adding additional metrics to help detect the early symptoms of depression, such as tracking time spent on social media or the number of Facebook conversations our users engage in, both of which are directly correlated with depression. We would also like to use these tools with the Semantria API to help detect other mental illnesses such as bipolar and anxiety disorders.
|
## What it does
Lil' Learners is a fun new alternative to learning tools for students from kindergarten through early elementary school. It allows teachers to create classes for their students, take note of each student's learning style, strengths, and weaknesses, and lets teachers and parents track student progress. Students are assigned classes based on what each of their teachers needs them to practice and are presented with a variety (in the future) of fun, interactive games that take the teachers' notes and generate questions from them. Students gain points based on how many questions they get right while playing, and have an incentive to keep playing, and in turn studying, because they own virtual islands they can customize by buying cosmetic items with the points earned from studying.
## How we built it
Using OAuth and a MongoDB database, Lil' Learners is a Flask-based web application whose structural backbone is the accounts and courses class hierarchy. We created classes for all the types of accounts and courses, and wrote functions that check for duplicate accounts by both username and email and automatically save accounts to the database (or courses to teachers and students, or even children to their parents) upon instantiation; a minimal sketch of the duplicate-account check follows below. On the front end, Lil' Learners uses Flask, HTML, and CSS to create a visually appealing and interactive GUI and web interface.
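The duplicate-account check could look something like this minimal Flask/PyMongo sketch (collection names, field names, and the route are assumptions, not the project's actual code):

```python
# Hypothetical sketch of the duplicate-account check on signup.
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
accounts = MongoClient("mongodb://localhost:27017")["lil_learners"]["accounts"]

@app.route("/signup", methods=["POST"])
def signup():
    data = request.get_json()
    # Reject the signup if either the username or the email is already taken.
    if accounts.find_one({"$or": [{"username": data["username"]},
                                  {"email": data["email"]}]}):
        return jsonify({"error": "account already exists"}), 409
    accounts.insert_one({"username": data["username"],
                         "email": data["email"],
                         "role": data.get("role", "student")})
    return jsonify({"status": "created"}), 201
```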
## Challenges we ran into
Some challenges were making Auth0 work with the login system we developed. One of the biggest setbacks was the three.js model we wanted to build to show off each student's island in an interactive and cool-looking way; despite working at it for several hours, the APIs and documentation for displaying 3D models in a Flask and HTML environment seemed to be a lost cause.
## Accomplishments that we're proud of
We are super proud of Lil' Learners because, despite the various types of software and new/old skills that needed to be learned and merged together for it to work, we managed to create something we could show off that conveys the proof of concept for our idea.
## What we learned
We learned a lot about the interactions between different pieces of software and how to integrate them. Through the process of making Lil' Learners we had the opportunity to try out data management, back-end development, and general software development skills with MongoDB, OAuth, and GoDaddy, and to learn how they work and interact with other elements in a web application.
## What's next for Lil' Learners
We are hoping to expand Lil' Learners' capabilities further, such as finishing the three.js models, fully integrating OAuth with our account system, launching our web app on our GoDaddy domain, creating a larger variety of games, and providing better visualizations of student statistics along with better use of the points and adaptive learning systems.
|
## Inspiration
Over the summer, one of us was reading about climate change and realised that most of the news articles he came across were very negative and affected his mental health, to the point that it was hard to think about the world as a happy place. Then one day he watched a YouTube video talking about the hope that exists in that sphere and realised the impact this "goodNews" had on his mental health. Our idea is inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using Cohere), and once it passes a certain threshold (we store the scores using CockroachDB), we show them a positive news article in the same topic area they were reading about.
We do this by performing text analysis with a Chrome extension front-end and a Flask and CockroachDB backend that uses Cohere for natural language processing.
Since a lot of people also listen to news via video, we also created a part of our Chrome extension to transcribe audio to text, so we included that at the start of our pipeline as well! At the end, if the "negativity threshold" is passed, the Chrome extension tells the user that it's time for some good news and suggests a relevant article.
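A rough sketch of the threshold logic, with a placeholder standing in for the finetuned Cohere classifier and a CockroachDB table accessed through psycopg2 (the table and column names are assumptions):

```python
# Sketch of the per-user negativity tracking; classify_article() stands in for
# the Cohere classification call, and the schema below is illustrative only.
import psycopg2

conn = psycopg2.connect("postgresql://root@localhost:26257/goodnews")
THRESHOLD = 5.0

def classify_article(text):
    """Placeholder: returns a negativity score in [0, 1] from the classifier."""
    return 0.0

def record_page(user_id, article_text):
    score = classify_article(article_text)
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE users SET negativity = negativity + %s WHERE id = %s "
            "RETURNING negativity",
            (score, user_id))
        total = cur.fetchone()[0]
    conn.commit()
    return total > THRESHOLD  # True -> time to suggest a positive article
```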
## How we built it
**Frontend**
We used a Chrome extension for the front end, which involved handling the user experience and making sure that our application actually gets the user's attention while being useful. We used React.js, HTML, and CSS to handle this. There were also a lot of API calls, because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend.
**Backend**
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints that websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a cockroach db node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage the use of cockroach DB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use CockroachDB to create a database of news articles and topics that also supports multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing cockroach db tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not.
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that.
|
partial
|
## Inspiration
How did you feel when you first sat behind the driving wheel? Scared? Excited? All of us on the team felt a similar way: nervous. Nervous that we'll drive too slow and have cars honk at us from behind. Or nervous that we'll crash into something or someone. We felt that this was something that most people encountered, and given the current technology and opportunity, this was the perfect chance to create a solution that can help inexperienced drivers.
## What it does
Drovo records average speed and composite jerk (the first derivative of acceleration with respect to time) over the course of a driver's trip. From this data, it determines a driving grade based on the results of an SVM machine learning model.
## How I built it
The technology making up Drovo can be summarized in three core components: the Android app, the machine learning model, and the Ford head unit. Interaction can start from either the Android app or the Ford head unit. Once a trip is started, the Android app compiles data from its own accelerometer along with multiple features from the Ford head unit, which it feeds to an SVM machine learning model. The results of the analysis are summarized with a single driving letter grade which is read out to the user, surfaced to the head unit, and shown on the device.
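A minimal scikit-learn sketch of how a grade could be derived from trip features follows; the features, labels, and data points are illustrative, not Drovo's actual training set:

```python
# Illustrative SVM grading on [average speed, composite jerk] trip features.
import numpy as np
from sklearn.svm import SVC

# Hypothetical labeled trips: columns are (avg_speed_kmh, composite_jerk).
X = np.array([[45.0, 0.8], [60.0, 2.5], [50.0, 1.1], [70.0, 3.9]])
y = np.array(["A", "C", "B", "D"])  # driving grades as class labels

clf = SVC(kernel="rbf")
clf.fit(X, y)

new_trip = np.array([[55.0, 1.4]])
print("Trip grade:", clf.predict(new_trip)[0])
```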
## Challenges I ran into
Much of the hackathon was spent learning how to properly integrate our Android app and machine learning model with the Ford head unit via SmartDeviceLink. This led to multiple challenges along the way, such as figuring out how to properly communicate from the main Android activity to the SmartDeviceLink service and from the service to the head unit via RPC.
## Accomplishments that I'm proud of
We are proud that we were able to make a fully connected user experience that enables interaction from multiple user interfaces such as the phone, Ford head unit, or voice.
## What I learned
We learned how to work with SmartDeviceLink, various new Android techniques, and vehicle infotainment systems.
## What's next for Drovo
We think that Drovo should be more than just a one time measurement of driving skills. We are thinking of keeping track of your previous trips to see how your driving skills have changed over time. We would also like to return the vehicle data we analyzed to highlight specific periods of bad driving.
Beyond that, we think Drovo could be a great incentive for teenage drivers to be proud of good driving. By implementing a social leaderboard, users can see their friends' driving grades, which will in turn motivate them to increase their own driving skills.
|
## Inspiration:
Our inspiration for this app comes from the critical need to improve road safety and assess driver competence, especially under various road conditions. The alarming statistics on road accidents and fatalities, including those caused by distracted driving and poor road conditions, highlight the urgency of addressing this issue. We were inspired to create a solution that leverages technology to enhance driver competence and reduce accidents.
## What it does
Our app has a React frontend that connects to a GPS signal to track a given car's speed and acceleration. The frontend also includes a map and a record feature which, through a Cohere LLM, is capable of detecting violent or hateful speech and alerting police, given the road conditions.
On the backend, we have numerous algorithms and computer vision models fine-tuned from YOLOv5 and YOLOv8. These models take in images from a camera feed and detect surrounding cars, the color of nearby traffic lights, and the size of the license plates of the cars in front of the driver.
By detecting license plates, we are able to infer the acceleration of a car (based on the change in size of the plates) and assess the driver's habits. By checking for red lights, correlated with the GPS data, we are able to determine a driver's reaction time and can give a rating of the driver's capabilities.
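A back-of-the-envelope sketch of that inference, using a pinhole-camera approximation with an assumed plate width and focal length (the real system is more involved):

```python
# Rough distance/relative-acceleration estimate from license-plate bounding boxes.
PLATE_WIDTH_M = 0.30      # assumed physical plate width
FOCAL_LENGTH_PX = 900.0   # assumed camera focal length in pixels

def distance_from_box(box_width_px):
    """Pinhole model: the car is farther away when the plate appears smaller."""
    return PLATE_WIDTH_M * FOCAL_LENGTH_PX / box_width_px

def relative_acceleration(box_widths_px, dt):
    """Second difference of estimated distance over frames spaced dt seconds apart."""
    d = [distance_from_box(w) for w in box_widths_px]
    v = [(d[i + 1] - d[i]) / dt for i in range(len(d) - 1)]
    return [(v[i + 1] - v[i]) / dt for i in range(len(v) - 1)]

# Example: the plate grows in the frame, so the gap to the car ahead is closing.
print(relative_acceleration([80, 90, 105], dt=0.5))
```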
Finally, an eye-tracking model is able to determine a driver's concentration, and focus on the road.
All this paired with its interactive mobile app makes our app the ultimate replacement for any classic dashcam, and protects the driver from the road's hazards.
|
## Inspiration
We're always reminded of how scary carbon emission can be on a larger scale, but we never know exactly how much we really contribute. Without individuals knowing about the effects they produce on the environment, we fall into the risk of following the model of "Tragedy of the Commons" - an idea that an individual will disregard environmental concerns for the sake of him or herself because of his or her perceived low contribution to environmental damage. With this regard, we aimed to work on a program that would provide more awareness and allow the user to become more "conscious" about his or her actions.
## What it does
The application can be broken down into multiple parts: AC Recommender, Overall Consumption Stats, and Acceleration Checker.
**AC Recommender**
The AC Recommender suggests, on the vehicle's head unit, whether the driver should turn on the AC, returning a recommendation that depends on whether the user's needs can be met. Given the desired cabin temperature and the current vehicle speed, the AC Recommender first calculates a temperature wind index to estimate what temperature can be reached in the vehicle if the windows are down. For instance, if the reduced temperature is below the desired temperature threshold and the vehicle speed is below 55 mph, the AC Recommender tells the user to roll down their windows. The 55 mph figure represents the maximum speed a vehicle can travel before AC becomes more fuel-efficient (due to the increased drag on the vehicle).
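A simplified sketch of that decision logic; the open-window temperature-reduction model below is a stand-in assumption, and only the 55 mph rule comes from the description above:

```python
# Illustrative AC-vs-windows recommendation; estimate_open_window_temp() is a
# placeholder for the temperature wind index calculation.
AC_BREAKEVEN_MPH = 55  # above this, drag makes AC the more fuel-efficient choice

def estimate_open_window_temp(cabin_temp_f, speed_mph):
    """Assumed airflow cooling model: faster airflow lowers the felt temperature."""
    return cabin_temp_f - 0.15 * speed_mph

def recommend(desired_temp_f, cabin_temp_f, speed_mph):
    if speed_mph >= AC_BREAKEVEN_MPH:
        return "Use AC"
    if estimate_open_window_temp(cabin_temp_f, speed_mph) <= desired_temp_f:
        return "Roll down the windows"
    return "Use AC"

print(recommend(desired_temp_f=72, cabin_temp_f=80, speed_mph=40))
```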
**Overall Consumption Stats**
This part of the application displays the total amount of fuel consumed during an entire trip. Furthermore, the application will report the total carbon footprint in kg emission of CO2. This type of emission data allows the user to be more aware of their trip during driving, seeing the immediate and profound effects of long-term travel.
**Acceleration Checker**
This part of the application keeps track of how much the user accelerates over a certain threshold throughout his or her entire trip. The knowledge of rapid accelerations will contribute towards providing user feedback on optimizing driving to overall reduce fuel consumption.
## How we built it
We utilized the Ford API to retrieve various vehicle data, which we then used to output useful statistics and recommendations in the program UI on the vehicle's head unit. All of this was done in Android Studio.
## Challenges we ran into
Due to limited time and being first-time hackers, we did not have time to implement everything we wanted. Furthermore, understanding Ford's API was an initial challenge, but a fun experience nonetheless. We eventually overcame the initial challenge and appreciated the ease of data retrieval and the use of the API.
## Accomplishments that we're proud of
We are proud of the ideas and choices we made for the program. Additionally, we are proud that our program is able to provide information and motivation for users to keep improving their own habits and to inspire others to think about CO2 emissions.
## What we learned
We learned how to use an API and how significant vehicle emissions truly are.
## What's next for Econscious
We want to add and modify two more program features to Econscious.
For the accelerator checker, we would extend its functionality to also include rapid braking checking. The acceleration and braking data will be used to provide an additional energy consumption statistic that reflects the effects of driving too aggressively. Further, the feature would better advise the user on how to drive more efficiently to save as much fuel as possible while maintaining a reasonable speed.
An additional feature we would like to add is the Idle-Time Checker. This feature will measure the total time the user's vehicle is idle but running during his or her entire trip. The feature would also report the total emissions during this idle time, giving the user more information about the effects of idling so they can be more conscious of their carbon output.
Lastly, we are aiming to implement the above features as a mobile android app that would retain all this data for future reference that will be useful for tracking progress.
|
winning
|
## Inspiration
**According to the Alzheimer’s Association, 6.7 million Americans age 65 and older are living with Alzheimer's in 2023.** Alzheimer’s is a gradually progressive brain disorder involving memory loss. It is the most common form of dementia; people forget who their loved ones are and cannot carry out daily tasks anymore. Alzheimer’s and dementia is not only a personal health crisis but impacts family, friends, and caregivers. It is important to focus on the prevention and slow progression of symptoms related to memory loss. **How can we use the Memory Palace – a psychology-based technique where people can associate mnemonic images in their mind to places they know – to help prevent and ease the lives of those with Alzheimer’s and dementia?**
There is a lot of psychology research focused on memory and learning that can be used to guide technical applications and enhance memory performance. Our team is very interested in memory and learning mechanisms, which have inspired our idea to use scientific background to improve memory.
## What it does
**Memory Playground is a web application that helps boost memory recall and retention through the Memory Palace technique, especially for senior citizens and those with Alzheimer’s and dementia.** The application allows users to pick a setting/environment and list out words that are related. Then, we create broad yet distinct categories for the words. From here, we have users practice classifying objects, allowing them to create visual mappings of images physically and mentally. This allows them to strengthen memory connections and enhance memory performance. We also give them other words that fall into those categories to expand on the established mental connections.
Memory Playground also uses an integration of zero-knowledge proofs. It stores uploaded data securely on servers and allows users to anonymously interact with the application without revealing personally identifiable information, which is ideal for those concerned about privacy.
## How we built it
We used the OpenAI API to prompt their GPT-4 model for category groupings and new objects. Then we used Together AI's Stable Diffusion model for image generation. From there we connected the Python components to the web side using the Fetch API. We built the web application with React, HTML, CSS, and JS, and used Flask to integrate the Python backend with the frontend.
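For example, the category-grouping prompt could be issued roughly like this (the model name, prompt wording, and word list are assumptions, not the exact ones used in Memory Playground):

```python
# Sketch of asking GPT-4 for broad, distinct categories over user-supplied words.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

words = ["stove", "couch", "lamp", "fork", "pillow"]
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": ("Group these household objects into 2-3 broad but distinct "
                    "categories and list each word under one category: "
                    + ", ".join(words)),
    }],
)
print(response.choices[0].message.content)
```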
## Challenges we ran into
1) Working with multiple servers
2) Integrating Flask with React
3) Learning to use multiple APIs to integrate various services
4) Implementing drag and drop functionality using use states
5) Limited credit and GPU usage
6) Utilizing multimodal machine learning models
## Accomplishments that we're proud of
We are proud of how efficiently and quickly we were able to understand and implement new technologies and concepts. We were new to Flask and to a lot of the recent AI technology. We are also proud of how we worked as a team, from the ideation stage through product creation. Additionally, we are proud of how we were able to integrate multiple varying technologies.
## What we learned
We developed a lot of technical skills involving using APIs, prompt engineering, model optimization, Flask, managing multiple server applications, and web development. We also learned a lot about the practical applications of AI in healthcare towards a potential treatment of Alzheimer's and dementia. It is critical for us as a society to consider how we can *prevent*, not just treat, such diseases.
We also learned a lot about the engineering design process. Throughout the hackathon we went through research, ideation, designing, prototyping, and building phases. We gained a lot of skills through this process that helped us as we adapted to new technologies and grew our knowledge base.
## What's next for Memory Playground
1) Scaling: We are looking to make our application public and available to the community. This would involve cloud data storage (ex. Firestore) and increased efficiency to manage larger requests.
2) Speech-to-text recognition: We would like to implement a speech to text recognition model so that we can utilize verbal connections and improve accessibility.
|
# GREENTRaiL
## Inspiration
Hiking has exploded in popularity since the pandemic, with more than 80 million Americans hiking in 2022 alone. Hiking has large mental and physical health benefits, but it can be daunting to select routes as a beginner. It is difficult to imagine how a route will feel before going on it, especially without input from those who have hiked it before.
In addition, hikers often don't take wildlife into account when choosing routes. Animals such as elk have been shown to change behavior up to 1 mile away from hiking trails, and this has far-reaching implications for the greater biosphere. With climate change already threatening traditional migration paths, increased human activity can be detrimental to these fragile patterns.
GREENTRaiL is an app that will give users personalized recommendations and help make hiking more eco-friendly.
## What it does
Using biometric and environmental data, GREENTRaiL recommends hiking trails to users based on the average statistics of others who have completed the hike and synthesizes difficulty ratings. It also uses migratory and wildlife data to suggest hikes that are less obtrusive to local migratory patterns.
## How we built it
UI/UX prototypes were first sketched traditionally, then brought into Procreate to develop the final color and brand identity. High-fidelity wireframing was then done in Figma, and the final UI/UX was refined from those prototypes.
GREENTRaiL was coded in Swift and integrates the Terra API to get wearable data and aggregate data from everyone who has previously taken the trail.
## Challenges we ran into
All of us were new to Swift, and one of us couldn't run Xcode on their computer at all. Our UX/UI designer had also never designed for iOS before, so there was a bit of a learning curve. Our coders ran into a lot of difficulty integrating the Terra API into the code, as well as general problems with front-end and back-end integration.
## What we learned
We learned how to develop with Swift, prototype for iOS in Figma, and integrate the Terra API.
## What's next for GREENTRaiL
Future areas of development include syncing with other nature apps such as iNaturalist's API and AllTrails to give the user even more comprehensive wildlife data and qualitative descriptions.
## Figma Design
<https://www.figma.com/file/S9wlv984UYBPaX8IiPqRJe/greentrAIl?type=design&node-id=2%3A87&mode=design&t=mIexhgpxiinAegGd-1>
## Technologies Used
|
## Inspiration
Alzheimer’s impacts millions worldwide, gradually eroding patients’ ability to recognize loved ones, creating emotional strain for both patients and families. Our project aims to bridge this emotional gap by simulating personal conversations that evoke familiarity, comfort, and connection. These calls stimulate memory recall and provide emotional support, helping families stay close even when separated by distance, time zones, or other obligations.
## What it does
Dear simulates conversations between Alzheimer’s patients and their family members using AI-generated voices, recreating familiar interactions to provide comfort and spark memories. The app leverages Cartesia for voice cloning and VAPI for outbound agent calls, building personalized voice agents for each family member. These agents are engineered to gently manage memory lapses and identity questions, ensuring every interaction feels natural and empathetic.
Family members can upload their voices, and the system automatically generates a unique agent for each one through VAPI. Over time, these agents, powered by their own large language models (LLMs), learn from interactions, creating increasingly personalized and meaningful conversations that strengthen emotional connections.
## How we built it
The frontend was built with Next.js and TailwindCSS, focusing on an intuitive, responsive design to ensure families can easily upload voices and initiate conversations. We connected multiple APIs to streamline the workflow, ensuring a smooth and engaging user experience.
Our backend was developed using Flask, which allowed us to efficiently handle API requests, manage voice data, and coordinate multiple services such as Cartesia and VAPI. The backend plays a crucial role in connecting the user-facing frontend with the voice cloning and call management APIs, ensuring a seamless experience.
## Challenges we ran into
The biggest challenge was managing complexity. As the idea evolved, we had to simplify our approach without sacrificing impact. Integrating VAPI and managing voice data at scale posed technical challenges, requiring creative problem-solving and iteration. Streamlining the agent-creation process became essential to deliver a seamless experience for users.
## Accomplishments that we're proud of
1. Successfully integrated multiple APIs to create a smooth user experience.
2. Overcame technical challenges to build and test functional voice agents within a short timeframe.
3. Developed a system that could redefine how Alzheimer’s patients connect with their families, promoting emotional well-being through meaningful conversations.
## What we learned
We learned the value of pivoting when things became overly complex. Instead of building everything from scratch, we utilized existing APIs to accelerate development. While we explored Speech-to-Speech models, we chose a Speech-to-Text/Text-to-Speech pipeline for efficiency during the hackathon. This approach allowed us to focus on delivering a working prototype while considering future enhancements.
## What's next for DEAR . . .
Our next goal is to implement a Speech-to-Speech solution for more natural, real-time conversations. As the agents interact more, they will accumulate context, improving memory stimulation and tracking emotional well-being over time. We also plan to enhance remote monitoring, enabling families to stay connected and informed about their loved one’s emotional health, even when they can’t be physically present.
## Tech Stack!
* Next.js and TailwindCSS: Frontend development
* VAPI: Agent creation and outbound call management
* Cartesia: Voice cloning
* Flask: Backend and API handling
|
partial
|
## Inspiration
The idea was to help people who are blind to be able to discreetly gather context during social interactions and general day-to-day activities.
## What it does
The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's Vision Recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame.
## How I built it
We took an RPi Camera and increased the length of the cable. We then made a hole in one lens of the glasses and fit the camera in there. We also added a touch sensor to discreetly control the camera.
## Challenges I ran into
The biggest challenge we ran into was Natural Language Processing, as in trying to parse together a human-sounding sentence that describes the scene.
## What I learned
I learnt a lot about the different vision APIs out there and about creating/training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time.
|
## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book.
Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars!
Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible.
## What it does
Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud.
## How we built it
**Software (Alex K)** -
On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS.
The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words.
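A toy version of that unigram filter is shown below; the frequency table and candidate generation are illustrative, and a real table would come from a large English corpus:

```python
# Toy unigram re-ranking: pick the candidate word with the highest corpus frequency.
UNIGRAM_COUNTS = {"introduction": 120000, "intreduction": 0, "the": 5000000}

def best_candidate(candidates):
    return max(candidates, key=lambda w: UNIGRAM_COUNTS.get(w.lower(), 0))

# The OCR engine is confident about most characters but unsure of a few,
# producing several candidate spellings for a word.
print(best_candidate(["Intreduction", "Introduction"]))  # -> "Introduction"
```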
Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation.
**Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses.
## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging!
Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time.
Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length.
We also couldn’t find magnets until the last minute and made a guess on the magnet mount hole sizes and had an *exciting* Dremel session to fit them which resulted in the part cracking and being beautifully epoxied back together.
## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
|
## Inspiration
On the bus ride to another hackathon, one of our teammates was trying to get some sleep but was having trouble because of how complex and loud the sound of people on the bus was. This led to the idea that in a sufficiently noisy environment, hearing could be just as descriptive and rich as seeing. Therefore, to better enable people with visual impairments to navigate and understand their environment, we created a piece of software that can describe and create an auditory map of one's surroundings.
## What it does
In a sentence, it uses machine vision to give individuals a kind of echolocation. More specifically, one simply needs to hold their cell phone up, and the software works to guide them using a 3D auditory map. The video feed is streamed to a server where our modified version of the YOLO9000 classification convolutional neural network identifies and localizes the objects of interest within the image. It then returns the position and name of each object back to the phone. It also uses the IBM Watson API to further augment its readings by validating which objects are actually in the scene and whether or not they have been misclassified.
From here, we make it seem as though each object essentially says its own name, so that the individual can create a spatial map of their environment purely through audio cues. The sounds get quieter the further away the objects are, and the left/right balance shifts as the object moves around the user. The phone also records its orientation, and remembers where past objects were for a few seconds afterwards, even if it is no longer seeing them.
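A rough Python sketch of how distance-based volume and left/right balance could be computed from an object's position relative to the listener; the inverse-distance falloff and equal-power panning here are common audio choices and are assumptions, not necessarily what SoundSight uses.

```python
import math

def spatial_gains(obj_x, obj_z, max_distance=10.0):
    """Map an object's position relative to the listener (x = right, z = forward,
    in metres) to per-ear gains. Attenuation law and pan curve are assumptions."""
    distance = math.hypot(obj_x, obj_z)
    # Quieter with distance, clamped to [0, 1].
    volume = max(0.0, 1.0 - distance / max_distance)
    # Pan from -1 (hard left) to +1 (hard right) based on bearing.
    pan = math.atan2(obj_x, obj_z) / (math.pi / 2)
    pan = max(-1.0, min(1.0, pan))
    # Equal-power panning keeps perceived loudness roughly constant across the pan.
    left = volume * math.cos((pan + 1) * math.pi / 4)
    right = volume * math.sin((pan + 1) * math.pi / 4)
    return left, right
```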
However, we also thought about where in everyday life you would want extra detail, and one aspect that stood out to us was faces. Generally, people use specific details of an individual's face to recognize them, so using Microsoft's face recognition API, we added a feature that allows our system to identify and follow friends and family by name. All one has to do is register their face as a recognizable face, and they become their own identifiable feature in one's personal system.
## What's next for SoundSight
This system could easily be further augmented with voice recognition and processing software to allow for feedback and a much more natural experience. It could also be paired with a simple infrared imaging camera for navigation at night, making it universally usable. A final idea for future improvement is to further enhance the machine vision of the system, thereby maximizing its overall effectiveness.
|
winning
|
# HouseMate
Roommate troubles? Someone forgot to take out the garbage AGAIN? Look no further! Everyone in your household can make an account and use the HouseMate web app to automatically schedule chores, get email/SMS reminders, and check for real-time updates on other roommates' chore completion. If you bought something for the house, add the expense from your account and receive an automatic calculation of who is owed money at the end of the month. Enjoy your new household synergy.
## Running This Code
This project is a work in progress. It can be run by cloning this repository and using cmd prompt (or equivalent) to navigate to the folder containing "server.js". Run the server with the command "node server.js" and open "<http://localhost:3000>" in your browser.
## Authors
Built at NWHacks 2018 by
* **Amber Donnelly** - [Website](https://amberdonnelly.github.io)
* **Cindy Zhang** - [GitHub](https://github.com/Cesium-Ice)
* **Rika SD** - [GitHub](https://github.com/rsd-2016)
|
# The Clapp
The Clapp was born out of a deep love, and a deep frustration. Alan was the organizer for the Simon Fraser Gaming Committee; as a naïve youth he just wanted to connect people and enjoy a nice weekend inside playing video games together — but all he found was logistical nightmares, frustrated friends, and a deep, burning desire to right these injustices.
Clapp, the Competitive LAN App, is the answer to Alan's woes; developed with his long-time friend and colleague Alexei Alexeivich Popov the Second (most commonly referred to as *Alex*) — together they are *NumBits*. The two set out to build something that would answer the majority of LAN competitors' questions within no more than 2 taps on a screen. At the same time, Alan saw that despite its simplicity, the app should never put a ceiling on what a creative organizer could put together. The app was to walk a fine line between clarity for users, and power for admins and organizers.
The back-end was designed by our two heroes in conjunction: Alan's extensive knowledge of Scala, Play and Postgres made rapid iteration easy, which allowed Alex to map out schemas that were flexible, highly normalized, and easy to work with both front- and back-end.
The design of the app was to be nerd-friendly and light: Clapp's signature dark blue (nickname *Flawed Azure*) was a choice that took several hours of guesstimating hex colour-codes to discover, combining the dark themes of many a code-editor and the crisp contrast it provides on white. Elegant, large type in the San Francisco typeface is used widely in the app, with several derivatives of *Flawed Azure* being used to visually separate more important information.
By now you might be wondering "why *Flawed Azure*?". Well, to be completely honest with you, dear Reader, it's because Azure is Flawed. The first hurdle Alan and Alex had to overcome was that of lacking documentation, external ports, internal ports, and other things that sound like they belong on submarines. The two put their heads together and wept, as help from Microsoft did not save the day. At our heroes' darkest moment, a light went on somewhere in the dark, and using a little `ifconfig` magic the two formed a connection between Clapp and Server, without using Azure at all.
But the night was dark and full of terrors as each developer fought the demons of his respective platform. Swift's type-safety made working with JSON tedious and error-prone, but Alex pushed on. Through the fire and flames Alan carried Clapp, as conflagrations erupted on server cores with every new constraint and business requirement. And finally, seven hours after the setting of the sun, they reached the calm — the eye of the storm, if you will — wherein smooth sailing was restored, and the two young men, as if mad, broke out into fits of laughter over the conversations they heard around them. It was each developer for themselves, in that overly-lit room, and as sleep-deprivation-induced babel made its way through the campus #OH and #nwhacks2016 made their way through the interwebs.
The two did not sleep; but neither did they see the sun rise. The last day of their journey together was dark and grey, and their agile fingers did not type quite as quickly as they once had. Cognitive load and sore necks took over. Alex briefly nodded off and face-desked, waking up several other people in the room. It was a trying time to the finish line, but one our protagonists faced with wisdom and preparedness; they had seen what nwHacks was capable of, and they were prepared to take it on.
And it's through their friendship, and mutual dream of an easier way of organizing LAN Competitions, that the two are here today, to bring you Clapp.
|
## Inspiration
Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women in Ontario make 88 cents for every dollar a man makes. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users.
## What it does
Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users will be provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives.
The app will be provided to users free of charge. As with any free service, the anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and spending from our users for participating partners. The customized reward is an opportunity for targeted advertising.
## Persona
Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising costs of living make it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards just for achieving budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provided words of encouragement and useful tips to maximize the chance of her success. She especially loves how she could set the goals and follow through on her own terms. The personalized reward was sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards.
## How we built it
We used: React, Node.js, Firebase, HTML & Figma
## Challenges we ran into
* We had a number of ideas but struggled to define the scope and topic for the project
* Different design philosophies made it difficult to maintain a consistent and cohesive design
* Sharing resources was another difficulty due to the digital nature of this hackathon
* On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it into our app
* Additionally, resolving merge conflicts proved to be more difficult than expected. The time constraint was also a challenge
## Accomplishments that we're proud of
* Working with technologies that were new to us, including Firebase and React Hooks
* On the design side, it was great to create a complete prototype of the vision of the app
* This being some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time
## What we learned
* We learned how to meet each other's needs in a virtual space
* The designers learned how to merge design philosophies
* How to manage time and work with others who are on different schedules
## What's next for Re:skale
Re:skale can be rescaled to include people of all genders and ages.
* More close integration with other financial institutions and credit card providers for better automation and prediction
* Physical receipt scanner feature for non-debt and credit payments
## Try our product
This is the link to a prototype app
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1>
This is a link for a prototype website
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
|
losing
|
## Inspiration
We all know that moment when you're with your friends, have the time, but don't know what to do! Well, SAJE will remedy that.
## What it does
We are an easy-to-use website that takes your current location and interests and generates a custom itinerary to fill the time you have to kill. Based on the time interval you indicate, we will find events and other things for you to do in the local area - factoring in travel time.
## How we built it
This webapp was built using a `MEN` stack: MongoDB, Express, and Node.js. Outside of the basic infrastructure, multiple APIs were used to generate content (specifically events) for users. These APIs were Amadeus, Yelp, and Google Directions.
## Challenges we ran into
Some challenges we ran into revolved around using APIs, reading documentation, and getting acquainted with someone else's code. Merging the frontend and backend also proved to be tough, as members had to find ways of integrating their individual components while ensuring all functionality was maintained.
## Accomplishments that we're proud of
We are proud of a final product that we legitimately think we could use!
## What we learned
We learned how to write recursive asynchronous fetch calls (trust me, after 16 straight hours of code, it's really exciting)! Outside of that we learned to use APIs effectively.
## What's next for SAJE Planning
In the future we can expand to include more customizable parameters, better form styling, or querying more APIs to be a true event aggregator.
|
## Inspiration
Inspired by the millions of students around the world who swear that if they were just travelling in another country, their lives would be so much better. We assure you that the grass is not always greener on the other side.
## What it does
The website collects reviews on the worst restaurants, hotels, and attractions and creates the worst possible itinerary for each city around the world. Based on the user’s deepest desires, it outputs the worst possible places to be in their dream city. The site also hosts a chat, comment, and like component where you can discuss itineraries and particularly distasteful sites. In the spirit of social distancing but still bringing people together, the website also utilises a matching algorithm to match similar itineraries together, allowing you to find a travel buddy for the worst trip ever.
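The writeup does not spell out the matching algorithm; one simple, hedged sketch is to treat each itinerary as a set of place IDs and match on set overlap (Jaccard similarity). The threshold and data shapes below are illustrative assumptions, not the project's actual method.

```python
def jaccard(a, b):
    """Similarity between two itineraries represented as sets of place IDs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def best_travel_buddy(my_itinerary, other_itineraries, threshold=0.4):
    """Return the user whose itinerary overlaps ours the most, if it clears the threshold."""
    best_user, best_score = None, 0.0
    for user, itinerary in other_itineraries.items():
        score = jaccard(my_itinerary, itinerary)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else None
```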
## How We built it
This project was built on sheer determination and iMovie.
In all seriousness, this project was built with a variety of different tools as we all brought our unique perspective to the table.
Front-end:
React
Typescript
Javascript
Back-end:
Node.js
Express.js
Firebase
Twilio API
Places API
Design:
Figma
For greater detail, please check out our video!
## Challenges we ran into
A challenge we encountered was finding the review data. It seems like no one wants to report on terrible establishments, but we were able to find a way to get the data we needed (shout out to the Places API). Another issue was connecting the front end and the back end, but we did it, woo-hoo.
## Accomplishments that I'm proud of
It works for the most part!
We worked together to create a fully functioning front end with a very smooth design-to-developer handoff, implemented machine learning algorithms for the second time ever, and created a web app in multiple languages!
## What we learned
Some restaurants are really disgusting. Oh, and we all learned a lot about full-stack development and how to integrate APIs and databases.
## What's next for Trinogo
Increase functionality and create new features. First, we get reviews on all the restaurants in the world, next, world domination.
|
## Inspiration
In the work-from-home era, many are missing the social aspect of in-person work. And what part of the workday provided the most social interaction? The lunch break. culina aims to bring the social aspect back to work-from-home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded, and diversify their palate by exposing them to international cuisine (that uses food they already have on hand)!
## What it does
First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and displayed a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together.
## How we built it
The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe.
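As one hedged sketch of the recipe-selection step, assuming the cleaned dataset stores a semicolon-separated ingredient list per recipe (the file name and column names are assumptions, not our actual schema):

```python
import pandas as pd

# Assumed schema: each row has a 'name' and a semicolon-separated 'ingredients' string.
recipes = pd.read_csv("recipes_clean.csv")

def mutual_recipes(pantry_a, pantry_b, max_results=20):
    """Return recipes whose ingredients are fully covered by both users' pantries."""
    mutual = {item.lower() for item in set(pantry_a) & set(pantry_b)}

    def covered(ingredients):
        needed = {i.strip().lower() for i in ingredients.split(";")}
        return needed <= mutual

    matches = recipes[recipes["ingredients"].apply(covered)]
    return matches.head(max_results)[["name", "ingredients"]]
```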
## Challenges we ran into
We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also overcame technical roadblocks on their machines so they had to think outside the box for solutions.
## Accomplishments that we're proud of
We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application.
## What we learned
Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some basics of frontend, as well as explore image crawling techniques. For Camilla and David, React was a new skill for them to learn and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators.
## What's next for culina
Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
|
partial
|
## Inspiration
We wanted to create a simple GUI for data visualization
## What it does
SalesView is a data visualization tool. Users can post their data, see graphical analysis and statistics.
## How I built it
Firebase Real Time Database, React, Node.js, Google Maps API, Heroku
## Challenges I ran into
Connecting Firebase to the app, and optimizing data parsing to reduce render time.
## Accomplishments that I'm proud of
## What I learned
Learned how to use the Firebase Realtime Database and the Google Maps API.
## What's next for SalesView
|
## Inspiration
Part of what makes playing music so enjoyable is playing with other people and hearing how your part fits into the piece as a whole. For many non-professional musicians, finding the time to play with others is quite a challenge, and practicing on one's own, while beneficial, is not enough to fully master and enjoy a song.
## Our Solution
We envisioned a user-friendly program that would play along with you, just as a partner would. Using Azure voice recognition, Duet With Me can adjust a song's tempo and start at any location to accompany you as you practice and learn a piece.
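A minimal sketch of how a spoken command might be turned into playback parameters with the Azure Speech SDK for Python; the subscription key, region, and command phrasing are placeholders, since the project's actual command grammar is not described here.

```python
import re
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; uses the default microphone.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

def listen_for_command():
    """Recognize one utterance like 'start at measure 12 at 80 percent tempo'
    and extract the starting measure and tempo scale (phrasing is an assumption)."""
    result = recognizer.recognize_once()
    if result.reason != speechsdk.ResultReason.RecognizedSpeech:
        return None
    text = result.text.lower()
    measure = re.search(r"measure (\d+)", text)
    tempo = re.search(r"(\d+)\s*percent", text)
    return {
        "measure": int(measure.group(1)) if measure else 1,
        "tempo_scale": int(tempo.group(1)) / 100 if tempo else 1.0,
    }
```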
## Uniqueness
No product currently exists on the market that allows amateur musicians to play their part with the rest of the ensemble. Our solution allows you to upload and combine different supporting instruments to complement your musical role without the inconvenience of purchasing expensive software.
## Benefits
By using Duet With Me you are able to hear how your part fits into the piece as a whole. Cooperative learning has been shown to improve several key attributes of aspiring musicians such as retention, learning transference and most importantly confidence. (D. W. Johnson & R. Johnson, 1999; D. W. Johnson et al., 2007; R. T. Johnson & D. W. Johnson, 1984, 2002)
|
## Inspiration
I have searched for data before and been underwhelmed by the quantity and accessibility of a large portion of the data on the web.
## What it does
It provides a marketplace for buying and selling data, effectively analyzing the demand curve for certain queries and adjusting the price per query accordingly.
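As an illustration only (DataHub's real pricing rule is not described here), a demand-based price adjustment could look something like the sketch below, with the target volume and elasticity as assumed parameters.

```python
def adjusted_price(base_price, queries_last_week, target_queries=100, elasticity=0.3):
    """Nudge the per-query price up when demand exceeds a target volume and
    down when it falls short. The linear adjustment and the parameter values
    are illustrative assumptions, not DataHub's actual pricing rule."""
    demand_ratio = queries_last_week / target_queries
    price = base_price * (1 + elasticity * (demand_ratio - 1))
    # Keep the price within sane bounds around the seller's base price.
    return max(0.5 * base_price, min(2.0 * base_price, price))
```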
## How I built it
Written from scratch; I set up a SQL database to store the data after it is uploaded as a CSV.
## Challenges I ran into
Reading large data streams efficiently.
## Accomplishments that I'm proud of
It's pretty cool that it works.
## What I learned
Node.js is annoying to debug.
## What's next for DataHub
* Add user support
* An actual payment scheme
* Better security
|
partial
|
## Inspiration
We wanted to bring the fun experience of Prisma to the immersive world of VR.
## What it does
The user explores Google street view photospheres transformed by our app into the style of different artists.
## How we built it
We have a distributed server architecture comprising a Node.js server and a Python server (which hosts our trained Torch models). The two run on different AWS instances and communicate with each other using the gRPC library. A latitude/longitude request is sent to the Node.js server, which saves a photosphere to Amazon S3 and sends the link to the Python server, which processes the image in the style of the artist and sends back an S3 image link. The Node.js server then sends the S3 image link to the VR app.
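A hedged sketch of the Python server's role, assuming the Torch network is wrapped in a callable that maps a PIL image to a styled PIL image; the bucket name and the `style_model` interface are assumptions, and the gRPC plumbing is omitted here.

```python
import io
import uuid
import boto3
import requests
from PIL import Image

s3 = boto3.client("s3")
BUCKET = "photosphere-styles"  # placeholder bucket name

def stylize_photosphere(source_url, style_model):
    """Fetch the photosphere the Node.js server saved to S3, run it through a
    pre-loaded style-transfer model, upload the result, and return its link.
    `style_model` stands in for the Torch network and is an assumption."""
    image = Image.open(io.BytesIO(requests.get(source_url, timeout=30).content))
    styled = style_model(image)
    key = f"styled/{uuid.uuid4()}.jpg"
    buf = io.BytesIO()
    styled.save(buf, format="JPEG")
    buf.seek(0)
    s3.upload_fileobj(buf, BUCKET, key)
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"
```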
## Challenges we ran into
1) The photosphere images for VR are 16MB. Right now, deep learning models typically handle images closer to 1MB.
2) Training and integrating a deep learning network into Python.
3) Decreasing the latency of the model in creating different styles which is difficult as it is an iterative optimization process.
|
## Inspiration
Virtual reality is a blooming technology. It has a bright future regardless of which sector we use it in, from medical training (holographic human body analysis) to entertainment. For me in particular, it's an accessory that I believe our civilization may one day be obsessed with, like the television and the internet.
## What it does
This app is capable of closing the border between the real world and what is "supposed to be" scientific fantasy. It uses the concepts of both augmented reality and virtual reality, bringing us one step closer to a different kind of hologram. This app is, in fact, far better at what it does than the $4000 HoloLens, which relies on a projector in the side of the lens and has a very narrow field of view. This app opens up the virtual and augmented worlds in a way that makes it a much cheaper alternative than its counterpart, with many other possibilities ranging from a virtual observatory to confidential information transfer in the form of holograms.
## How I built it
Using Unity, Vuforia, Google VR, the Android SDK, and Google Cardboard, this app was designed to merge our own world with virtual and augmented reality until it becomes accepted reality. For a while I had to work with the differing aspect ratios of the object file relative to the surface, and on getting the target photo properly detected by the camera. The connection and proper conversion of APK files took many trials due to the rendering and processing speed of the app. In the end it took me 25 hours to achieve a successful prototype.
## Challenges I ran into
The hardest part was setting up the target image and detecting it. The problem was only discovered after 3 hours of brute-forcing: the image used to create the target image file had a significantly lower resolution than the printed one, which made the app's recognition sloppy and sometimes undetectable. Getting the app to work on the VR cardboard wasn't easy either. Due to pixel coagulation and many other rendering errors and bugs, the output on my Android device was drastically different from the simulations (which were perfectly normal). The object, most of the time, was hovering half a meter from the point of the target image and sometimes was nowhere to be seen. This was fixed by relocating the image points and changing the coordinates.
## Accomplishments that I'm proud of
This is my first hackathon and I can't believe how hard I have worked for this and finally completed my hack. The first day I was planning an entirely different hack involving fingerprints and security, but I ran into deep trouble when I realized that the hardware collection had a fingerprint scanner but not the modules that are absolutely necessary for it to even be used. I had almost given up. I had little, almost no experience in app development, but I started learning the whole concept and idea from scratch, slowly. Finally, when I realized what I could do with a VR hack, I pulled all-nighters until I could present something worthy of this hackathon.
## What I learned
This whole week has been very educational. I recently bought a MacBook and realized how easy it is to use the terminal on a Mac and to run Vuforia exclusively for making augmented reality; on the other hand, I had to pay for the Mac version of Adobe 123D Catch while the Windows one was free, and most importantly, a two-finger swipe across the touchpad cost me my whole night of writing a different version of this essay/cover letter. All jokes aside, I learned a lot about app design, the use of different APIs, and how to deal with bugs in app development. Nonetheless, I learned about many technologies I never knew existed, like galvanic vestibular stimulation.
## What's next for Virtual-Augmented Reality
This project is merely a prototype compared to the future this idea has. Upcoming photogrammetry technologies will give rise to higher-quality OBJ/3D files which, if incorporated into augmented reality using camera VR, could confuse anyone about what is real and what is fantasy. People could actually see their wildest dreams, bringing a new era to our society.
|
## Inspiration
We wanted to give virtual reality a purpose, while pushing its limits and making it a fun experience for the user.
## What it does
Our game immerses the user in the middle of an asteroid belt. The user is accompanied by a gunner, and the two players must work together to complete the course in as little time as possible. Player 1 drives the spacecraft using a stationary bike with embedded sensors that provide real-time input to the VR engine. Player 2 uses a wireless game controller to blow up asteroids and clear the way to the finish.
## How we built it
Our entire system relies on a FireBase server for inter-device communication. Our bike hardware uses a potentiometer and hall-effect sensor running on an Arduino to measure the turn-state and RPMs of the bike. This data is continuously streamed to the FireBase server, where it can be retrieved by the virtual reality engine. Player 1 and Player 2 constantly exchange game state information over the FireBase server to synchronize their virtual reality experiences with virtually no latency.
We had the option to use Unity for our 3D engine, but instead we used the SmokyBay 3D Engine (which was developed from scratch by Magnus Johnson). We chose to use Magnus' engine because it allowed us to more easily add support for FireBase and additional hardware.
## Challenges we ran into
We spent a large amount of time trying to arrive at the correct configuration of hardware for our application. In particular, we spent many hours working with the Particle Photon before realizing that its high latency makes it unsuitable for real-time applications. We had no prior experience with FireBase, and spent a lot of time integrating it into our project, but it ultimately turned out to be a very elegant solution.
## Accomplishments that we're proud of
We are most proud of the integration aspect of our project. We had to incorporate many sensors, 2 iPhones, a FireBase database, and a game controller into a holistic virtual reality experience. This was in many ways frustrating, but ultimately very rewarding.
## What we learned
In retrospect, it would have been very helpful to have a more complete understanding of the hardware available to us and its limitations.
## What's next for TourDeMarsVR
Add more sensors and potentially integrate Leap Motion instead of a handheld gaming pad.
|
losing
|
## Inspiration
Many individuals lack financial freedom, and this stems from poor spending skills. As a result, our group wanted to create something to help prevent that. We realized how difficult it can be to track the expenses of each individual person in a family. As humans, we tend to lose track of what we purchase and spend money on. Inspired, we wanted to create an app that stops all that by allowing individuals to strengthen their organization and budgeting skills.
## What It Does
Track is an expense tracker website targeting households and individuals with the aim of easing people’s lives while also allowing them to gain essential skills. Imagine not having to worry about tracking your expenses all while learning how to budget and be well organized.
The website has two key components:
* Family Expense Tracker:
The family expense tracker is the `main dashboard` for all users. It showcases each individual family member's total expenses while also displaying the expenses by category. Both members and owners of the family can access this screen. Members can be added to the owner's family via a household key, which only the owner of the family has access to. Permissions vary between members and owners: owners gain access to each individual's personal expense tracker, while members only have access to their own.
* Personal Expense Tracker:
The personal expense tracker is assigned to each user, displaying their own expenses. Users are allowed to look at past expenses from the start of the account to the present time. They are also allowed to add expenses with a click of a button.
## How We Built It
* Utilized the MERN (MongoDB, Express, React, Node) stack
* RESTful APIs were built using Node and Express, integrated with a MongoDB database
* The frontend was built with vanilla React and Tailwind CSS
## Challenges We Ran Into
* Frontend:
  + Connecting EmailJS to the help form
  + Retrieving specific data from the backend and displaying pop-ups accordingly
  + Keeping the theme consistent while also ensuring that the layout and dimensions didn't overlap or wrap
  + Creating hover animations for buttons and messages
* Backend:
  + Embedded objects were not being correctly updated - needed to learn about storing references to objects and populating the references
  + Designing the backend based on frontend requirements and the overall goal of the website
## Accomplishments We’re Proud Of
As this was everyone's first or second hackathon, we are proud to have created a functioning website with a fully integrated front end and back end.
We are glad to have successfully implemented pop-ups for each individual expense category that displays past expenses.
Overall, we are proud of ourselves for being able to create a product that can be used in our day-to-day lives in a short period of time.
## What We Learned
* How to properly use embedded objects so that any changes to the object are reflected wherever the object is embedded
* Using the state hook in ReactJS
* Successfully and effectively using React Routers
* How to work together virtually. It allowed us to not only gain hard skills but also enhance our soft skills such as teamwork and communication.
## What’s Next For Track
* Implement an income tracker section allowing the user to get a bigger picture of their overall net income
* Be able to edit and delete both expenses and users
* Store historical data to allow the use of data analysis graphs to provide predictions and recommendations.
* Allow users to create their own categories rather than the assigned ones
* Setting up different levels of permission to allow people to view other family member’s usage
|
## Inspiration
Around 43.3% of NFT users are victims of NFT fraud. To prevent this and benefit society, we created a publicly available website, nftlaundromat.tech, where you can see and track NFT fraudsters. This way, NFT fraudsters will hesitate to commit fraud in the future, knowing they would be publicly shamed. Thus, a healthier NFT space would be created.
## What it does
It pulls publicly available data from NFT wallets and extracts all the users who committed wash trading or rug pulling. We identified the fraudsters using a machine-learning graph-theory algorithm we developed based on past research papers. To clarify, a rug pull is a scam in which a crypto token is promoted via social media; after the price has been driven up, the scammer sells, and the price generally falls to zero. Wash trading, on the other hand, is when a buyer and seller dishonestly drive up the price of an NFT by selling the piece back and forth while only publicly reporting the first sale; the money and the NFT are returned to the original seller in the following exchange. Users can go to our website, read about each fraudster, and then shame the fraudster's social account with a simple click of a button.
## How we built it
We first read research papers on NFT rug pulling and wash trading. After that, we improved on their methods by reframing fraudster identification as a graph-theory problem. We extracted the data using SQL queries from transpose.io. After extracting the data, we ran our algorithm to identify all the fraudsters. We store the results in Firebase and display them to users in the frontend using JavaScript.
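A simplified Python sketch of the cycle-based wash-trading check using networkx; the transfer tuple format and the cycle-length cutoff are illustrative assumptions, not our tuned parameters or actual data model.

```python
import networkx as nx

def wash_trade_suspects(transfers, max_cycle_len=4):
    """Flag wallets that appear in short buy/sell cycles of the same NFT.

    `transfers` is assumed to be a list of (seller, buyer, token_id, price)
    tuples pulled from the on-chain data."""
    suspects = set()
    by_token = {}
    for seller, buyer, token_id, price in transfers:
        by_token.setdefault(token_id, []).append((seller, buyer))
    for token_id, edges in by_token.items():
        g = nx.DiGraph()
        g.add_edges_from(edges)
        # A cycle means the NFT eventually returned to an earlier owner --
        # the classic wash-trading fingerprint.
        for cycle in nx.simple_cycles(g):
            if len(cycle) <= max_cycle_len:
                suspects.update(cycle)
    return suspects
```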
## Challenges we ran into
There is very little research in the space of NFT rug pulling and wash trading. It took us a few hours to improve the algorithm and the accuracy of existing identification approaches. The algorithms described in research papers were unclear and did not work. To develop our own, we first had to optimize finding all the cycles in the graph, and then identify which trades fall within the standard deviation.
## Accomplishments that we're proud of
Teamwork and team energy were at a maximum, which helped us develop the project. The diversity of the team and our different backgrounds played a huge role in our accomplishment. In just 36 hours, we managed to improve on an algorithm from research that took a few years. Furthermore, each member of the team had to deal with parts of the project that were outside their comfort zone, which helped us learn a lot.
## What we learned
We learned that we can definitely continue to build and fully deploy this project. There are so many externalities that need to be taken into account to achieve a perfect accuracy score. We learned that APIs are not that easy to integrate into an application, and that graph theory comes in handy surprisingly often.
## What's next for NFT Laundromat
The next steps are:
* Improve UI/UX
* Identify more externalities and add them into the algorithm
* Train the algorithm through machine learning to tweak the parameters
* Market the product to reach a wider audience
|
## Inspiration
The first step of our development process was conducting user interviews with University students within our social circles. When asked of some recently developed pain points, 40% of respondents stated that grocery shopping has become increasingly stressful and difficult with the ongoing COVID-19 pandemic. The respondents also stated that some motivations included a loss of disposable time (due to an increase in workload from online learning), tight spending budgets, and fear of exposure to covid-19.
While developing our product strategy, we realized that a significant pain point in grocery shopping is the process of price-checking between different stores. This process would either require the user to visit each store (in-person and/or online) and check the inventory and manually price check. Consolidated platforms to help with grocery list generation and payment do not exist in the market today - as such, we decided to explore this idea.
**What does G.e.o.r.g.e stand for? : Grocery Examiner Organizer Registrator Generator (for) Everyone**
## What it does
The high-level workflow can be broken down into three major components:
1: Python (flask) and Firebase backend
2: React frontend
3: Stripe API integration
Our backend flask server is responsible for web scraping and generating semantic, usable JSON code for each product item, which is passed through to our React frontend.
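A minimal sketch of what one such scraping endpoint might look like; the vendor URL and CSS selectors are placeholders, since each store needed its own selectors and some required Selenium instead of plain requests.

```python
import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder vendor and selectors -- every store we scraped needed its own.
SEARCH_URL = "https://example-grocer.com/search?q={query}"

@app.route("/api/search")
def search():
    query = request.args.get("q", "")
    page = requests.get(SEARCH_URL.format(query=query), timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    products = []
    for card in soup.select(".product-card"):
        products.append({
            "name": card.select_one(".product-name").get_text(strip=True),
            "price": float(card.select_one(".price").get_text(strip=True).lstrip("$")),
            "store": "example-grocer",
        })
    return jsonify(products)
```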
Our React frontend acts as the hub for tangible user-product interactions. Users are given the option to search for grocery products, add them to a grocery list, generate the cheapest possible list, compare prices between stores, and make a direct payment for their groceries through the Stripe API.
## How we built it
We started our product development process with brainstorming various topics we would be interested in working on. Once we decided to proceed with our payment service application. We drew up designs as well as prototyped using Figma, then proceeded to implement the front end designs with React. Our backend uses Flask to handle Stripe API requests as well as web scraping. We also used Firebase to handle user authentication and storage of user data.
## Challenges we ran into
Once we had finished coming up with our problem scope, one of the first challenges we ran into was finding a reliable way to obtain grocery store information. There are no readily available APIs to access price data for grocery stores, so we decided to do our own web scraping. This led to complications with slower server response, since some grocery stores have dynamically generated websites, causing some query results to be slower than desired. Due to the limited price availability of some grocery stores, we decided to pivot our focus towards e-commerce and online grocery vendors, which allowed us to flesh out our end-to-end workflow.
## Accomplishments that we're proud of
Some of the websites we had to scrape had lots of information to comb through and we are proud of how we could pick up new skills in Beautiful Soup and Selenium to automate that process! We are also proud of completing the ideation process with an application that included even more features than our original designs. Also, we were scrambling at the end to finish integrating the Stripe API, but it feels incredibly rewarding to be able to utilize real money with our app.
## What we learned
We picked up skills such as web scraping to automate the process of parsing through large data sets. Web scraping dynamically generated websites can also lead to slow server response times that are generally undesirable. It also became apparent to us that we should have set up virtual environments for flask applications so that team members do not have to reinstall every dependency. Last but not least, deciding to integrate a new API at 3am will make you want to pull out your hair, but at least we now know that it can be done :’)
## What's next for G.e.o.r.g.e.
Our next steps with G.e.o.r.g.e. would be to improve the overall user experience of the application by standardizing our UI components and UX workflows with Ecommerce industry standards. In the future, our goal is to work directly with more vendors to gain quicker access to price data, as well as creating more seamless payment solutions.
|
partial
|
## Inspiration
Natural disasters happen all the time, and when there's no power or communication, it's hard to get help. This is what inspired our Respond application: the need to find and inform about people who need help. Having come through Houston's recent Hurricane Harvey, and now with the ongoing Hurricane Florence, one of our team members knows what natural disasters are like. We designed Respond to meet everyone's needs in order to better reach people and provide help.
## What it does
Respond helps first responders survey the severity and distribution of those in need of help after a natural disaster. Using drones, Respond can use audio and visual input to 1) rank severities of different patient situations, 2) present this information and audio messages to first responders, and 3) create a variety of options for the first responder to make sure everyone is helped.
## How we built it
We first used the RevSpeech speech-to-text API to translate audio files into sentences. We then used sentiment analysis to break down these sentences into their principal emotional components: sadness, fear, anger, joy, and disgust, plus overall sentiment. We extracted these features from several manually created training examples to train a neural network capable of assigning urgency levels to sentences.
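A small sketch of that final step, assuming the six sentiment scores are already extracted; the training rows, layer sizes, and 0-3 urgency scale are illustrative assumptions rather than our real training data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Feature order matches the emotions extracted above; rows and labels are made up.
FEATURES = ["sadness", "fear", "anger", "joy", "disgust", "sentiment"]

X_train = np.array([
    [0.1, 0.9, 0.2, 0.0, 0.1, -0.8],  # "Please help, the water is rising!"
    [0.6, 0.7, 0.1, 0.0, 0.0, -0.6],  # "We lost the house, my mother is hurt."
    [0.2, 0.3, 0.1, 0.1, 0.0, -0.2],  # "Roads are blocked but we are okay for now."
    [0.0, 0.1, 0.0, 0.7, 0.0,  0.6],  # "We're safe on the roof, all good."
])
y_train = np.array([3, 3, 1, 0])  # urgency level per example

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

def urgency(feature_vector):
    """Map one sentence's emotion scores to an urgency level responders can triage on."""
    return int(model.predict(np.array([feature_vector]))[0])
```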
## Challenges we ran into
On the backend, we ran into challenges when creating a quick and accurate model for machine learning. First responders need the information about severity before all else, so they can know which patients to prioritize; thus, speed was a huge priority to us when creating the neural network.
On the design side, we took a lot of time to empathize with first responders and figure out exactly what information they would need and when they would need it. Ideating and creating a way for first responders to take in large sums of detailed information on mobile screens was definitely a challenge, and was one we overcame by placing ourselves in the first responders’ boots.
## What's next for Respond
We hope to "respond" to judge and peer critique and be able to improve the app on future iterations. Respond has the potential to save first responders a lot of time and brain power when facing not only logistical, but also moral dilemmas. Next time an emergency official gets a call about someone who might need help, someone else will always Respond.
|
## Inspiration:
The app was born from the need to respond to global crises like the ongoing wars in Palestine, Ukraine, and Myanmar, which have made real-time, location-based threat awareness more critical than ever.
While these conflicts are often headline news, people living far from the conflict zones may lack an immediate understanding of how quickly conditions change on the ground. Our inspiration came from a desire to bridge that gap by leveraging technology to provide a solution that could offer real-time updates about dangerous areas, not just in warzones but in urban centers and conflict-prone regions around the world.
## How we built it:
Our app was developed with scalability and responsiveness in mind, given the complexity of gathering real-time data from diverse sources. For the backend, we used Python to run a Reflex web app, which hosts our API endpoints and powers the data pipeline. Reflex was chosen for its ability to handle asynchronous tasks, crucial for integrating with a MongoDB database that stores a large volume of data gathered from news articles. This architecture allows us to scrape, store, and process incoming data efficiently without compromising performance.
On the frontend, we leveraged React Native to ensure cross-platform compatibility, offering users a seamless experience on both iOS and Android devices. React Native's flexibility allowed us to build a responsive interface where users can interact with the heat map, see threat levels, and access detailed news summaries all within the same app.
We also integrated Meta LLaMA, a hyperbolic transformer model, which processes the textual data we scrape from news articles. The model is designed to analyze and assess the threat level of each news piece, outputting both the geographical coordinates and a risk assessment score. This was a particularly complex part of the development process, as fine-tuning the model to provide reliable, context-aware predictions required significant iteration and testing.
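A hedged sketch of that assess-and-store step; the inference endpoint, prompt wording, and response schema are assumptions standing in for our hosted LLaMA call, and the MongoDB connection string is a placeholder.

```python
import json
import requests
from pymongo import MongoClient

# Placeholder endpoint and connection string.
LLM_ENDPOINT = "https://example-inference-host/v1/completions"
threats = MongoClient("mongodb://localhost:27017")["danger"]["threats"]

PROMPT = (
    "Read the news article below. Respond with JSON containing "
    '"lat", "lon", and "threat_level" (0-10).\n\nArticle:\n{article}'
)

def assess_and_store(article_text):
    """Ask the model for coordinates plus a risk score, then persist the result
    so the heat map can render it. The response schema here is an assumption."""
    resp = requests.post(
        LLM_ENDPOINT,
        json={"prompt": PROMPT.format(article=article_text)},
        timeout=60,
    )
    assessment = json.loads(resp.json()["text"])
    threats.insert_one({**assessment, "article": article_text})
    return assessment
```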
## Challenges we faced:
The most pressing challenge was data scraping, particularly the obstacles put in place by websites that actively work to prevent scraping. Many news websites have anti-scraping measures in place, making it difficult to gather comprehensive data. To address this, we had to get creative with our scraping methods, using dynamic techniques that could mimic human-like browsing to avoid detection.
Another major challenge was iOS integration, particularly in working with location services. iOS tends to have stricter privacy controls, which required us to implement complex authentication mechanisms and permissions handling. Additionally, deploying the backend infrastructure presented challenges in ensuring that it scaled smoothly under heavy data loads, all while maintaining low-latency responses for real-time updates.
We also faced hurdles in speech-to-text functionality, as we aim to make the app more accessible by allowing users to interact with it via voice commands. Integrating accurate, multi-language speech recognition that can handle diverse accents and conditions in real-world environments is a work in progress.
## Accomplishments we're proud of:
Despite these challenges, we successfully built a dynamic heat map that allows users to visually grasp the intensity of threats in different geographical areas. The Meta LLaMA model was another major achievement, enabling us to not only scrape news articles but also analyze and assign a threat level in real time. This means that a user can look at the app, see a particular area highlighted as high risk, and read news reports with data-backed assessments. We've created something that helps people stay informed about their environment in a practical, visually intuitive way.
Moreover, building a fully functional app with both backend and frontend integration, while using cutting-edge machine learning models for threat assessment, is something we're particularly proud of. The app is capable of processing large datasets and serving actionable insights with minimal delays, which is no small feat given the technical complexity involved.
## What we learned:
One of the biggest takeaways from this project was the importance of starting with the fundamentals and building a solid foundation before adding complex features. In the early stages, we focused on getting the core infrastructure right—ensuring the scraping, data pipeline, and database were robust enough to handle scaling before moving on to model integration and feature expansion. This allowed us to pivot more easily when challenges arose, such as working with real-time data or adjusting to API limitations.
We also learned a great deal about the nuances of natural language processing and machine learning, especially when it comes to applying those technologies to dynamic, unstructured news data. It’s one thing to build an AI model that processes text in a controlled environment, but real-world data is messy, often incomplete, and constantly evolving. Understanding how to fine-tune models like Meta LLaMA to give reliable assessments on current events was both challenging and incredibly rewarding.
## What’s next:
Looking ahead, we plan to expand the app’s capabilities further by integrating speech-to-text functionality. This will make the app more accessible, allowing users to dictate queries or receive voice-based updates on emerging threats without having to type or navigate through screens. This feature will be particularly valuable for users who may be on the move or in situations where typing isn’t practical.
We’re also focusing on improving the accuracy and scope of our web scrapers, aiming to gather more diverse data from a broader range of news sources while adhering to ethical guidelines. This includes exploring ways to improve scraping from difficult sites and even partnering with news outlets to gain access to structured data.
Beyond these immediate goals, we see potential in scaling the app to include predictive analytics, using historical data to forecast potential danger zones before they escalate. This would help users not only react to current events but also plan ahead based on emerging patterns in conflict areas. Another exciting direction is user-driven content, allowing people to report and share information about dangerous areas directly through the app, further enriching the data landscape.
|
## Inspiration
In times of disaster, the capacity of rigid networks like cell service and internet dramatically decreases at the same time demand increases as people try to get information and contact loved ones. This can lead to crippled telecom services which can significantly impact first responders in disaster struck areas, especially in dense urban environments where traditional radios don't work well. We wanted to test newer radio and AI/ML technologies to see if we could make a better solution to this problem, which led to this project.
## What it does
Device nodes in the field network to each other and to the command node through LoRa to send messages, which helps increase the range and resiliency as more device nodes join. The command & control center is provided with summaries of reports coming from the field, which are visualized on the map.
## How we built it
We built the local devices using Wio Terminals and LoRa modules provided by Seeed Studio; we also integrated magnetometers into the devices to provide a basic sense of direction. We used Whisper for speech-to-text and Prediction Guard for summarization, keyword extraction, and command extraction, and we trained a neural network on Intel Developer Cloud to perform binary image classification distinguishing damaged from undamaged buildings.
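A minimal sketch of the speech side of that pipeline on the command node; the `summarize_report` helper is a placeholder standing in for the Prediction Guard call, not its real SDK.

```python
import whisper

# Loaded once at startup; "base" keeps latency reasonable on the command node.
stt_model = whisper.load_model("base")

def summarize_report(text):
    """Placeholder for the Prediction Guard summarization/keyword extraction call."""
    return text[:200]

def process_field_audio(wav_path):
    """Transcribe a field report and hand the text to the summarizer so the
    command & control map can show a short summary per report."""
    transcript = stt_model.transcribe(wav_path)["text"]
    return {"transcript": transcript, "summary": summarize_report(transcript)}
```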
## Challenges we ran into
The limited RAM and storage of microcontrollers made it more difficult to record audio and run TinyML as we intended. Many modules, especially the LoRa and magnetometer, did not have existing libraries so these needed to be coded as well which added to the complexity of the project.
## Accomplishments that we're proud of:
* We wrote a library so that LoRa modules can communicate with each other across long distances
* We integrated Intel's optimization of AI models to make efficient, effective AI models
* We worked together to create something that works
## What we learned:
* How to prompt AI models
* How to write drivers and libraries from scratch by reading datasheets
* How to use the Wio Terminal and the LoRa module
## What's next for Meshworks - NLP LoRa Mesh Network for Emergency Response
* We will improve the audio quality captured by the Wio Terminal and move the speech-to-text processing to the edge to increase transmission speed and reduce bandwidth use.
* We will add a high-speed LoRa network to allow for faster communication between first responders in a localized area
* We will integrate the microcontroller and the LoRa modules onto a single board with GPS in order to improve ease of transportation and reliability
|
losing
|
## Inspiration
Remember those Japanese game shows with human Tetris? Giant, moving walls would move towards contestants, and they would have to contort their bodies to fit through the holes in the wall or get knocked into the water! You can check out the format here (Click the photo to get taken to a YouTube video!):
[](https://www.youtube.com/watch?v=6ioiMXKpHxI)
We've been wanting to break into the MLH hardware lab to get our hands on a Leap Motion device for ages, and decided to make a virtual human Tetris for your hands!
## What it does
The player uses our *groundbreaking innovation* (a Leap Motion device duct-taped to the RBC-provided sleep masks) to attach the Leap Motion to their forehead. The player must move their hand to match the shape of the wall moving towards them in our Unity game. The more your hand overlaps with the wall, the more points you lose. The player with the lowest score wins!
**Hootsuite Social Game Version:**
Player one wears the Leap Motion, and also covers their eyes. Player two must tell them how to move their hand to get through the wall.
## How we built it
The game was built using Unity/C#. The shapes of the wall were created using Blender. We needed to interact with the Leap Motion API to calculate the user's score. This was done by calculating the position of all of the bones in the visible hands, and then calculating the amount of overlap with the wall shapes. So a player who only has one finger hit the wall will get a lower penalty than a player whose entire hand overlaps with the wall.
## Challenges we ran into
One of the hardest parts of this hackathon was figuring out how to calculate the score for the user. We had to work together as a team to conceptually figure out how we could calculate the overlap of the hand with the wall object.
Leap Motion exposes an engine to support contacts/collisions between the virtual hand objects and other physics objects within Unity, but we had the additional challenge of computing the "severity" of overlap. A player with 50% of their hand overlapping the incoming Tetris shape should receive a worse score than one who just has an overlapping pinky. We ended up calculating this severity based on the number of contacting digits, as the Leap API exposes the collision state of each finger/palm segment on the hand rig. In order to measure overlap continuously, we didn't use the visual object, but rather a virtual collision object that extends all the way to the user's hands at all times. We switch these collision objects as new Tetris pieces come into focus.
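The game itself is written in C# inside Unity, but the scoring idea is easy to illustrate in a few lines of Python; the segment names and penalty weights below are assumptions for illustration only, not the game's actual values.

```python
# Illustrative per-segment weights: a palm overlap is penalized more than a single finger.
SEGMENT_WEIGHTS = {"palm": 5, "thumb": 1, "index": 1, "middle": 1, "ring": 1, "pinky": 1}

def frame_penalty(contacts):
    """`contacts` maps each hand segment to True while it overlaps the oncoming
    wall's collision volume (the invisible volume extended toward the player)."""
    return sum(w for segment, w in SEGMENT_WEIGHTS.items() if contacts.get(segment))

def update_score(score, contacts, dt):
    """Accumulate penalty continuously for as long as any segment overlaps."""
    return score + frame_penalty(contacts) * dt
```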
Another challenge we had came from more of a "game design" perspective, where we had to carefully tune the scale and speed of the Tetris pieces, as well as the positioning and field of view of the game camera to give the perception of depth to the oncoming pieces and offer a comfortable level of challenge.
## Accomplishments that we're proud of
We are really happy with how this app turned out! None of us have any prior experience with the Leap Motion, and 2/3 of us have never done any game development or have any experience with Unity. Getting the scoring working was a close call, so we are glad we managed to finish that in time!
## What we learned
We learned lots about game development, and the possibilities and limitations of working with the Leap Motion device.
## What's next for Leap Tetris
We'd be excited to try to make a version of the Fruit Ninja game using the Leap Motion!
|
## Inspiration
We wanted to build a shooter that many friends could play together. We didn't want to settle for something that was just functional, so we added the craziest game mechanic we could think of to maximize the number of problems we would run into: a map that has no up or down, only forward. The aesthetic of the game is based on Minecraft (a game I admit I have never played).
## What it does
The game can host up to 5 players on a local network. Using the keyboard and the mouse on your computer, you can walk around an environment shaped like a giant cube covered in forest, and shoot bolts of energy at your friends. When you reach the threshold of the next plane of the cube, a simple command re-orients your character such that your gravity vector is perpendicular to the next plane, and you can move onwards. The last player standing wins.
## How we built it
First we spent a few (many) hours learning the skills necessary. My teammate familiarized themself with a plethora of Unity functions in order to code the game mechanics we wanted. I'm a pretty decent 3D modeler, but I've never used Maya before and I've never animated a bipedal character. I spent a long while adjusting myself to Maya, and learning how the Mecanim animation system of Unity functions. Once we had the basics, we started working on respective elements: my teammate the gravity transitions and the networking, and myself the character model and animations. Later we combined our work and built up the 3D environment and kept adding features and debugging until the game was playable.
## Challenges we ran into
The gravity transitions were especially challenging. Among a panoply of other bugs that individually took hours to work through or around, the gravity transitions were not fully functional until more than a day into the project. We took a break from work and brainstormed, and we came up with a simpler code structure to make the transition work. We were delighted when we walked all the way up and around the inside of our cube map for the first time without our character flailing and falling wildly.
## Accomplishments that we're proud of
Besides the motion capture for the animations and the textures for the model, we built a fully functional, multiplayer shooter with a complex, one-of-a-kind gameplay mechanic. It took 36 hours, and we are proud of going from start to finish without giving up.
## What we learned
Besides the myriad of new skills we picked up, we learned how valuable a hackathon can be. It is an educational experience nothing like a classroom. Nobody chooses what we are going to learn; we choose what we want to learn by chasing what we want to accomplish. By chasing something ambitious, we inevitably run into problems that force us to develop new methodologies and techniques. We realized that a hackathon is special because it's a constant cycle of progress, obstacles, learning, and progress. Progress stacks asymptotically towards a goal until time is up and it's time to show our stuff.
## What's next for Gravity First
The next feature we are dying to add is randomized terrain. We built the environment using prefabricated components that I built in Maya, which we arranged into what we thought was an interesting and challenging layout for gameplay. Next, we want every game to have a different, unpredictable six-sided map by randomly laying out the prefabs according to certain parameters.
|
## Inspiration
My inspiration for creating CityBlitz was getting lost in Ottawa TWO SEPARATE TIMES on Friday. Since it was my first time in the city, I honestly didn't know how to use the O-Train or even whether Ottawa had buses in operation or not. I realized that if there existed an engaging game that could map hotspots in Ottawa and ways to get to them, I probably wouldn't have had such a hard time navigating on Friday. Plus, I wanted to actively contribute to sustainability, hence the trophies for climate charities pledge.
## What it does
CityBlitz is a top-down pixelated roleplay game that leads players on a journey through Ottawa, Canada. It encourages players to use critical thinking skills to solve problems and to familiarize themselves with navigation in a big city, all while using in-game rewards to make a positive difference in sustainability.
## How I built it
* Entirely coded in Java using Swing (javax.swing)
* All 250+ graphics assets are hand-drawn using Adobe Photoshop
* All original artwork
* In-game map layouts copy real-life street layouts
* Buildings like the parliament and the O-Train station are mimicked from real-life
* Elements like taxis and street signs also mimic those of Ottawa
## Challenges I ran into
Finding the right balance between a puzzle RPG being too difficult/unintuitive for players vs. spoonfeeding the players every solution was the hardest part of this project. This was overcome through trial and error as well as peer testing and feedback.
## Accomplishments that we're proud of
Over 250 original graphics, a fully functioning RPG, a sustainability feature, and overall gameplay.
## What I learned
I learned how to implement real-world elements like street layouts and transit systems into a game for users to familiarize themselves with the city in question. I also learned how to use GitHub and DevPost, how to create a repository, update git files, create a demo video, participate in a hackathon challenge, submit a hackathon project, and pitch a hackathon project.
## What's next for CityBlitz
Though Ottawa was the original map for CityBlitz, the game aims to create versions/maps centering around other major metropolitan areas like Toronto, New York City, Barcelona, Shanghai, and Mexico City.
In the future, CityBlitz aims to partner with these municipal governments to be publicly implemented in schools for kids to engage with, around the city for users to discover, and to be displayed on tourism platforms to attract people to the city in question.
|
## Inspiration
Recently, security has come to the forefront of media with the events surrounding Equifax. We took that fear and distrust and decided to make something to secure and protect data such that only those who should have access to it actually do.
## What it does
Our product encrypts QR codes such that, if scanned by someone who is not authorized to see them, they present an incomprehensible amalgamation of symbols. However, if scanned by someone with proper authority, they reveal the encrypted message inside.
## How we built it
This was built using cloud functions and Firebase as our back end and a React Native front end. The encryption algorithm was RSA, and the QR scanning was handled by an open-source library.
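The app itself is React Native with Firebase cloud functions, but the core idea of an RSA-encrypted QR code can be sketched in a few lines of Python. The `cryptography` and `qrcode` libraries below are our choices for illustration, not the libraries used in the original app.

```python
import base64
import qrcode
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair for the authorized reader (in the real app this lives in the back end).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Encrypt the payload, then embed the ciphertext in a QR code image.
ciphertext = public_key.encrypt(b"secret shipping manifest #42", oaep)
qrcode.make(base64.b64encode(ciphertext).decode()).save("seqr.png")

# An unauthorized scanner sees only base64 gibberish; an authorized one decrypts:
print(private_key.decrypt(ciphertext, oaep))
```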
## Challenges we ran into
One major challenge we ran into was writing the back end cloud functions. Despite how easy and intuitive Google has tried to make it, it still took a lot of man hours of effort to get it operating the way we wanted it to. Additionally, making react-native compile and run on our computers was a huge challenge as every step of the way it seemed to want to fight us.
## Accomplishments that we're proud of
We're really proud of introducing encryption and security into this previously untapped market. Nobody to our knowledge has tried to encrypt QR codes before, and being able to segment the data in this way is sure to change the way we look at QR.
## What we learned
We learned a lot about Firebase. Before this hackathon, only one of us had any experience with Firebase and even that was minimal, however, by the end of this hackathon, all the members had some experience with Firebase and appreciate it a lot more for the technology that it is. A similar story can be said about react-native as that was another piece of technology that only a couple of us really knew how to use. Getting both of these technologies off the ground and making them work together, while not a gargantuan task, was certainly worthy of a project in and of itself let alone rolling cryptography into the mix.
## What's next for SeQR Scanner and Generator
Next, if this gets some traction, is to try and sell this product on the marketplace. Particularly for corporations with, say, QR codes used for labelling boxes in a warehouse, such a technology would be really useful to prevent people from gaining unnecessary and possibly damaging information.
|
## Inspiration
WristPass was inspired by the fact that NFC is usually only authenticated using fingerprints. If your fingerprint is compromised, there is nothing you can do to change your fingerprint. We wanted to build a similarly intuitive technology that would allow users to change their unique ids at the push of a button. We envisioned it to be simple and not require many extra accessories which is exactly what we created.
## What it does
WristPass is a wearable Electro-Biometric transmission device and companion app for secure and reconfigurable personal identification with our universal receivers. Make purchases with a single touch. Check into events without worrying about forgetting tickets. Unlock doors by simply touching the handle.
## How we built it
WristPass was built using several different approaches, since there are multiple parts to the project. The WristPass itself was fabricated using various electronic components. The companion app uses Swift to transmit and display data to and from your device. The app also plugs into our back end to grab user data and information. Finally, our receiving plates are able to handle the data in any way they want after the correct signal has been decoded. From here we demoed unlocking a door, checking in at a concert, and paying for a meal at your local subway shop.
## Challenges we ran into
By far the largest challenge we ran into was properly receiving and transcoding the user’s encoded information. We could reliably transmit data from our device using an alternating current, but it became a much larger ordeal when we had to reliably detect these incoming signals and process the information stored within. In the end we were able to both send and receive information.
## Accomplishments that we're proud of
1. Actually being able to transmit data using an alternating current
2. Building a successful coupling capacitor
3. The vast application of the product and how it can be expanded to so many different endpoints
## What we learned
1. We learned how to do capacitive coupling and decode signals transmitted from it.
2. We learned how to create a RESTful API using MongoDB, Spring and a Linode Instance.
3. We became more familiarized with new APIs including: Nexmo, Lyft, Capital One’s Nessie.
4. And a LOT of physics!
## What's next for WristPass
1. We plan on improving security of the device.
2. We plan to integrate Bluetooth in our serial communications to pair it with our companion iOS app.
3. Develop for Android and create a web UI.
4. Partner with various companies to create an electro-biometric device ecosystem.
|
# QaRd
#### (pronounced like "QR'd", or just "card", we aren't picky!)
## Inspiration
Everything is being digitized nowadays, yet people still hand out paper business cards. It is also a hassle to manually add new contacts to your phone, or to search up new people you meet on Facebook, Instagram, Twitter, etc. We wanted to make an easy way for people to connect by simply pulling out their smartphones.
## What it does
**Quickly view other people's QaRds:**
Just open the app, use the built in QR code scanner, and scan another person's QaRd. It will automatically show all of their QaRd information so you can quickly connect with them.
**Create your own QaRds:**
Each person can create multiple QaRds, each tailored to a different purpose.
For example:
* gamer tags for gaming platforms like Steam, Blizzard, etc.
* social media for when you want to add new people you meet
* or simply your name, email address, and phone number
As soon as you scan another person's QaRd, all of the information will pop up on your screen and you can easily access them.
## How we built it
At its core, it is an iOS app built with Swift as the front-end.
The QR code image was generated using an online API.
Standard Library was used to create and deploy the API. All of the API calls made by the app go through an endpoint that was coded up in JavaScript.
To store all of the QaRds, we used Google's Firebase and Firestore to store users and their QaRd information, in the form of JSON objects.
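As a rough illustration of the storage model, a QaRd document might be written and read like this with the `google-cloud-firestore` Python client. The collection layout and field names here are assumptions for illustration; the real endpoint is JavaScript deployed on Standard Library.

```python
from google.cloud import firestore  # pip install google-cloud-firestore

db = firestore.Client()

# Each user can own several QaRds, stored as plain JSON-like documents.
qard = {
    "label": "Gaming",
    "fields": {"steam": "coolgamer42", "discord": "coolgamer#0042"},
}
db.collection("users").document("alice").collection("qards").document("gaming").set(qard)

# Scanning another person's QaRd resolves the document encoded in the QR code.
snapshot = db.collection("users").document("alice").collection("qards").document("gaming").get()
print(snapshot.to_dict())
```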
## What we learned
None of us had worked with QR codes before, so we learned a lot about how they can be used, their limitations, and how they can be generated. We also learned a lot from using Standard Library and Firebase Firestore.
## What's next for QaRd
**Saving other QaRds:**
We want to be able to store a person's QaRd and save it as soon we scan it so they can be revisited.
**Authentication:**
Right now, we don't have a way to authenticate users and allow or disallow people into other people's database
documents. To prevent malicious activity, we would need to authenticate people in the back end.
**Have the QR codes expire:**
For security, ideally, the QR codes should be regenerated every once in a while so old ones can't be spread around.
**Functionality:**
It should be hooked up to different social media apps such as Facebook, Instagram, and Twitter, using their APIs so
that we can connect with them in a single click
|
## Inspiration
COVID-19 has hurt many parts of society and seriously impacted the way we go to school and learn. As a French Immersion student who grew up in an incredibly Anglophone community, I was upset to learn that many French Immersion students in my hometown have been forced to drop out since the program is not available online there.
The idea behind the app is to offer a way to practice some French skills from home or remotely, similarly to the drills we did in my French classes growing up.
## What it does
This app is a game to help students learn French verb conjugations. Users can select one of four verb tenses (Présent, Passé composé, Imparfait, and Futur) and a chosen time limit (1, 2, or 5 minutes).
During the game, the app provides verbs and pronouns to use for conjugation (for example, conjugate "demander" for "il"). The user must input the correct answer to earn points. My idea is that this could occur in a group setting, with the student that gets the most points in a time period winning.
The game also tracks users' performance on each tense locally. Their results are viewable on a stats page that allows a user to see their performance change over time as they develop each tense.
## How I built it
The game is developed on Android Studio in Java. It uses a series of .CSV files I prepared for each verb tense that include the verb and its conjugations for each pronoun, pulling verbs and choosing pronouns pseudo-randomly from the files.
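The app itself is Java, but the selection logic is simple enough to sketch in Python: each row of a tense's CSV holds the infinitive followed by its conjugations, and a verb/pronoun pair is drawn pseudo-randomly per question. The file layout and names below are assumptions for illustration only.

```python
import csv
import random

PRONOUNS = ["je", "tu", "il/elle", "nous", "vous", "ils/elles"]

def draw_question(csv_path):
    """Pick a random verb row and pronoun; return the prompt and expected answer."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))             # assumed row: [infinitive, conj1..conj6]
    row = random.choice(rows)
    idx = random.randrange(len(PRONOUNS))
    return f"Conjugate '{row[0]}' for '{PRONOUNS[idx]}'", row[idx + 1]

prompt, answer = draw_question("present.csv")
print(prompt)   # e.g. "Conjugate 'demander' for 'il/elle'"
```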
Testing of the app was performed on two emulators (a Google Pixel 2 phone and Pixel C tablet) run through Android Studio.
## Challenges I ran into
There were a variety of challenges throughout the project. For me, the first issue to overcome was creating a good idea. I was most interested in addressing the COVID-19 Education challenge, but in the weeks prior to the hackathon I had a difficult time coming up with exactly what I wanted. I only came up with this idea a few hours before the hackathon started, which may have set me back compared to other teams that planned far in advance.
The first roadblock was crashing issues when selecting the verb from the CSV file. A built-in function I was going to use was constantly returning an empty string array, rather than correctly splitting each row. I ended up having to create my own CSV reader class that was able to fix this issue.
Another issue was with the timer. When a user would leave the game (i.e return to the main screen), the timer would continue to run and the puzzle would remain active until they returned. I learned about the lifecycle of an Activity, and was able to override this behaviour with the onPause() and onResume() methods that I had never really used before.
## Accomplishments that I'm proud of
This is my first Hackathon, so I'm very proud that I was able to come up with and create an idea, then fully implement it. I'm normally not a super creative person, but thinking about the challenges that face French Immersion students in COVID-19 helped me to create this idea that I was passionate about.
I'm very proud with the result, especially considering it took about 24 hours to produce. With further refinement, I'm sure this app can continue to improve and be something useful for some people.
## What I learned
I learned more about mobile development for Android, such as how to use a Handler to run commands in the future and to enqueue actions to be performed on other threads. This helps to make mobile applications more versatile than they normally would be. This could be used to introduce a timeout ability, for example.
I learned about the Android OS as well, and how we can store data in the system. I used a flexible approach for storing users' attempts and correct answers that would make it very easy to expand the game in the future.
## What's next for Con-jeu-guez!
The game can continue to be expanded by adding new verb tenses and verbs. The user experience could also be improved by adding sound effects, such as when a user inputs a correct/wrong answer. I think some Kahoot music would also be a nice touch...
It may also be a good idea to replace the SharedPreferences method of storing data with Firebase, so that the data would persist even if a user uninstalls and reinstalls the app.
I currently intend on releasing this to Google Play in the near future and can then share it with my network, some of whom are studying to become French teachers.
|
## Inspiration
We were inspired by our lack of engagement with certain school activities, and we saw an opportunity to embrace the use of phones in class for educational purposes.
## What it does
We offer a platform for anyone to make a scavenger hunt and then host it. By entering a game code and a username on a phone app, you are able to take part in a scavenger hunt. You are given a clue to the next location, and once you get within a certain radius of the location, you will be given the next clue. Your time is your score, and the first to finish is the winner.
## How I built it
We constructed the app using Android Studio in Java, utilizing the Google Maps SDK for Android to determine the user's location and generate a map. We also used a custom JSON style file to tailor the map graphics. In order to store game data, we implemented Google Firebase and Google Cloud in our project. In a further iteration of our app, we would also use this for photo recognition.
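The app gets its location updates from the Google Maps SDK in Java, but the "within a certain radius" check boils down to a great-circle distance comparison, sketched here in Python. The threshold and coordinates are made up for illustration.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def reached_clue(user, clue, radius_m=30):
    """Unlock the next clue once the player is within radius_m of the target."""
    return haversine_m(*user, *clue) <= radius_m

print(reached_clue((45.4231, -75.6831), (45.4233, -75.6829)))  # True, roughly 30 m apart
```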
## Challenges I ran into
This was our first time creating an Android app, and our first time using many of the APIs. Although we had all used Java before, we started off in Kotlin, a language we were less familiar with, and then made the switch partway through the hackathon.
## Accomplishments that I'm proud of
Finishing a working model of the application with a functioning UI, databasing, and processing within 36 hours.
## What I learned
We learned how to use various APIs, how to build Android apps, and React from workshops we attended.
## What's next for go!
A nicely formatted teacher's app to be able to create and host scavenger hunts would be something we would spend time to flesh out, along with new modes for guided museum tours and a photo hunt using an AI database. We would also add support for Apple devices and tablets.
|
# Relive and Relearn
*Step foot into a **living photo album** – a window into your memories of your time spent in Paris.*
## Inspiration
Did you know that 70% of people worldwide are interested in learning a foreign language? However, the most effective learning method, immersion and practice, is often challenging for those hesitant to speak with locals or unable to find the right environment. We set out to solve this problem by letting you immerse yourself in memories, even experiences you yourself may not have lived. While practicing your language skills and getting personalized feedback, enjoy the ability to interact and immerse yourself in a new world!
## What it does
Vitre allows you to interact with a photo album containing someone else's memories of their life! We allow you to communicate and interact with characters around you in those memories as if they were your own. At the end, we provide tailored feedback and an AI-backed DELF (Diplôme d'Études en Langue Française) assessment to quantify your French capabilities. Finally, it makes learning languages fun and effective by encouraging users to learn through nostalgia.
## How we built it
We built all of it in Unity, using C#, and leveraged external APIs to make the project happen.
When the user starts speaking, we use OpenAI's Whisper API to transform speech into text.
Then, we fed that text into co:here, with custom prompts so that it could role play and respond in character.
Meanwhile, we are checking the responses by using co:here rerank to check on the progress of the conversation, so we knew when to move on from the memory.
We store all of the conversation so that we can later use co:here classify to give the player feedback on their grammar, and give them a level on their french.
Then, using Eleven Labs, we converted co:here’s text to speech and played it for the player to simulate a real conversation.
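The game runs this pipeline in C# inside Unity; the sketch below is a hypothetical Python equivalent of one conversational turn using the OpenAI and Cohere SDKs, with the Eleven Labs text-to-speech step stubbed out. The character prompt, key placeholder, and helper are invented for illustration.

```python
import cohere
from openai import OpenAI

openai_client = OpenAI()                # assumes OPENAI_API_KEY is set in the environment
co = cohere.Client("YOUR_COHERE_KEY")   # placeholder key

CHARACTER_PROMPT = ("Tu es un boulanger parisien dans un souvenir de 1998. "
                    "Réponds toujours en français, en restant dans ton rôle.")

def memory_turn(audio_path):
    # 1. Speech -> text with Whisper.
    with open(audio_path, "rb") as f:
        user_text = openai_client.audio.transcriptions.create(
            model="whisper-1", file=f).text

    # 2. Role-played French reply via Cohere chat.
    reply = co.chat(message=f"{CHARACTER_PROMPT}\n\nJoueur : {user_text}").text

    # 3. Text -> speech: Eleven Labs in the real game, stubbed here.
    print("[TTS]", reply)
    return user_text, reply
```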
## Challenges we ran into
VR IS TOUGH – but incredibly rewarding! None of our team knew how to use Unity VR and the learning curve sure was steep. C# was also a tricky language to get our heads around but we pulled through! Given that our game is multilingual, we ran into challenges when it came to using LLMs, but we were able to use prompt engineering to generate suitable responses in our target language.
## Accomplishments that we're proud of
* Figuring out how to build and deploy on Oculus Quest 2 from Unity
* Getting over that steep VR learning curve – our first time ever developing in three dimensions
* Designing a pipeline between several APIs to achieve the desired functionality
* Developing functional environments and UI for VR
## What we learned
* 👾 An unfathomable amount of **Unity & C#** game development fundamentals – from nothing!
* 🧠 Implementing and working with **Cohere** models – rerank, chat & classify
* ☎️ C# HTTP requests in a **Unity VR** environment
* 🗣️ **OpenAI Whisper** for multilingual speech-to-text, and **ElevenLabs** for text-to-speech
* 🇫🇷🇨🇦 A lot of **French**. Our accents got noticeably better over the hours of testing.
## What's next for Vitre
* More language support
* More scenes for the existing language
* Real time grammar correction
* Pronunciation ranking and rating
* Change memories to different voices
## Credits
We took inspiration from the indie game “Before Your Eyes”, we are big fans!
|
## Inspiration
Most chatbots are boring. Often, they feel alien and inhuman. Our team wanted to explore how AI and sentiment analysis could improve this experience.
## What it does
This chatbot uses sentiment analysis to get the gist of the user's inputs and replies accordingly.
## How we built it
We used React.js to build the app, with HTML and CSS to create the aesthetic.
## Challenges we ran into
We ran into plenty of challenges. Initially we wanted to create our own model in TensorFlow; however, we switched to using the MonkeyLearn and React Chatbot Kit APIs.
## Accomplishments that we're proud of
We are proud of the friends we made along the way and completing a project we are proud of! :)
## What we learned
We learned that anything can be achieved with teamwork and perseverance :) Also that backup plans are important.
## What's next for Fren-On-A-Phone
We hope that future chatbots become much more user friendly and apply sentiment analysis to interact with users in a friendlier way.
|
# Summary
**Just two weeks ago, two devastating earthquakes struck Turkey and Syria, leaving 46,000 dead and many more lost under the remains of buildings.**
**Emergency response efforts were significantly impeded by inefficient recovery protocols to screen the large area of land affected, depleting the chances of survival for those still trapped under buildings.**
**This urgent crisis demands an effective solution. We present Aziz, a technological device that screens for humans still trapped under debris, empowers rescue workers on their mission to save lives, and revolutionizes our preparedness for future crises.**
# Inspiration
On February 6th, 2023, two earthquakes of magnitude 7.8 and 7.5 struck Turkey and Syria, causing the collapse of over 6,000 buildings. Given that these earthquakes took place in the middle of the night, very few had time to escape, trapping many underground without a route to safety.
Since that night, many rescue workers have been mobilized to dig through the rubble in search of remaining victims. However, these rescue efforts have been extremely time-consuming and labor-intensive, which have compounded the consequences for those still trapped underground, waiting with waning hope and praying that someone will hear their feeble voices.
One of our team members has lost both family and friends living in Adana, Turkey, to these earthquakes. Those who have survived from her home city have been forced to evacuate to other parts of the country, causing a prosperous region to convert to what resembled a waste zone overnight. Our solution Aziz was motivated by the devastating state that Turkey and Syria are now in and aims to respond to the harsh lessons learned from this incident.
One of the major factors that impeded an effective response to earthquake recovery was the inefficient and demanding process that was needed in order to identify bodies amongst the rubble. With over 6,000 buildings collapsed and each one requiring either an overwhelming number of hours or highly advanced construction equipment to dig through, there was no feasible method to rescue victims in time. As a result, groups of volunteers resorted to waiting in silence, listening for a voice to emerge from under the rubble, and upon hearing a voice digging with construction tools, pieces of debris, or in some cases, just their hands. This process was repeatedly followed until potential victims could be recovered, alive or not.
As rescue workers persisted day and night with this labor-intensive process, victims still trapped under buildings were forced to wait in agony as they lost strength and hope that someone would come to the rescue. Even at the present moment, rescue workers continue to dig through remnants of buildings, with many bodies discovered daily. This crisis has blatantly exposed many technological and ethical limitations that pushed this situation to be even worse than it already was.
Aziz seeks to provide the technological capacity needed for rescue workers to answer the hopes and prayers of trapped victims wishing to see sunlight once again. The name “Aziz” itself is a Turkish word that means “dear,” representing the care that unites communities in times of emergency. Our platform complements the care that is shared between communities by equipping them with robust technology to respond to times of crisis and to identify humans still trapped underground.
Still, the devastating situation in Turkey is not a one-time event. Natural disasters and international crises are bound to occur yet again, each time testing our preparedness and capacity to respond. Through Aziz, we lay a technological foundation to redefine how crises are dealt with and bring equity, security, and safety as we let our endearing care unite us closer, especially during emergencies.
# What Aziz does
Aziz is an all-in-one, comprehensive monitoring system that integrates various metrics to assess signs of human life from beneath rubble, allowing rescue workers to find victims trapped underground. First, our sensing technology contains a sensitive voice detection system to identify voices originating from under the rubble that may be difficult for the human ear to hear. Furthermore, our system contains a carbon dioxide monitor, of which elevated levels measured through ppm-range changes in atmospheric CO2 composition indicate respiration. Our platform also integrates an altitude sensor for gauging the depth of potential victims with respect to sea level to a high degree of accuracy. Finally, our system harnesses user-controlled ultrasonic sensors that scan across areas and provides real-time information on the 3D landscape of a rescue worker’s surroundings, even in complete darkness. Ultimately, these four capabilities work in complement with one another to support rescue teams as they try to identify signs of life amongst difficult terrain and in dangerous environments.
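The device itself runs C++ on the Arduino; purely as an illustration of the decision logic, a Python sketch of how three of the readings might be fused into a coarse alert could look like the following. Every threshold below is a hypothetical placeholder, not a calibrated value from the project.

```python
def life_signal(audio_rms, co2_ppm, baseline_co2_ppm, distance_cm):
    """Combine raw sensor readings into a coarse 0-3 'signs of life' score.

    All thresholds are illustrative placeholders and would need field calibration.
    """
    score = 0
    if audio_rms > 0.05:                      # faint voice detected above the noise floor
        score += 1
    if co2_ppm - baseline_co2_ppm > 150:      # local CO2 elevated vs. ambient baseline
        score += 1
    if distance_cm < 200:                     # ultrasonic ping sees a nearby cavity/object
        score += 1
    return score

print(life_signal(audio_rms=0.08, co2_ppm=620, baseline_co2_ppm=420, distance_cm=90))  # 3
```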
While Aziz provides the technological foundation for a robust sensing system with responsive and live data on metrics including audio, respiration levels, altitude, and surrounding landscape, this sensor system also carries societal implications that can be realized during rescue missions. Most simply, Aziz can be used as a compact and portable device that can be attached to and controlled by rescue workers digging through rubble. For example, if a rescue worker comes across a small opening too small for them to fit through while crawling through tunnels under rubble, they may use this device to survey the area that they are not able to reach and decide whether to initiate efforts in that direction. Alternatively, Aziz, which is lightweight and small in size, can be integrated with drones for rapid scanning over large areas. Aziz could be assembled onto drones and first used to scan over buildings and then survey inside buildings using an additional LED to light up its surroundings and provide stability in its flying movement. Finally, Aziz could be coupled with small robots that can be fished inside small openings and retrieved from deep underground. While the possibilities for implementation of Aziz are broad and well-defined, even on its own, this technology is a powerful and impactful device critical to rescue missions.
However, beyond just the technological level, Aziz works on a societal level to provide an effective solution that empowers communities under crisis, especially those with low access to advanced technology or financial flexibility. Aziz is particularly impactful in developing or underprivileged regions as it has low-resource and low-cost capabilities. We designed this platform to operate without WiFi, which alleviates a significant burden in developing areas and immensely widens its potential for impact. Moreover, Aziz is very inexpensive as it is composed of standard, low-cost parts assembled onto a 3D printed scaffold, all of which sum to a small price that will ensure that all have access to this critical tool. With its technologically robust and societally impactful capabilities, Aziz provides an effective solution to disaster preparedness, which is a significant concern that carries consequences beyond the situation in Turkey and that will continue to impact future generations.
# How we built Aziz
Aziz was built using an Arduino microcontroller and complementary modules. The main part of the circuit is the Arduino MKR WiFi 1010, which is connected to an Arduino MKR IoT board equipped with several built-in sensors for temperature, humidity, barometric pressure, gas (air quality, VOC), ambient light, and orientation (gyroscope). This combination provides the device with sufficient computational power and access to many useful sensors while maintaining a compact build. Initially, the team planned to implement the preliminary detection via drone using the RCWL-0516 microwave radar module for Arduino; however, hardware limitations led us to use the HC-SR04 ultrasonic sensor as a complementary device that can serve as a temporary alternative. The 3D-printed scaffold holds all the components together, with an attached SG90 servo motor holding and directing the HC-SR04 ultrasonic sensor on one side and the joystick on the other. The electrical circuit was soldered to a common board and connected to the computer to upload code and test in C++ using the Arduino IDE.
# Challenges we ran into
**Technological limitations:** The first challenge we faced was the limited variety of sensors and other hardware that could be used to generate inputs for screening for signs of life. After reading into the literature, we decided that RCWL-0516 microwave radar, which can sense heartbeat and heart rate through walls, would be most suited to our needs, but were unable to obtain this. Hence, we chose the next best alternative, an ultrasonic sensor, which still provided similar insight into spatial organization in the dark. Nonetheless, in the future, it would be possible to implement alternatives like RCWL-0516 microwave radars at any point without dramatic impact on the weight of the device.
**Processing external metrics for detecting the likelihood of life:** While we were able to employ various data-collecting sensors to gather external information on abiotic factors, the process of converting these abiotic metrics to biotic predictions was challenging. We particularly struggled while trying to determine a threshold for carbon dioxide ppm concentration that was indicative of respiration. We had to do extensive reading of scientific literature to understand carbon dioxide levels across various terrains before deciding upon a threshold value that separated carbon dioxide levels in outdoors places without human inhabitants from carbon dioxide levels of indoor and outdoor human-inhabited places.
**Practical limitations:** In order to keep our solution low-cost, low-resource, and hence widely accessible, we placed additional constraints on ourselves during the design process to ensure the final product would meet these visions. For example, in order to ensure that our system could work without a WiFi connection, we decided to take a hardware approach where all code could be loaded onto a microcontroller and used without compromised impact in remote regions, rather than developing a WiFi-reliant website.
# Accomplishments that we're proud of
**An interdisciplinary approach to designing a technological solution:** In order to develop our final product, we drew from software coding skills in C++, 3D modeling skills in Fusion 360, societal knowledge of the state of Turkey and Syria’s crisis, and more. We are happy to see how we were able to draw from different skill sets to develop a cohesive solution that is a product of interdisciplinary collaboration. Our multidisciplinary approach allowed us to come up with a more comprehensive solution that embeds essential knowledge from different fields, and we’re happy to see how these all came together in the end to support a more robust final product.
**Going from strangers to a tightly knit team:** Prior to coming to TreeHacks, none of us had met in person and we were barely familiar with one another. However, after 36 hours of hacking together, we have formed a closely knit, collaborative team and feel very close with one another. Without knowing whether we’d get along, we were open to each others’ ideas and willing to take risks, allowing us to foster effective collaboration both when our ideas agreed and when our ideas differed.
**Integrating interests and strengths:** We are proud that our final product is a mosaic of everyone’s interests and strengths. While we integrated Sam’s strengths in 3D modeling and Dilnaz’s interests developing biomedical models from abiotic data, we coupled these with Selin’s interest in designing tools for identifying earthquake victims trapped under rubble. When we look at our final design, we see a reflection of our own ideas and visions as well as those of our teammates’, each time being able to pinpoint how ideas were proposed and how they developed through collaboration to become a part of this final mosaic.
# What we learned
Throughout this weekend, we learned how to overcome challenges by optimizing our product design path. As we faced technological limitations, we creatively brainstormed suitable alternatives that would allow us to preserve the initial project vision but reach that vision through an alternate path, whether it be replacing radio wave sensors with an ultrasound sensor to establish a proof-of-concept model or 3D-printing a scaffold to hold the various electronic components together rather than leave them connected by flimsy wires. Additionally, even when we achieved our general vision, we still performed iterations of testing to find alternate approaches that potentially worked even better. For example, as we were implementing our ultrasonic sensor to detect distances and outline the surrounding landscape, we initially implemented auditory signals whose frequency increased as the sensor approached the nearest object. Even though this achieved our fundamental vision of providing a readout in response to distance from objects, we realized that the mix of auditory signals with visual signals displayed on the user interface created too many senses for the user to focus on, so we decided to just represent distance readouts using the visual user interface.
Additionally, given that we were under an extreme time constraint this weekend, we learned the importance of fully thinking through ideas early on before diving headfirst into the build phase. We learned that 5 minutes of early brainstorming can save 5 hours down the road and that fully fleshing out ideas gives a stronger team vision and paves a clearer development path. We particularly experienced this when deciding on how to implement our product; we realized we could either pursue a drone add-on or a system connected to robotic platforms that would be used like a fish hook in small cracks. Uncertainty on how to approach this decision as we were constructing created some hesitation and we realized that the best course of action at that time was to thoroughly address the decision before moving forward with a half-clear idea in mind. Once we discussed and came to a conclusion, we felt much more confident in development and were able to resume at a quicker pace than before, achieving a more cohesive vision at the end.
# What's next for Aziz
We plan to implement thermal infrared sensors, which we were unable to obtain due to technological limitations, to replace our ultrasonic sensors. Infrared imaging will enable us to capture body heat signatures, drawing more precise conclusions on the presence and location of humans. In addition, adding a variety of inputs able to detect a person breathing at close range will increase the overall accuracy of prediction. Possible targets are acetone, ammonia, and isoprene, metabolic tracers emitted by human breath and skin, all of which have precedent in the literature as markers for the presence of humans.
Another important next step is creating real-world impact through implementation. We see the future of the project as a device that can be dropped into the rubble from a drone that has its own microwave/thermal IR sensors to detect possible life signals within an area. Rescuers can use drones to deliver Aziz deep inside the rubble, to places humans can't reach. This could be achieved by pairing our sensor system with a spherical robot, specifically the polyhex edge skeleton, which has adjustable sides/legs capable of maneuvering these electronic components across obstacles inside buildings. The adjustable polyhex design will make sure that Aziz can move inside the rubble and transform into different shapes depending on the environment. The shapes would be determined through pressure on the legs and would allow thorough screening of building remnants before rescue workers' labor-intensive efforts.
# Ethics
The situation in Turkey and Syria was a very large demonstration of an ethical crisis in that those who lived in more remote regions were unable to receive the life-saving support they needed—including heavy duty construction equipment, search and rescue teams, and medical attention. As a result, depending on the regions in which they lived, certain groups of people were more likely to be rescued in a quick enough time to still be found alive, as opposed to others who were less fortunate.
This ethical crisis creates a need for more equitable emergency recovery protocols that not only provide equal treatment depending on location and status, but that also provide equal chances of being saved instead of equal chances of not being found. Through technological innovation, we can remedy these ethical dilemmas and move toward a more equitable future, although development of these fair technologies will also require more ethical considerations. Our proposed life sensing system has both positive and negative ethical implications, each of which must be thoroughly considered to ensure that the platform reaches its intended goal rather than amplify any unintended consequences.
**Ethical implications of Aziz:**
* At the highest level, Aziz is playing a foundational role in determining whether or not lives are saved. If not advertised or implemented properly, this aspect could create major repercussions for Aziz, especially in the situation where Aziz fails to detect humans and causes search and rescue teams to overlook those victims (type II error). To carefully navigate this ethical concern, we will first be very deliberate during marketing efforts to clearly state that we are purely a data collecting platform and that we make no ardent statements on our ability to save the lives of those trapped under rubble, as this could lead us into consequences where we receive blame for overlooking humans in need of saving. Second, to address this ethical challenge, we will develop our platform to be as objective as possible; we will give explicit data values when possible instead of making any human-influenced subjective statements and will ensure that every indication has a quantitative basis.
* Moreover, our platform needs to be carefully reviewed to prevent biases in function and output, which favors the survival of some victims over others. Although we have a CO2 sensor that is read out as either too high or too low, the question becomes what is this CO2 threshold with respect to? What part of the globe? How applicable is it to other parts? Given that these thresholds will vary from region to region, we need to ensure that we either use thresholds that are inherently bias-free or remove these thresholds and purely reflect quantitative values. One particularly successful way in which our system avoids preferential search and rescue is due to the fact that Aziz doesn’t rely on WiFi. Because WiFi access can be highly variable in remote parts of the world, we’ve specifically designed our system to operate WiFi-free and in a wireless manner, ensuring that WiFi availability does not create an ethical issue of unequal accessibility.
* Another ethical risk to consider are the political implications of this device, given tensions in cross-national relationships. Since this product has been designed and developed in the US, when it is implemented abroad, it may imply messages about the US political system or create resentment toward the country. For example, if the device were to miss a person trapped under rubble, this could deflect blame onto the US and heighten political tensions between other countries. Likewise, depending on where this product is implemented, this could also suggest political inclinations of the US and create tension between nations. If this technology were to reach Syria in support of efforts to save the lives of those trapped under rubble, an initiative which happens to currently be spearheaded in rebel regions by the White Helmets, the Syrian government could see this as a threat and further expand its resentment toward the US.
* Because our platform collects data from its surroundings, there could be data privacy concerns and cases of the unintentional collection of sensitive data. The microphone on our system detects audio signals to determine if voices are present, but this could pick up on voices of people who do not consent to it. Since surveying everyone in the region to verify consent of the use of this technology conflicts with the purpose of uncovering hidden victims, we may need to remove certain functionalities due to this or use our platform solely for real-time data monitoring without involving any data storage.
Discussing these ethical considerations has been one of our early steps in ensuring that we do not create unintended ethical implications. The next steps in addressing ethical concerns will involve two aspects: product design and marketing. With respect to product design, although we already have features implemented to bolster the ethical side of our device, there are further revisions we can make. For example, we could replace the carbon dioxide sensor output from “high” or “low” levels to be a spectrum of values or to provide simply the quantitative value of carbon dioxide content in the atmosphere. Second, when it comes to marketing our platform, we need to cleverly develop a marketing strategy that doesn’t overpromise its benefits to users. Rather than having the slogan of rescuing the lives of those trapped under collapsed buildings, we will focus on objective data collection-oriented capabilities of this comprehensive and widely accessible technology.
Ultimately, through Aziz, we hope to provide fair and equitable technological solutions that promote wellbeing rather than compound ethical consequences. Aziz has a powerful potential to embolden underrepresented populations and provide critical technologies that will be widely accessible during emergencies. Achieving this vision will require careful planning, clever design, and strategic marketing at each step of the way to ensure that this platform can reach its full potential for impact.
# Bibliography
[1] <https://www.technology.org/2018/04/23/portable-device-to-aid-rescue-workers-in-searching-for-humans-trapped-under-rubble/>
[2] <https://spinoff.nasa.gov/FINDER-Finds-Its-Way-into-Rescuers-Toolkits>
[3] <https://www.jpl.nasa.gov/videos/finder-radar-for-locating-disaster-victims>
[4] <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1455483/>
[5] <https://www.mdpi.com/2218-6581/1/1/3>
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and ChartJS. The backend was built on Node (with Express), as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
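On the Python side, the per-utterance analysis can be sketched roughly as follows: Google Speech-to-Text provides the diarized transcript, and Empath scores each utterance against emotion categories. The category names below are assumptions for illustration; the production pipeline also folds in scikit-learn models and GloVe vectors.

```python
from empath import Empath   # pip install empath

lexicon = Empath()

def score_utterance(text):
    """Return normalized Empath category scores for one diarized utterance."""
    return lexicon.analyze(
        text,
        categories=["positive_emotion", "negative_emotion"],
        normalize=True,
    )

print(score_utterance("I think the launch went really well, great job everyone."))
```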
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned (perhaps too much) about how computers store numbers (:p), and did a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
## Inspiration
We wanted to find an application for the Watson ML APIs that was both relatively novel and relatively adaptable. We initially tried to use a series of APIs to assess the relations between keywords, entities, and sentiments to determine bias in news media, but quickly determined that this was infeasible due to the length and diversity of news articles, as well as the lack of a way to sort articles by 'quality.' However, we did determine that it would be much more feasible and applicable to assess social media posts, which already have easily accessible metrics for perceived quality (upvotes/likes/retweets) and are short strings, allowing us both to produce a working model of sentiment analysis and to make useful sense of the data we gathered.
We ended up choosing to select posts from Reddit and analyze what keywords and entities they encompassed, incorporating upvote data, top comments, and body text. In doing so, we created a system that produces data on trending topics on a selective basis, picking from one or a combination of subreddits. This allows for granular topic control and provides sets of similar data points (posts), and it also has scope for application to other text-heavy social media websites.
## What it does
Baker Street Analytica is a prototype framework for in-depth analysis of subreddit trends and opinions. We created a GUI for selecting data sets [subreddit(s), number of posts collected, number of comments assessed] and produced a user-friendly graphical output that can be easily interpreted to understand the collective consciousness of a subforum. This framework, while currently plugged into the PRAW (Python Reddit API Wrapper) tool for Reddit, can easily be reworked to take input from Twitter and/or other forum-style websites.
## How we built it
We started by tinkering with Watson's Natural Language Understanding API to figure out how it interprets text and divides it into keywords and entities. We then decided on a website to gather our data from, and ended up picking Reddit due to its numerous and specific sub-forums, which closely group similar posts, and its Python API, which let us work in one powerful language and keep the code modular and easy to understand. We first output the data we gathered as text, separating keywords/entities and how positively or negatively they were perceived. While this gave adequate results, which we kept in as a debugging tool, we believed there was a better way to represent them. We therefore developed a graphical tool to plot the intensity of the negative and positive sentiments of each keyword and entity. Finally, we improved the user-friendliness of our script and produced a 'webpage' that allows for easy input of the subreddits, the number of posts to be assessed, the number of comments, and the number of keywords/entities to be chosen.
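A trimmed-down sketch of that collection-and-analysis loop is shown below, using PRAW and the Watson NLU Python SDK. Credentials, the version string, and the keyword limit are placeholders.

```python
import praw
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="baker-street")
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01",
                                     authenticator=IAMAuthenticator("..."))

for post in reddit.subreddit("worldnews").hot(limit=10):
    text = f"{post.title}. {post.selftext}"
    result = nlu.analyze(
        text=text,
        language="en",
        features=Features(keywords=KeywordsOptions(sentiment=True, limit=5)),
    ).get_result()
    for kw in result["keywords"]:
        # Weight each keyword's sentiment by the post's upvote score.
        print(post.score, kw["text"], kw["sentiment"]["score"])
```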
## Challenges we ran into
After we completed the initial task of implementing a barebones NLU keywords algorithm, we decided to attempt graphical representations of the data. We had some trouble finding a good graph type to represent both positive and negative feedback without convoluting the data. We eventually settled on a bar graph that extends into both the negative and the positive end. Then, we struggled to embed the graph into the front-end interface we were developing. We started with a Dash implementation, which ran a new server and opened a new page for each new graph. We eventually had to switch to a Plot.ly implementation, which involved rewriting our entire graph implementation to embed the graph on the same page as our form.
## What's next for Baker Street Analytica
Sometimes it becomes quite hard to interpret the data outputs, particularly when they return non-sequitur keywords and entities. Certainly, improvements to the accuracy and contextualization of results would vastly increase the utility of this application.
|
## Initial Idea
Our initial idea was to gather headlines from different news sources and display the keywords of each one. That idea then morphed into gathering a series of headlines from the WorldNews subreddit. Although this seemingly lacks features, we were able to extend the idea to include other public subreddits. We were eventually able to assemble the ten most upvoted recent posts of any given subreddit.
## Methodology
With the gathered data, we then utilized Google Cloud's Natural Language API to determine the sentimental score and magnitude of each given article. The score indicates the "emotion" level of the entire headline, positive values being indicative of more positive emotions while negative values are indicative of negative emotions. This score is then displayed with a short explanation.
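The score/magnitude pair comes straight from the Natural Language API; a minimal Python call looks roughly like this (the headline is an example):

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def headline_sentiment(headline):
    doc = language_v1.Document(content=headline,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    s = client.analyze_sentiment(request={"document": doc}).document_sentiment
    return s.score, s.magnitude   # score in [-1, 1], magnitude >= 0

print(headline_sentiment("Volunteers rescue stranded hikers after storm"))
```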
## Significance
This algorithm, despite its simplicity, reveals a lot of information: from displaying the top headline of any given genre of the day, to revealing the nature of that genre and, by extension, of everyone who partakes in the thread. By identifying and then assigning a "sentimental" and "emotional" score to each individual group, we are able to demonstrate how people act differently in different subreddits.
Political subreddits and some news subreddits are predictably more negative in terms of sentiment score, while subreddits that encourage critical thinking tend to be more positive on the same scale.
For example, subreddits such as WritingPrompts score highly compared to others. Furthermore, while running through further iterations with different subreddits, we also found that some threads fluctuate quite a bit: because new posts are constantly being written, our small sample size of 10 top posts may not reflect the newer posts in larger subreddits as quickly.
Perhaps such emotional tests offer us an alternative perspective on life that was previously ignored.
## How we built it
We built the front end and the back end separately. We first had to build a web scraper capable of identifying the headline, the date posted, and the number of upvotes (in addition to other data points) each Reddit post has in each subreddit. Then we built the main framework with Flask so that the front end could actually communicate with the back end and make modifications where needed. We finished off by refining our front end with HTML5 and CSS3, essentially turning it into what we have now.
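A stripped-down version of that Flask glue, with the scraper reduced to a stub, might look like this (route names and the stub data are assumptions):

```python
from flask import Flask, jsonify, render_template

app = Flask(__name__)

def scrape_subreddit(name, limit=10):
    """Stub for the real scraper: returns headline, date, and upvotes per post."""
    return [{"headline": "Example post", "date": "2019-01-01", "upvotes": 1234}][:limit]

@app.route("/")
def index():
    return render_template("index.html")          # static front end lives in /templates

@app.route("/api/<subreddit>")
def top_posts(subreddit):
    return jsonify(scrape_subreddit(subreddit))   # front end fetches this as JSON

if __name__ == "__main__":
    app.run()
```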
## Challenges we ran into
Since we were all beginners, we struggled a lot with how to take what we learned in class and turn it into something practical. One particular struggle was converting HTML and CSS into static Flask files. It took a while to figure out that everything from the front end needs to be converted into static files, even the JavaScript functions, for the Flask framework to work properly. Another thing we struggled with was turning what we had locally into an actual web app; that is, taking our locally hosted application and turning it into a Google Cloud-compatible app that can run on a server.
## Accomplishments that we are proud of
This project is something that I am particularly proud of because it's literally the first idea that came into my head, and within 36 hours we were able to build something that closely resembles what I had initially imagined and, in many cases, exceeds my expectations.
## What we learned
We've all furthered our knowledge of HTML and Python and realized the importance of teamwork and how crucial it was to the success and efficiency of the project. We also learned to use Google Cloud to help debug our code.
|
## Inspiration
When reading news articles, we're aware that the writer has bias that affects the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?"
## What it does
*News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well!
## How we built it
First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using KL-Sum summarization, aggregating what is reported from all outlets to form a summary of these sources. The summary is about five sentences long, easy for the user to digest, with quick access to the sources that were used to create it!
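The write-up does not name a specific library, but KL-Sum is available off the shelf; with `sumy` (our choice for illustration), the aggregation step might look roughly like this:

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.kl import KLSummarizer

def summarize_articles(articles, sentence_count=5):
    """Concatenate the collected articles and return a KL-Sum summary."""
    parser = PlaintextParser.from_string(" ".join(articles), Tokenizer("english"))
    summary = KLSummarizer()(parser.document, sentence_count)
    return " ".join(str(sentence) for sentence in summary)

print(summarize_articles(["Outlet A reports that ...", "Outlet B reports that ..."]))
```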
## Challenges we ran into
We were really nervous about taking on a NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we haven't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application that’s content varied so drastically in size and available content.
## Accomplishments that we're proud of
We're so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that, we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact.
## What we learned
We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience!
## What's next for News Report
We’d love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that we also want to be able to analyze and remove fake news sources from our spider's crawl.
|
losing
|
## Inspiration
We were inspired by the lack of project management tools on the market.
## What it does
It acts as a project manager inside Discord. It checks in on employees, verifies that they are working on their tasks, and talks using natural language processing.
## How we built it
Two separate components
* Natural language processing is done in Python using PyTorch, trained on our own dataset.
* The Discord bot is done using the Discord API. It talks to users both in private messages and in channels.
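A minimal sketch of the bot side using `discord.py`; the check-in message and the call into our classifier are placeholders for the real PyTorch model:

```python
import discord

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

def classify(text: str) -> str:
    return "on_task"  # stand-in for the PyTorch model's intent prediction

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    # Reply in private messages and channels alike, nudging the employee about their task
    if classify(message.content) != "on_task":
        await message.channel.send("How is your assigned task coming along?")

client.run("YOUR_BOT_TOKEN")
```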
## Challenges we ran into
We first tried to use DialogFlow for natural language processing, but we were unsuccessful in customizing the chat behaviour. We, later on, decided to use Python/Pytorch and implement our own model from scratch.
## Accomplishments that we're proud of
The Discord bot is ready to hop in any server of yours!
## What's next for OnTask
OnTask can be further refined by integrating the GitHub API and acting on more complex, real projects.
|
## Inspiration
Inspired by American Sniper
## What it does
Make a guess at where the evil sniper is based on callout hints and snipe the sniper. You have 3 chances.
## How I built it
Using Callout animation and some geometry
## Challenges I ran into
1) Lack of knowledge of 3D algorithms to turn the landscape into 3D
2) Took a long time to figure out the game concept
## Accomplishments that I'm proud of
Smart use of callouts and a high-resolution image to make the game seem very real and attractive
## What I learned
The correct use of callouts, drag and drop, and loading multiple files in the JavaFX MediaPlayer
## What's next for Snipe the Sniper
Show it to Tangelo and discuss with the expert whether we can make a 3D landscape that can be rotated just like in Google Maps
|
## Inspiration
My first programming experience came from Scratch and I learned a lot of Python by making Discord bots. block2discord allows others to learn programming principles while making something useful and tangible. Furthermore, since block2discord translates the block code to Python, it provides a gateway to learn Python.
## What it does
block2discord creates an interface to drag and drop blocks to create a program. The code is then translated to Python which you can modify and run. The more commonly used function, method, and attribute blocks are available.
## How we built it
The editor was built based on Microsoft MakeCode PXT architecture and augmented with Discord blocks. The translator was built in Python using regex, Python code formatters, and other string manipulation techniques.
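As an illustration of the translator's approach, here is a tiny regex-based rewrite pass; the block-language syntax and rules below are hypothetical, not the actual MakeCode output:

```python
import re

# Hypothetical rewrite rules: block-language calls on the left, discord.py code on the right
RULES = [
    (re.compile(r"onMessageReceived\s*\(\s*(\w+)\s*\)"), r"@client.event\nasync def on_message(\1):"),
    (re.compile(r"reply\s*\(\s*\"(.*)\"\s*\)"), r"await message.channel.send(\"\1\")"),
]

def translate(block_code: str) -> str:
    """Apply each regex rule line by line to turn block-generated code into discord.py code."""
    out = []
    for line in block_code.splitlines():
        for pattern, replacement in RULES:
            line = pattern.sub(replacement, line)
        out.append(line)
    return "\n".join(out)
```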
## Challenges we ran into
Usually when building a block-based language, you would build an API around the blocks. However, since block2discord translates to `discord.py`, an intermediate program is needed to bridge the gap between block-based Python and `discord.py`.
## Accomplishments that we're proud of
This is the first time I tried to solo a hackathon. I'm proud of the amount of progress and usable code I made.
## What we learned
I've learned a lot about how block based languages are constructed and how they work. I also learned some TypeScript syntax needed to make the blocks. Working with friends in a team is better :D.
## What's next for block2discord
More blocks! The `discord.py` library is massive and it would be impossible to convert the entire library in 36 hours. Furthermore, the translator engine should be modified to understand code syntax so it can translate more complicated code.
|
losing
|
## Why was the Vortex created?
**The Vortex** was created to catch **every** angle of an experience, so you don't miss out on anything.
## What does the Vortex do?
**Vortex is a smart social network** that allows people to share their best moments captured in an event by uploading videos of their perspectives of the occasion. The web platform uses machine learning to find the best viral moments in a collection of videos and then automatically generates a highlight video for an **entire community or event**.
## How did the Vortex team create the Vortex?
The Vortex and its smart social networking features were created with continuous care and delicacy. Its front-end was meticulously crafted using HTML, CSS, and Javascript. All the icons and logos were also created **from scratch** using Adobe Photoshop, Illustrator, and Premiere.
The Vortex's back-end was mesmerizingly built using a Flask server on top of Python and Google Cloud App Engine. All of the user data, including event details, video details, and posts, is securely stored on Google Cloud Datastore. The video and gif files are stored in a Google Storage Bucket with triggers to Cloud Functions. Every time a user uploads a video to The Vortex, a Google Cloud Function is triggered and The Vortex begins analyzing the video with Google Cloud Video Intelligence to find relevant labels and their respective timestamps.
After the video is analyzed, we save a gif thumbnail of the video and The Vortex saves all the data in the Algolia API for easy search and indexing. Once The Vortex gathers enough clips, it uses the machine learning data in Algolia to automatically create a highlight video in the cloud using the VEED API.
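A rough sketch of that trigger, assuming a first-generation background Cloud Function and the newer Video Intelligence Python client; the field handling and the downstream Algolia push are simplified:

```python
from google.cloud import videointelligence

def on_video_upload(event, context):
    """Triggered by a new object in the Storage bucket; labels the clip with Video Intelligence."""
    gcs_uri = f"gs://{event['bucket']}/{event['name']}"
    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={"input_uri": gcs_uri,
                 "features": [videointelligence.Feature.LABEL_DETECTION]}
    )
    result = operation.result(timeout=300)
    for annotation in result.annotation_results[0].segment_label_annotations:
        label = annotation.entity.description
        for segment in annotation.segments:
            start = segment.segment.start_time_offset
            print(label, start)  # in the real pipeline this is pushed to Algolia with its timestamp
```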
## What were the challenges that the Vortex team came across?
Continuously analyzing the shared videos from people all over the world in the Vortex is no easy task. At first, we thought we wouldn't be able to accomplish that in a timely manner because of the amount of **video analysis and artificial intelligence research and applied machine learning study** needed; however, we proved ourselves wrong and the **Vortex is now alive** in a cloud somewhere in the world for anyone to watch and share!
The hardest part of the project was utilizing the Cloud Functions with the Storage Triggers combined with Google's Video Intelligence API. It was our first time dealing with Cloud Functions, so we had to deploy our code multiple times before it finally worked. In addition to that, The Vortex generated so much data that the Algolia API was giving us errors for exceeding its limits. As if this was not enough, the resources for cloud video editing are very scarce, so we had to spend a lot of time trying to come up with an automated solution to bring the highest quality to The Vortex.
## What accomplishments is the Vortex team proud of?
Since the beginning of the Vortex and its social networking ideas, we aimed at providing a user interface and experience that would make people feel as if **they are contributing to something bigger than themselves**, and the Vortex team is proud to believe that we accomplished that task. Also, as we said above, researching video analysis and applying artificial intelligence to study shared videos in the Vortex in such a short amount of time puts countless smiles on our faces :)
## What we learned by creating The Vortex?
The Vortex team reminded themselves that if you point your mind in a direction, think obsessively about it, and, most importantly, act upon your ambitions... boy, **you can accomplish anything**. *The Vortex is living proof of that.*
## What's next for The Vortex?
The Vortex has an innate capability to spread throughout the world. We are aiming to improve the **Vortex's sharing capabilities**, where you can share specific perspectives from others in the Vortex with others outside of the Vortex. The Vortex team is also looking forward to improving our overall machine learning algorithms and allowing users to create their **own local Vortexes for their own shareable memories**.
|
## Inspiration
With the rise of IoT devices and the backbone support of the emerging 5G technology, BVLOS drone flights are becoming more readily available. According to CBInsights, Gartner, IBISworld, this US$3.34B market has the potential for growth and innovation.
## What it does
**Reconnaissance drone software that utilizes custom object recognition and machine learning to track wanted targets.** It performs close to real-time speed with nearly 100% accuracy and allows a single operator to operate many drones at once. Bundled with a light sleek-designed web interface, it is highly inexpensive to maintain and easy to operate.
**There is a Snapdragon Dragonboard that runs physically on the drones capturing real-time data and processing the video feed to identify targets. Identified targets are tagged and sent to an operator that is operating several drones at a time. This information can then be relayed to the appropriate parties.**
## How I built it
There is a Snapdragon Dragonboard that runs physically on the drones capturing real-time data and processing the video feed to identify targets. This runs on a Python script that then sends the information to a backend server built using NodeJS (coincidentally also running on the Dragonboard for the demo) to do processing and to use Microsoft Azure to identify the potential targets. Operators use a frontend to access this information.
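The hand-off from the on-board Python script to the NodeJS backend is essentially an HTTP POST; a simplified sketch (the endpoint path and payload fields are assumptions):

```python
import requests

def report_detection(label, confidence, lat, lon, server="http://localhost:3000"):
    """Send one identified target from the Dragonboard to the operator backend."""
    payload = {"label": label, "confidence": confidence, "lat": lat, "lon": lon}
    response = requests.post(f"{server}/api/detections", json=payload, timeout=5)
    response.raise_for_status()

report_detection("person", 0.97, 43.4723, -80.5449)
```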
## Challenges I ran into
Determining a way to reliably demonstrate this project became a challenge considering that neither the drone nor its GPS is actually moving during the demonstration. The solution was to feed the program a video with simulated moving GPS coordinates so that the system believes it is in the air.
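Conceptually, the simulation just pairs each video frame with an interpolated coordinate; a sketch of that idea (the start and end points here are made up):

```python
import cv2

def simulated_flight(video_path, start=(43.47, -80.54), end=(43.48, -80.53)):
    """Yield (frame, lat, lon) pairs so the pipeline believes the drone is moving."""
    cap = cv2.VideoCapture(video_path)
    total = max(int(cap.get(cv2.CAP_PROP_FRAME_COUNT)), 1)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        t = index / total  # fraction of the flight completed
        yield frame, start[0] + t * (end[0] - start[0]), start[1] + t * (end[1] - start[1])
        index += 1
    cap.release()
```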
Training the model also required us to devote multiple engineers, who spent most of their time over the hackathon training it.
## Accomplishments that I'm proud of
The code flow is adaptable to virtually an infinite number of scenarios with **no hardcoding for the demo** except feeding it a video and simulated GPS coordinates rather than the camera feed and actual GPS coordinates.
## What I learned
We learned a great amount about computer vision and building/training custom classification models. We used Node.js, which is a highly versatile environment and can be configured to relay information very efficiently. We also learned a few JavaScript tricks and some pitfalls to avoid.
## What's next for Recognaissance
Improving the classification model using more expansive datasets. Enhancing the software to be able to distinguish several objects at once allowing for more versatility.
|
## Inspiration
With an ever-increasing rate of crime and internet deception on the rise, cyber fraud has become one of the premier methods of theft across the world. From frivolous scams like phishing attempts to the occasional Nigerian prince who wants to give you his fortune, it's all too easy for the common person to fall into the hands of an online predator. With this project, I attempted to amend this situation, beginning by focusing on document verification and credentialization.
## What does it do?
SignRecord is an advanced platform hosted on Ethereum and the Inter-Planetary File System (an advanced peer-to-peer hypermedia protocol, built with the intention of making the web faster, safer, and more open). Connected with secure DocuSign REST APIs and the power of smart contracts to store data, SignRecord acts as an open-sourced, widespread ledger of public information and the average user's information. By allowing individuals to host their data, media, and credentials on the ledger, they are given the safety and security of having a proven blockchain verify their identity, protecting them not only from identity fraud but also from potential wrongdoers.
## How I built it
SignRecord is a responsive web app backed by the robust power of both NodeJS and Hyperledger. With authentication handled by MongoDB, routing by Express, a front end built with a combination of React and Pug, and asynchronous requests through Promises, it offers a fool-proof solution.
Not only that, but I've also built and incorporated my own external API, so that other fellow developers can easily integrate my platform directly into their applications.
## Challenges I ran into
The real question should be: what challenge didn't I run into? From simple mistakes like missing a semicolon to significant headaches figuring out deprecated dependencies and packages, this development was nothing short of a roller coaster.
## Accomplishments that I'm proud of
Of all of the things that I'm proud of, my usage of the Ethereum blockchain, the DocuSign APIs, and the collective UI/UX of my application stand out as the most significant achievements I made in this short 36-hour period. I'm especially proud that I was able to accomplish what I could, alone.
## What I learned
Like any good project, I learnt more than I could have imagined. From learning how to use advanced MetaMask libraries to building my very own API, this journey was nothing short of a race with hurdles at every mark.
## What's next for SignRecord
With the support of fantastic mentors, a great hacking community, and the fantastic sponsors, I hope to be able to continue expanding my platform in the near future.
|
partial
|
## Inspiration
We college students can all relate to having a teacher who was not engaging enough during lectures or mumbling to the point where we cannot hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor’s lesson audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor’s body language throughout the lesson using motion detection and analysis software. We then store everything in a database and show the data on a dashboard, which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with pre-built components, I looked into how they worked and edited them to fit our purpose instead of working from scratch, saving time on styling to a theme. I needed to add a couple of new functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end, we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we’re dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens) and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB’s Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing an easy integration with the database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus including fixtures, sensors, and materials. Our team had to consider how this device would be tracking the teacher’s movements and hearing the volume while not disturbing the flow of class. Currently the main sensors being utilized in this product are a microphone (to detect volume for recording and data), nfc sensor (for card tapping), front camera, and tilt sensor (for vertical tilting and tracking professor). The device also has a magnetic connector on the bottom to allow itself to change from stationary position to mobility position. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is to scan in using their school ID, then either check his lecture data or start the lecture. Overall, the professor is able to see if the device is tracking his movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in a way that would feel simple for the user to understand through Solidworks, and using Figma for the first time. I had to do a lot of research through Amazon videos to see how they created their Amazon Echo model, and look back at my UI/UX notes from the Google Coursera certification course that I’m taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I was confused as to how to implement a certain feature I wanted to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. Some problems couldn’t be solved with this method, as the logic was specific to our software. Fortunately, these problems just needed time and a lot of debugging, with some help from peers and existing resources, and since React is JavaScript based, I was able to draw on past experience with JS and Django despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in a dependency hell, and had to rethink the architecture of the whole project to not over engineer it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how to utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but his strengths in that area let him create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman having his first hackathon experience and was able to build a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, add motion tracking data feedback to the professor to get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal, gather data on their performance, and help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms.
|
## Problem
In these times of isolation, many of us developers are stuck inside which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult.
## Solution
To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm together.
## About
Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface.
We made it one page to give access to all the tools on one screen and make transitioning between them easier.
We identify this page as a study room where users can collaborate and join with a simple URL.
Everything is synced between users in real time.
## Features
Our platform allows multiple users to enter one room and access tools like watching youtube tutorials, brainstorming on a drawable whiteboard, and code in our inbuilt browser IDE all in real-time. This platform makes collaboration between users seamless and also pushes them to become better developers.
## Technologies you used for both the front and back end
We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads.
## Challenges we ran into
A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussion. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly to get smooth real-time interactions.
## What's next for Study Buddy
While we were working on this project, we came across several ideas that this could be a part of.
Our next step is to have each page categorized as an individual room that users can visit.
Adding more relevant tools and widgets, and expanding into other fields of work to increase our user demographic.
Including interface customization options to allow users to personalize their rooms.
Try it live here: <http://35.203.169.42/>
Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down>
Thanks for checking us out!
|
## Inspiration
Our inspiration for this project is that we know many people who have trouble with their mental health and being productive. We wanted to help them receive useful answers to any questions they have about how to better their well-being, and enable them to positively impact their lives.
## What it is
MindHack is a React web application served by a Flask backend and deployed on a CentOS server with Docker and NGINX.
## Purpose
Its purpose is to help positively impact people dealing with mental health and productivity issues. MindHack uses OpenAI API and SerpAPI to answer user inputted prompts, and give out suggestions based off that prompt, as well as answers to the suggestions.
## How we built it
MindHack is built using the OpenAI API and SerpAPI to build responses based off the user's prompt, Flask for the backend framework, ReactJS for the frontend framework, and CentOS for the server, containerized with Docker and served with NGINX: <https://mindhack.samthibault.live>
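A stripped-down sketch of a Flask endpoint that forwards a prompt to OpenAI; the route, model name, and system prompt here are assumptions, the SerpAPI lookup is omitted, and the pre-1.0 `openai` client interface is assumed:

```python
from flask import Flask, request, jsonify
import openai

app = Flask(__name__)

@app.route("/api/suggest", methods=["POST"])
def suggest():
    prompt = request.get_json()["prompt"]
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Offer short, practical well-being suggestions."},
            {"role": "user", "content": prompt},
        ],
    )
    return jsonify({"answer": completion.choices[0].message["content"]})
```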
## Challenges we ran into
* Formatting the data properly
## Accomplishments that we're proud of
* Deploying the application to a live and secure server
* Having a nice and user-friendly UI design
## What we learned
* How to use parallel processing in Python
* How to use OpenAI API and SerpAPI
## What's next for MindHack
* Vertically scaling the deployment environment
|
winning
|
## What Does "Catiator" Mean?
**Cat·i·a·tor** (*noun*): Cat + Gladiator! In other words, a cat wearing a gladiator helmet 🐱
## What It Does
*Catiator* is an educational VR game that lets players battle gladiator cats by learning and practicing American Sign Language. Using finger tracking, players gesture corresponding letters on the kittens to fight them. In order to survive waves of fierce but cuddly warriors, players need to leverage quick memory recall. If too many catiators reach the player, it's game over (and way too hard to focus with so many chonky cats around)!
## Inspiration
There are approximately 36 million hard of hearing and deaf individuals living in the United States, and many of them use American Sign Language (ASL). By learning ASL, you'd be able to communicate with 17% more of the US population. For each person who is hard of hearing or deaf, there are many loved ones who hope to have the means to communicate effectively with them.
### *"Signs are to eyes as words are to ears."*
As avid typing game enthusiasts who have greatly improved typing speeds ([TypeRacer](https://play.typeracer.com/), [Typing of the Dead](https://store.steampowered.com/agecheck/app/246580/)), we wondered if we could create a similar game to improve the level of understanding of common ASL terms by the general populace. Through our Roman Vaporwave cat-gladiator-themed game, we hope to instill a low barrier and fun alternative to learning American Sign Language.
## Features
**1. Multi-mode gameplay.**
Learn the ASL alphabet in bite sized Duolingo-style lessons before moving on to "play mode" to play the game! Our in-app training allows you to reinforce your learning, and practice your newly-learned skills.
**2. Customized and more intuitive learning.**
Using the debug mode, users can define their own signs in Catiator to practice and quiz on. Like Quizlet flash cards, creating your own gestures allows you to customize your learning within the game. In addition, being able to see a 3D model of the sign you're trying to learn gives you a much better picture of how to replicate it compared to a 2D image of the sign.
## How We Built It
* **VR**: Oculus Quest, Unity3D, C#
* **3D Modeling & Animation**: Autodesk Maya, Adobe Photoshop, Unity3D
* **UX & UI**: Figma, Unity2D, Unity3D
* **Graphic Design**: Adobe Photoshop, Procreate
## Challenges We Ran Into
**1. Limitations in gesture recognition.** Similar gestures that involve crossing fingers (ASL letters M vs. N) were limited by Oculus' finger tracking system in differential recognition. Accuracy in finger tracking will continue to improve, and we're excited to see the capabilities that could bring to our game.
**2. Differences in hardware.** Three out of four of our team members either own a PC with a graphics card or an Oculus headset. Since both are necessary to debug live in Unity, the differences in hardware made it difficult for us to initially get set up by downloading the necessary packages and get our software versions in sync.
**3. Lack of face tracking.** ASL requires signers to make facial expressions while signing which we unfortunately cannot track with current hardware. The Tobii headset, as well as Valve's next VR headset both plan to include eye tracking so with the increased focus on facial tracking in future VR headsets we would better be able to judge signs from users.
## Accomplishments We're Proud Of
We're very proud of successfully integrating multiple artistic visions into one project. From Ryan's idea of including chonky cats to Mitchell's idea of a learning game to Nancy's vaporwave aesthetics to Jieying's concept art, we're so proud to see our game come together both aesthetically and conceptually. Also super proud of all the ASL we learned as a team in order to survive in *Catiator*, and for being a proud member of OhYay's table1.
## What We Learned
Each member of the team utilized challenging technology, and as a result learned a lot about Unity during the last 36 hours! We learned how to design, train and test a hand recognition system in Unity and build 3D models and UI elements in VR.
This project really helped us have a better understanding of many of the capabilities within Oculus, and in utilizing hand tracking to interpret gestures to use in an educational setting. We learned so much through this project and from each other, and had a really great time working as a team!
## Next Steps
* Create more lessons for users
* Fix keyboard issues so users can define gestures without debug/using the editor
* Multi-hand gesture support
* Additional mini games for users to practice ASL
## Install Instructions
To download, use password "Treehacks" on <https://trisol.itch.io/catiators>, because this is an Oculus Quest application you must sideload the APK using Sidequest or the Oculus Developer App.
## Project Credits/Citations
* Thinker Statue model: [Source](https://poly.google.com/u/1/view/fEyCnpGMZrt)
* ASL Facts: [ASL Benefits of Communication](https://smackhappy.com/2020/04/asl-benefits-communication/)
* Music: [Cassette Tape by Blue Moon](https://youtu.be/9lO_31BP7xY) |
[RESPECOGNIZE by Diamond Ortiz](https://www.youtube.com/watch?v=3lnEIXrmxNw) |
[Spirit of Fire by Jesse Gallagher](https://www.youtube.com/watch?v=rDtZwdYmZpo) |
* SFX: [Jingle Lose](https://freesound.org/people/LittleRobotSoundFactory/sounds/270334/) |
[Tada2 by jobro](https://freesound.org/people/jobro/sounds/60444/) |
[Correct by Eponn](https://freesound.org/people/Eponn/sounds/421002/) |
[Cat Screaming by InspectorJ](https://freesound.org/people/InspectorJ/sounds/415209/) |
[Cat2 by Noise Collector](https://freesound.org/people/NoiseCollector/sounds/4914/)
|
## **1st Place!**
## Inspiration
Sign language is a universal language which allows many individuals to exercise their intellect through common communication. Many people around the world suffer from hearing loss or mutism and rely on sign language to communicate. Even those who do not experience these conditions may still require the use of sign language in certain circumstances. We plan to expand our company to be known worldwide to fill the lack of a virtual sign language learning tool that is accessible to everyone, everywhere, for free.
## What it does
Here at SignSpeak, we create an encouraging learning environment that provides computer vision sign language tests to track progression and to perfect sign language skills. The UI is built around simplicity and usability. We have provided a teaching system that works by engaging the user in lessons, then partaking in a progression test. The lessons include the material that will be tested in the lesson quiz. Once the user has completed the lesson, they are redirected to the quiz, which can result in either failure or success. Successfully completing the quiz congratulates the user and directs them to the next lesson; failure results in the user having to retake the lesson. The user retakes the lesson until they pass the quiz and can proceed to the following lesson.
## How we built it
We built SignSpeak on React with Next.js. For our sign recognition, we used TensorFlow with a MediaPipe model to detect points on the hand, which were then compared with preassigned gestures.
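The gesture check itself boils down to comparing detected hand landmarks against stored templates. The app does this in the browser, but the same idea expressed in Python with MediaPipe looks roughly like this (the template format is an assumption):

```python
import cv2
import numpy as np
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def detect_sign(frame, templates):
    """Return the preassigned gesture whose stored landmarks best match the current hand."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    points = np.array([[p.x, p.y, p.z] for p in results.multi_hand_landmarks[0].landmark])
    # templates: {"A": 21x3 landmark array, "B": 21x3 array, ...} captured ahead of time
    return min(templates, key=lambda name: np.linalg.norm(points - templates[name]))
```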
## Challenges we ran into
We ran into multiple roadblocks mainly regarding our misunderstandings of Next.js.
## Accomplishments that we're proud of
We are proud that we managed to come up with so many ideas in such little time.
## What we learned
Throughout the event, we participated in many workshops and created many connections. We engaged in many conversations that involved certain bugs and issues that others were having and learned from their experience using javascript and react. Additionally, throughout the workshops, we learned about the blockchain and entrepreneurship connections to coding for the overall benefit of the hackathon.
## What's next for SignSpeak
SignSpeak is seeking to continue its services for teaching people sign language. For the future, we plan to implement a suggestion box for our users to communicate with us about problems with our program so that we can work quickly to fix them. Additionally, we will collaborate with and improve companies that cater to audible phone navigation for blind people.
|
# The Metaverse is the next big computing platform
>
> ### We believe it's essential to consider the needs and requirements of everyone on this journey to the future
>
>
>
## Building a more inclusive Metaverse
For HackHarvard, we developed a VR app backed by a custom ASL neural network and a speech-to-text engine powered by AssemblyAI to help people with disabilities access more online.
## Description
Our app, running locally on a Meta Quest 2, uses sockets to communicate scanned 3D hand positions over a TCP connection from the Quest to the Server running the ASL Classification Model and Speech-To-Text engine, then returning the outputs back to the Quest to display the visual VR effects.
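On the server side, the socket loop is conceptually simple; a minimal sketch assuming newline-delimited JSON packets (the framing, port, and field names are assumptions):

```python
import json
import socket

def serve(classify_sign, host="0.0.0.0", port=9000):
    """Receive hand points from the Quest, return the predicted ASL sign."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((host, port))
        server.listen(1)
        conn, _ = server.accept()
        with conn, conn.makefile("rw") as stream:
            for line in stream:                     # one JSON payload per line
                hand_points = json.loads(line)["hand_points"]
                stream.write(json.dumps({"sign": classify_sign(hand_points)}) + "\n")
                stream.flush()
```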
## Live ASL Translation

We built a custom neural network to enable deaf mute people to communicate effectively in the Metaverse. This model captures hundreds of hand data points in real time to analyze and determine live which ASL signs people make, then converts them to text and returns it for others to see.
## Live Speech-To-Text

Capturing sound from the Metaverse and local microphones, we pass the audio to AssemblyAI's Speech-To-Text API and return the translation, enabling easy access for those hard of hearing.
## Live Multiplayer Conversation

To help people interested in learning sign language, we've integrated OpenAI's GPT-3 into our project. GPT-3 will read your ASL-to-text translated conversations and reply accordingly, a perfect practice partner.
## Challenges
* Capturing and modeling hand data in 3D point spaces
* High bandwidth socket connections
* Neural Network learning and accuracy
* Rendering virtual environments with high FPS and fidelity
## Next Steps
We're excited about the progress we've made in two short days and are looking forward to expanding the project by integrating it with apps like Horizon Worlds, VR Chat, and other popular platforms.
In addition, a future improvement is converting our model from ASL-To-Text to ASL-To-Speech. Doing so would greatly improve the inclusiveness of deaf mute people in everyday Metaverse interactions.
|
winning
|
## Inspiration
I was inspired to create this application after hearing about someone's struggle with losing their autonomy after losing their vision. I decided to search for a way to use technology to help them.
## What it does
Autonomi is a web and mobile application that uses machine learning to scan products and help the user purchase products without the use of a physical checkout or external assistance. The user simply scans a product to determine what it is, can choose to add that product to their cart, and when leaving the store will check out through the app and receive an invoice sent from checkbook.io.
## How we built it
Autonomi is primarily built using React Native and Expo in JavaScript. I made the choice to use React Native because I thought it would give me the ability to iterate quickly and build for multiple platforms at the same time, which is especially useful at a hackathon. In addition, the machine learning image labeling is done using the Google Cloud Vision API. The images are first uploaded to a Firebase storage bucket before the Cloud Vision API is called. In order to complete the checkout experience, I used the checkbook.io API to programmatically create invoice requests that are sent to the customer's email address.
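The app itself calls Cloud Vision from React Native, but the labeling step is equivalent to this Python call, shown only to illustrate what the API returns for a scanned product (the bucket path is hypothetical):

```python
from google.cloud import vision

def label_product(gcs_uri):
    """Label an image that has already been uploaded to the storage bucket."""
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=gcs_uri))
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

print(label_product("gs://autonomi-uploads/scan-001.jpg"))
```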
## Challenges we ran into
I had a lot of trouble with the initial setup of the Android emulator, as I primarily had experience in backend web development; however, after lots of debugging I eventually got to the root of the issue. In addition, I wasn't sure where to store the images before sending them for processing by the Cloud Vision API, but thanks to the quick onboarding time with Firebase I was able to deal with this issue effectively.
## Accomplishments that we're proud of
I'm proud that I was able to make a functioning cross-platform app in less than a weekend having had no prior experience in this field. I'm also happy that the Cloud Vision API seemed to be accurate on clear images which demonstrated that there was some viability to this project.
## What we learned
I learned a lot through the course of this project, this was my first time working with both Firebase and Cloud Vision so I had some fun onboarding with those 2 technologies. I also learned about how useful android emulation can be on Windows, and how quickly cross-platform development can be started.
## What's next for Autonomi
* Adding QR codes associated with different stores so that a customer can scan a QR code to get access to a store's prices and checkout.
* Image detection going beyond products, this could include people, documents, change, etc.
|
## Inspiration
The majority of cleaning products in the United States contain harmful chemicals. Although many products pass EPA regulations, it is well known that many products still contain chemicals that can cause rashes, asthma, allergic reactions, and even cancer. It is important that the public has easy access to information of the chemicals that may be harmful to them as well as the environment.
## What it does
Our app allows users to scan a product's ingredient label and retrieves information on which ingredients to avoid for the better of the environment as well as their own health.
## How we built it
We used Xcode and Swift to design the iOS app. We then used Vision for iOS to detect text based off a still image. We used a Python scraper to collect data from ewg.org, providing the product's ingredients as well as the side effects of certain harmful additives.
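A simplified sketch of the scraper side; the URL and CSS selector below are placeholders, and the real pages needed extra corner-case handling:

```python
import json
import requests
from bs4 import BeautifulSoup

def scrape_product(url):
    """Extract the ingredient names from one product page."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    ingredients = [item.get_text(strip=True) for item in soup.select(".ingredient-list li")]
    return {"url": url, "ingredients": ingredients}

with open("products.json", "w") as output:
    json.dump([scrape_product(url) for url in ["https://www.ewg.org/guides/cleaners/"]], output, indent=2)
```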
## Challenges we ran into
We had very limited experience with developing an iOS app for the idea we had, but we wanted to challenge ourselves. The challenges on the front end were incorporating the camera feature and the text detector into a single app, as well as navigating the changes between the newer Swift 11 and older versions. Our backend members had difficulties incorporating databases from Microsoft/Google, but ended up using JSON.
## Accomplishments that we're proud of
We are extremely proud of pushing ourselves to do something we haven't done before. Initially, we had some doubts about our project because of how difficult it was, but as a team we were able to help each other along the way. We're very proud of creating a single app that handles both the camera feature and Optical Character Recognition because, as we found out, it's very complicated and error prone. Additionally, for data scraping, even though the HTML code was not consistent we managed to successfully scrape the necessary data by taking all corner cases into consideration, with a 100% success rate across more than three thousand HTML files, and we are very proud of it.
## What we learned
Our teammates working on the front end learned how to use Xcode and Swift in under 24 hours. Our backend team members learned how to scrape data from a website for the first time as well. Together, we learned how to adjust our original expectations of the final product based on the time constraint.
## What's next for Ingredient Label Scanner
Currently our project is specific to cleaning products; however, in the future we would like to incorporate other products such as cosmetics, hair care, skin care, medicines, and food products. Additionally, we hope to present the list of ingredients in a more visual way so that users can clearly understand which ingredients are more dangerous than others.
|
## Inspiration
Getting motivated to volunteer can be challenging, especially when searching for opportunities. Many organizations require phone contact only and have limited hours. Even after signing up for an event, volunteers often feel disconnected from the people they serve due to various organizations' policies. We see an opportunity to create a solution that not only simplifies volunteering but also fosters a culture of open communication.
## What It Does
HelpJawn is a unified platform that connects volunteers, organizations, and those in need. Organizations can post events requiring volunteers, which also serves as a bulletin board for clients seeking services. Additionally, we aim to create a sense of community by empowering clients to share messages about past events, allowing volunteers to see their impact and encouraging them to return.
## How We Built It
We used Python, Django, and SQL for the backend, and TypeScript, React, Vite, and Bootstrap for the frontend.
## Challenges We Ran Into
We encountered issues related to CORS, which was an interesting learning experience. The biggest challenge was coordinating the necessary API endpoints with the frontend before building out the UI. As we are new to writing APIs, this was a valuable learning opportunity in best practices.
## Accomplishments We're Proud Of
Initially, we were uncertain about how the project would progress when connecting the backend and frontend. We worked in separate teams for a while, but when we came together to integrate everything, it went surprisingly smoothly.
## What We Learned
We learned about CORS, developing RESTful APIs, and how to manage scope creep.
## What's Next for HelpJawn
We had to set aside many ideas, but a great next step would be to implement a notification system for volunteers to see when clients post after an event. We also want to introduce metrics for users to track their impact. On the technical side, we plan to migrate to a database on AWS S3 and host a web server on EC2.
|
partial
|
## Inspiration
Our inspiration came from brainstorming about inventions that are not yet available to the disabled, and how we could help them. Friends mentioned how they know seniors who have trouble using technology and conducting everyday tasks. This brought about thoughts on how seniors are sometimes excluded from consideration when new inventions or technologies are created, so we wanted to create a project that could help seniors, specifically blind seniors, with simple designs that aid them in their everyday life. To assist those with vision impairment, we wanted to create something that can alert them through the use of other senses, specifically sound. If they are in danger, then the device would also alert those around them through the use of sound and text.
## What it does
The Navigation Essential Watch has 3 main functions. Firstly, it detects with an ultrasonic sensor if a disabled or blind person's path has come too close to a nearby obstacle, and alerts the user through a buzzer. Secondly, the N.E.W has a heart rate pulse detector which displays the heart rate of the user on a digital display. Thirdly, the N.E.W has a display screen which shows the user's emergency contact information, but only when they are immobilized (fallen over). That contact information is only displayed when the watch detects, through the use of a breaker sensor, that the user is immobilized.
## How we built it
The whole system was made with Arduino components owned by one of the teammates. We went to buy external sensors and then put together a fully functioning system comprised of two Arduinos: one responsible for the display, motion detection, and distance detection, and one responsible for the heart rate tracking and display.
## Challenges we ran into
One significant challenge was that we were unable to acquire the components we wanted, since the electronics store that had the necessary components was closed. As such, we used Arduinos, which limited our possibilities. Additionally, it was challenging to get the wiring of the entire circuit correct. Since our device is a prototype, the wires do not remain stable in the circuit and would fall out easily. That could be an area of improvement in the future. Another challenge was that the wires connected to the heart rate pulse sensor broke as we were wiring everything together. Fortunately, the sensor was functioning properly before it was broken, but it was still difficult to find a remedy for that.
## Accomplishments that we're proud of
We are certainly proud of the prototype we built as it encapsulates all the functioning features that we envisioned. We are proud that we were able to use most of the external sensors we purchased, and for some of our teammates, it was their first time working with Arduino, which is a significant accomplishment for them to learn this new programming and prototyping skill.
## What we learned
We learned that we should be a bit more prepared before the hackathon in terms of acquiring materials so we don't have to spend valuable working time buying parts or sensors. In terms of the project, we learned how to use sensors and write programs with Arduinos, and also learned how to troubleshoot errors consistently, since we ran into errors both in our program and in our wiring. Hence, it was a valuable learning experience for all teammates.
## What's next for Navigation Essential Watch
We believe the next stage would certainly be moving away from prototyping and trying to make the entire system more compact and wearable. This includes perhaps improving the circuit structure of the watch for simplification purposes. We could also consider adding more functionality to the watch to make it more user friendly, adding wireless connections, location tracking for safety, maybe even car and human detection. These are all possible extensions to the N.E.W.
|
## Inspiration
What inspired us was we wanted to make an innovative solution which can have a big impact on people's lives. Most accessibility devices for the visually impaired are text to speech based which is not ideal for people who may be both visually and auditorily impaired (such as the elderly). To put yourself in someone else's shoes is important, and we feel that if we can give the visually impaired a helping hand, it would be an honor.
## What it does
The proof of concept we built is separated into two components. The first is an image processing solution which uses OpenCV and Tesseract to act as an OCR, taking an image as input and creating a text output. This text is then used as input to the second part, a working 2-by-3 braille cell that converts any text into a braille output and vibrates specific servo motors to represent the braille, with a half-second delay between letters. The outputs were adapted for servo motors, which provide tactile feedback.
## How we built it
We built this project using an Arduino Uno, six LEDs, six servo motors, and a python file that does the image processing using OpenCV and Tesseract.
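The Python side pairs Tesseract OCR with a text-to-braille lookup before driving the servos; a trimmed sketch of that idea (the braille map below covers only a few letters):

```python
import cv2
import pytesseract

# Six-dot braille cells, dots numbered 1-6 (only a few letters shown here)
BRAILLE = {"a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5], "l": [1, 2, 3]}

def image_to_braille(image_path):
    """OCR the image, then yield (letter, dots) pairs for the Arduino to vibrate."""
    text = pytesseract.image_to_string(cv2.imread(image_path)).lower()
    for letter in text:
        if letter in BRAILLE:
            yield letter, BRAILLE[letter]  # each dot index maps to one servo
```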
## Challenges we ran into
Besides syntax errors, on the LED side of things there were challenges in converting the text to braille. Once that was overcome, and after some simple troubleshooting of menial errors like type comparisons, this part of the project was completed. In terms of image processing, getting the algorithm to properly recognize the text was the main challenge.
## Accomplishments that we're proud of
We are proud of having completed a proof of concept, which we have isolated into two components. Consolidating these two parts is only a matter of simple work, but these two working components are the fundamental core of the project, and we consider it to be the start of something revolutionary.
## What we learned
We learned to iterate quickly and implement lateral thinking. Instead of being stuck in a small paradigm of thought, we learned to be more creative and find alternative solutions that we might have not initially considered.
## What's next for Helping Hand
* Arrange everything in one Android app so the product is capable of mobile use.
* Develop a neural network that will throw out false text recognitions (which usually look like a few characters without any meaning).
* Provide an API that will be able to connect our glove to other apps, where the user may, for example, read messages.
* Consolidate the completed project components, which means implementing Bluetooth communication between a laptop processing the images, using OpenCV & Tesseract, and the Arduino Uno which actuates the servos.
* Furthermore, we must design the actual glove product, implement wire management, add an armband holder for the Uno with a battery pack, and position the servos.
|
## Inspiration
Assistive Tech was our assigned track; we had done it before and knew we could innovate with cool ideas.
## What it does
It adds a camera and sensors which instruct a pair of motors that will lightly pull the user in a direction to avoid a collision with an obstacle.
## How we built it
We used a camera pod for the stick, on which we mounted the camera and sensor. At the end of the cane we joined a chasis with the motors and controller.
## Challenges we ran into
We had never used a voice command system, paired with a raspberry pi and also an arduino, combining all of that was a real challenge for us.
## Accomplishments that we're proud of
Physically completing the cane and also making it look pretty, many of our past projects have wires everywhere and some stuff isn't properly mounted.
## What we learned
We learned to use Dialog Flow and how to prototype in a foreign country where we didn't know where to buy stuff lol.
## What's next for CaneAssist
As usual, all our projects will most likely be fully completed at a later date, and hopefully this gets to be a real product that can help people out.
|
losing
|
## Inspiration
People love volunteering. It’s a great way to get outdoors, meet people in your neighborhood, and improve your community. At the same time, there are a lot of people who need volunteers: local leaders spearheading a new environmental initiative, students building nonprofits to help their community, or an impassioned citizen just trying to clean up the parks for a weekend.
However, despite this strong supply of volunteers, and this strong need for them, linking up volunteers with initiatives is extremely difficult. While social media avenues like Facebook or Twitter can be used to post your weekend trash pick-up, they are simply too busy. Their platforms are too generalized and have more content about cats than about volunteering opportunities.
So we asked ourselves, how do we fix this problem? How do we support sustainability groups, picking up garbage and planting new trees, all while making it easier for citizens to find these initiatives and give a hand?
Well this weekend, we built a Service Uniting Students for Sustainability Initiatives. Introducing, SUSSI: Volunteering Made Easy.
## What it does
SUSSI is an online platform connecting sustainability events, like trash collection and tree planting, with volunteers seeking to improve their community. Our goal is to make it as easy as possible for people with an afternoon free to find an avenue to help their community.
On SUSSI, volunteers can see an event, their location, time, and purpose, allowing for swift decision making. On the other hand, passionate organizers can easily create events for volunteers to see, allowing them to increase initiative participation, and thereby community engagement. Some common events could be trash collections in densely populated areas, lending a hand in a food drive, or helping plant trees in a local park.
Aside from just linking people together, SUSSI also handles all the logistics of organizing. One of the biggest draws for volunteers is having recognized volunteering hours. However, for an organizer, keeping track of hours and having them issued is a logistical nightmare. SUSSI tracks and issues volunteering hours automatically.
SUSSI also gamifies volunteering, introducing a community leaderboard inspiring other people to help the community. Overall, SUSSI is building strong relationships in local communities, simplifying the search for volunteering, and, most importantly, fostering an environment of sustainability throughout a community.
## How we built it
The front-end website was built in React.JS and the backend server is powered by Django. Using the REST framework, client devices communicate to our servers, transacting information through our Postgres database. All pages were made with love by us.
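A bare-bones sketch of how an event could be modeled and exposed through the Django REST Framework (the field names here are assumptions):

```python
from django.db import models
from rest_framework import serializers, viewsets

class Event(models.Model):
    title = models.CharField(max_length=200)
    location = models.CharField(max_length=200)
    starts_at = models.DateTimeField()
    description = models.TextField(blank=True)

class EventSerializer(serializers.ModelSerializer):
    class Meta:
        model = Event
        fields = "__all__"

class EventViewSet(viewsets.ModelViewSet):
    """CRUD endpoints the React front end hits to list, create, and join events."""
    queryset = Event.objects.all()
    serializer_class = EventSerializer
```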
## Challenges we ran into
Building a fully functioning website has not been an easy journey. Even though we started smoothly, we first encountered issues when integrating back-end with front-end. Some features we wanted to include were not possible to implement in such a short time such as a fully functioning calendar displaying all events taking place and google-map level location-area searching. Something all of us will certainly take into consideration next time is to more strictly follow a set-up plan. As we became more and more excited about the project, functionality issues started piling on top of us. We often would tackle one feature or bug, but then switch to another midway through. For example, we didn't realize that the profile page was not working until 5 hours from submission time. Near the end however, we started planning out what we need to do and focusing on fully completing features.
Lastly, it was extremely difficult for us to find an adequate brand identity for the project. Even though we eventually managed to put all key features into place, it wasn’t until the last day of the hackathon that we finally got a color scheme together and a common style sheet integrated.
## Accomplishments that we're proud of
We are all proud of how well we formed team chemistry and a very supportive environment. We did not know each other prior to meeting in the welcoming seminar room, yet we spent the last 48 hours cracking jokes, listening to music, and sharing stories about our lives. We are also proud of ~~surviving~~ competing in the fire noodles competition. Even though three of us nearly died, Kevin managed to win a water bottle by sucking on the noodles in 10 seconds, for which we are all very proud of him. We even managed to go out of our comfort zone by taking part in karaoke and singing Tay Tay songs into the night.
From a more project-focused aspect, we are proud of the branding scheme we came up with and how well the colors mesh together. We hope that by creating a friendly brand identity, we streamline the user experience and increase connections between volunteers and organizers.
## What we learned
This hackathon was a first for many reasons. Three out of four teammates had never been to a hackathon, making it both an exciting and nerve-racking experience for the team. We did not know what we were going into, but despite our hesitations, we had an amazing time. Likewise, we each tried to push our boundaries as programmers and thinkers, picking up new tech stacks (Django, React) and isolating core issues in our community. In fact, ¼ of our team came into this hackathon with little to zero programming experience. Despite this, Tom learned how to use OpenCV for creating our volunteer hour certificate, and picked up a bit of React, designing our landing, about, and profile pages.
## What's next for S.U.S.S.I.
We believe SUSSI is a viable product that could be implemented for practical use in communities across the US and globally. Our website presents itself as a minimum viable product upon which further improvements can be made from initial user inputs and opinions. We hope that we will have the opportunity to collaborate further on building such a platform which significantly contributes to sustainability and mitigating climate change worldwide.
|
## Inspiration
We love volunteering and would love to see more inclusivity of events in our community.
## What it does
VoluntEasy is a platform designed to make volunteering easy. Users start by registering for an account and signing in. Users can host their own volunteering events or join ones hosted by others. Each event has a title, date and time, location, and description. Users can view a list of events they are a part of, and click to view specific details about an individual event.
## How we built it
MERN stack (MongoDB, Express, React, Node.js)
Google Maps API
Google Places API
## Challenges we ran into
Promises/Asynchronous programming
## Accomplishments that we're proud of
* Used the Google Maps and Places APIs to create an autocomplete search bar for locations. We then used the coordinates from the selected location to place a marker on a map showing the location of each individual event.
* Creating an intuitive and aesthetic UI using HTML and CSS. Used colour palettes and drop-shadow to create the illusion of depth.
## What we learned
Web dev isn't for everyone
|
## Inspiration
*Do you have a habit that you want to fix?*
We sure do. As high school students studying for exams, we noticed we were often distracted by our phones, which greatly reduced our productivity. A study from Duke University found that up to 45% of all our daily actions are performed habitually, which is a huge problem especially during a time when many of us are confined to our homes, negatively impacting our productivity, as well as mental and physical health.
To fix this issue, we created HabiFix. We took the advice from a Harvard research paper to create a program that would not only help break unhealthy habits, but form healthy ones in place as well.
## What it does
Unlike many other products, which have to be installed by professionals, are highly specialized for one single habit, or are just expensive, HabiFix only requires a computer with a webcam and can help you fix a multitude of different habits. Usage is very simple too: just launch HabiFix on your computer, and that’s it! HabiFix will run in the background, and as soon as you perform an undesirable habit, it will remind you. According to Harvard Health Publishing, the most important thing in habit fixing is a reminder, since people often perform habits without realizing it. So when you’re studying for tomorrow’s test and pick up your phone, your computer will gently remind you to get off your phone, so you can ace that test.
Every action you perform is uploaded to our website, where users can see their statistics by logging in. Another important aspect of habit fixing that Harvard found is reward, which we believe we can provide users by showing them their growth over time. On the website, users are able to view how many times they had to be reminded, and by showing them how they have required fewer reminders throughout the week, they’ll be able to know they have been fixing their habits.
## How we built it
The ML aspect of our project uses TensorFlow and OpenCV, more specifically an object detection library, to capture the user’s actions. We wrote a program that uses OpenCV to provide webcam data to TensorFlow, which outputs the user’s position relative to other objects; this is then analyzed by our Python code to determine if a user is performing a specific action. We then created a Flask server which converts the analyzed data into JSON and stores it in our database, allowing our website to fetch the data. The HabiFix web app is built with React, and Chart.js was used to display the data that was collected.
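As a rough sketch of how such a loop can be wired together (the endpoint URL, the habit rule, and the `detect_objects` helper below are illustrative placeholders, not the actual HabiFix implementation):

```python
import cv2
import requests

API_URL = "http://localhost:5000/habits"   # hypothetical Flask endpoint name

def detect_objects(frame):
    """Placeholder for the TensorFlow object-detection step described above.
    Expected to return {label: (x1, y1, x2, y2)} for each detected object."""
    return {}

def boxes_overlap(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

cap = cv2.VideoCapture(0)                   # laptop webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    objects = detect_objects(frame)
    # Illustrative habit rule: a phone box overlapping the person box counts as "on phone"
    if "cell phone" in objects and "person" in objects and \
            boxes_overlap(objects["cell phone"], objects["person"]):
        requests.post(API_URL, json={"habit": "phone", "remind": True})
```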
## Challenges we ran into
The biggest challenge we ran into was incorporating the machine learning aspect in it, as it was our first time using TensorFlow. While setting up the object detection algorithm using TensorFlow, we had difficulties installing all the dependencies and modules, and spent quite some time properly understanding the TensorFlow documentation which was needed to get outputs for analysis. However, after sleepless nights and a newfound love for coffee, we were able to finish setting up TensorFlow and write a program to extract the data and analyze it, which worked better than we thought it would, catching our developers on their phones even during development.
## Accomplishments that we're proud of
We’re quite proud of the accuracy that our program has in detecting habits and believe it is the key reason why this program will be so effective. So far, unless you make a conscious effort to hide from the camera, which wouldn’t be the case for those wanting to remove a habit, the program will detect the habit almost instantly. The fact that our program caught us off guard on our phones during development is a clear indicator that our program does what it’s supposed to, and we hope to use this tool ourselves to continue development and break our own bad habits.
## What we learned
Our team pretty much learnt everything we had to use for this project. The only tools that our team were familiar with were basic HTML/CSS and Python, which not all the members knew how to use. Throughout development, we learnt a lot about frontend, backend, and database development, and TensorFlow is definitely a tool we’re happy to have learnt.
## What's next for HabiFix
In the future, we hope to add to our list of habits that we can detect, and possibly create a mobile application to track habits even when users are away from their computer. We believe this idea has serious potential for preventing not only simple habits like biting nails, but also other habits such as drug and substance abuse and addiction.
|
losing
|
## Inspiration
The three of us believe that our worldview comes from what we read. Online news articles serve as that engine, and for something as crucial as learning about current events, an all-encompassing worldview is not so accessible. Those new to politics and just entering the discourse may perceive an extreme partisan view on a breaking news story to be the party's general take; on the flip side, those with entrenched, radicalized views miss out on having productive conversations. Information is meant to be shared, and perspectives from journals big and small should be heard.
## What it does
WorldView is a Google Chrome extension that activates whenever someone is on a news article. The extension describes the overall sentiment of the article, describes "clusters" of other articles discussing the topic of interest, and provides a summary of each article. A similarity/dissimilarity score is displayed between pairs of articles so readers can read content with a different focus.
## How we built it
Development was broken into three components: scraping, NLP processing + API, and Chrome extension development. Scraping involved using Selenium, BS4, DiffBot (an API that scrapes text from websites and sanitizes it), and Google Cloud Platform's Custom Search API to extract similar documents from the web. NLP processing used NLTK and the K-Prototypes clustering algorithm. The Chrome extension was built with React, which talks to a Flask API; the Flask server is hosted on an AWS EC2 instance.
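As a simplified illustration of the NLP side, per-article sentiment and a pairwise similarity matrix for a batch of documents can be computed along these lines (this sketch uses NLTK's VADER and TF-IDF cosine similarity as stand-ins; the actual pipeline combined sentiment scores with top keywords and clustered them with K-Prototypes):

```python
# requires: nltk.download("vader_lexicon")
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def analyze(documents):
    # Overall sentiment per article (VADER compound score, -1 to 1)
    sia = SentimentIntensityAnalyzer()
    sentiments = [sia.polarity_scores(doc)["compound"] for doc in documents]

    # Pairwise similarity/dissimilarity between articles via TF-IDF vectors
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(documents)
    similarity = cosine_similarity(tfidf)   # NxN matrix, 1.0 on the diagonal
    return sentiments, similarity
```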
## Challenges we ran into
Scraping: Getting enough documents that match the original article was a challenge because of the rate limiting of the GCP API. NLP Processing: one challenge here was determining metrics for clustering a batch of documents. Sentiment scores + top keywords were used, but more robust metrics could have been developed for more accurate clusters. Chrome extension: Figuring out the layout of the graph representing clusters was difficult, as the library used required an unusual way of stating coordinates and edge links. Flask API: One challenge in the API construction was figuring out relative imports.
## Accomplishments that we're proud of
Scraping: Recursively discovering similar documents based on repeatedly searching up headline of an original article. NLP Processing: Able to quickly get a similarity matrix for a set of documents.
## What we learned
Learned a lot about data wrangling and shaping for front-end and backend scraping.
## What's next for WorldView
Explore possibility of letting those unable to bypass paywalls of various publishers to still get insights on perspectives.
|
## Inspiration
As the Coronavirus pandemic continues to impact our lives, students are forced to stay at home and deal with the difficulties that come with online learning.
Personally, we have struggled with connection issues, professors who speak unclearly, and noisy environments. We can only imagine what lectures are like for students who have a language barrier, struggle with hearing impairment, or do not have access to a quiet and comfortable learning environment.
For our hack we wanted to tackle this problem and create a tool that helps improve learning experiences and make classes more accessible for struggling students, with hopes of making a positive social impact by helping people communicate during a time filled with challenges and uncertainty.
## What it does
EasyCC is a Chrome extension that provides real-time closed captioning for any audio source running on your computer. EasyCC supports all platforms including Zoom, Collaborate Ultra, Discord, and Google Meet, and can even transcribe YouTube videos!
## How we built it
We first prototyped the UI in Figma and developed the front-end for the Chrome extension using HTML and CSS. Using Node.js, we then integrated tools that allowed us to capture audio from the desktop and process speech into text using Google Cloud’s Speech-to-Text engine. Using Socket.io, we relayed the transcripts to our front-end to be displayed in real time for the user.
## Challenges we ran into
Most of the issues that we ran into were related to the backend and its integration. In particular, setting up our software architecture was challenging because we needed to continuously pass large amounts of data from the backend to the frontend, which required a good understanding of how the web works and how each component interacts with the others. Since calling the Google Speech-to-Text API must be done in the backend, we had to integrate it effectively with the frontend so that the transcribed messages are displayed approximately in real time. The main hurdle was the lag due to the constant calls between the frontend and the backend, which required us to integrate Socket.io into our codebase, another feat in and of itself. Initially, the audio stream would not record, which we discovered to be a permission issue outside of our code, so we had to address that issue in order for the Google Speech-to-Text API to work. Oftentimes, the documentation for the APIs is hard to understand due to a lack of explanations and examples, so we had to engage in some trial and error and adapt the code to meet our needs in the application.
## Accomplishments that we're proud of
We are proud to have created an application that improves the experience of online lectures using off the shelf technology. We wanted to keep the application straightforward so that we can have it running quickly. Despite having little familiarity with web development and chrome extensions, we managed to create a frontend and backend, and more importantly, link these two together to create a functional application. In the process, we gained exposure to relevant web technologies and picked up researching skills, which is critical to software development. Also, we collaborated effectively to polish our ideas, offer different approaches to solving complex problems, and complement each other’s skills. Finally, we learned how to seek help from mentors effectively, being able to identify issues beyond the scope of our knowledge and research, and using the pointers they provided to devise an effective solution.
## What we learned
Since we all had little experience with web development, this was our first time using the relevant technologies in an integrated manner. In particular, connecting the frontend and the backend is a major challenge that we are proud to have completed, enabling us to better understand the architecture of web applications.
We learned a lot about the services we used, namely Chrome Extensions, Google Speech-To-Text API, Socket.io. This was our first time using these resources, and we are very happy with how we used them in our application.
Since our program is constantly communicating between the frontend and the backend, we decided to use Socket.io to facilitate these interactions as it is designed for instant messaging. This vastly improves the performance when displaying the transcribed message on the overlay compared to constantly making HTTP calls. Error diagnosis is a constant thing we dealt with when developing software, especially when incorporating unfamiliar APIs to our codebase. In particular, although the Google Speech to Text API seemed imposing upon first glance, we are able to read through the documentation, understand what the code is doing, and identify errors preventing the service from running correctly. This was a great experience to us and we have been exposed to several great services during this hackathon.
## What's next for EasyCC
EasyCC has a lot of potential to become a viable captioning service. We hope to add features that will improve our extension and make it even more accessible and useful. For one, we would like to use a translation API, which will connect users all over the world, allowing them to communicate and understand different languages. We could also potentially publish EasyCC onto the Chrome Web Store, so that our service is readily available to anybody.
|
## What it does
"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points.
## How We built it
Our project is comprised of many interconnected components, which we detail below:
#### Formatting Engine
To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.
#### Voice-to-text
We use Google-powered speech recognition (via the browser’s Web Speech API) to transcribe audio from the microphone of the laptop. Mobile phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed.
#### Topic Analysis
Fundamentally, we needed a way to determine whether a given sentence included a request for an image or not. So we gathered a repository of sample sentences from BBC news articles as “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s deep learning text classification library, FastText, to train a custom NN that could perform text classification.
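For reference, training that kind of binary classifier with FastText’s Python bindings looks roughly like this (the file name, hyperparameters, and example lines are placeholders rather than the training set we actually used):

```python
import fasttext

# train.txt holds one labeled example per line, e.g.:
#   __label__image and here you can see a picture of a golden retriever
#   __label__noimage the committee voted to delay the decision until spring
model = fasttext.train_supervised(input="train.txt", epoch=25, lr=0.5, wordNgrams=2)

labels, probs = model.predict("here is a photo of the eiffel tower at night")
print(labels[0], probs[0])   # e.g. __label__image 0.97
```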
#### Image Scraping
Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen.
#### Graph Generation
Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time.
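A minimal sketch of that step: render a figure to an image file that the formatting engine can drop onto a slide (the data and labels here are made up; extracting the numbers from the speech is handled elsewhere in the pipeline):

```python
import matplotlib
matplotlib.use("Agg")                      # render off-screen, no display needed
import matplotlib.pyplot as plt

def make_graph(title, x, y, path="graph.png"):
    """Render a simple chart to a PNG that the slide engine can embed."""
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, y, marker="o")
    ax.set_title(title)
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)
    return path

make_graph("Revenue growth", [2019, 2020, 2021, 2022], [10, 18, 31, 50])
```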
#### Sentence Segmentation
When we receive text back from the speech-to-text API, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis) some trouble because the text is grammatically incorrect. We use a sequence-to-sequence transformer architecture, *seq2seq*, and transfer-learned a new head capable of classifying the borders between sentences. This let us add punctuation back into the text before the rest of the processing pipeline.
#### Text Title-ification
Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
#### Text Summarization
When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous.
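A stripped-down sketch of that idea using spaCy’s dependency parser (the actual pruning rules are more involved, so treat this only as an illustration of starting from a heavily connected verb and keeping its core arguments):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def bullet_point(sentence):
    """Keep the main verb plus its subject/object subtrees; drop everything else."""
    doc = nlp(sentence)
    verbs = [t for t in doc if t.pos_ == "VERB"]
    if not verbs:
        return sentence
    root = max(verbs, key=lambda t: len(list(t.children)))  # verb with most dependents
    keep = {root.i}
    for child in root.children:
        if child.dep_ in {"nsubj", "nsubjpass", "dobj", "neg", "aux", "prt"}:
            keep.update(t.i for t in child.subtree)
    return " ".join(t.text for t in doc if t.i in keep and not t.is_punct)

print(bullet_point("The committee, after weeks of heated debate, finally approved the new budget."))
# e.g. "The committee approved the new budget" (exact output depends on the parse)
```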
#### Mobile Clicker
Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets.
#### Internal Socket Communication
In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides.
## Challenges We ran into
* Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis.
* The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop.
## Accomplishments that we're proud of
* Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
* Working on an unsolved machine learning problem (sentence simplification)
* Connecting a mobile device to the laptop browser’s mic using WebSockets
* Real-time text analysis to determine new elements
## What's next for ImpromPPTX
* Predict what the user intends to say next
* Scraping Primary sources to automatically add citations and definitions.
* Improving text summarization with word reordering and synonym analysis.
|
partial
|
## Inspiration
In light of the ongoing conflicts in war-torn countries, many civilians face hardships. Recognizing these challenges, we were inspired to build LifeLine Aid to direct vulnerable groups to essential medical care, health services, shelter, food and water assistance, and other deprivation relief.
## What it does
LifeLine Aid provides multifunctional tools that enable users in developing countries to locate resources and identify dangers nearby. Using the user's location, the app alerts them when a dangerous situation or a center for help is nearby. It also facilitates communication, allowing users to share live videos and chat updates regarding ongoing issues. An upcoming feature will highlight available resources, like nearby medical centers, and notify users if these centers are running low on supplies.
## How we built it
Originally, the web backend was to be built using Django, a trusted framework in the industry. As we progressed, we realized that the effort Django demanded was not sustainable: we made no progress within the first day. Drawing on one team member’s extensive research into asyncio, we decided to switch to FastAPI, a framework also trusted by companies such as Microsoft. Using this framework had both its benefits and costs, but with Django proving to be a roadblock, the switch was ultimately the right call.
Our backend proudly uses CockroachDB, an unstoppable force to be reckoned with. CockroachDB allowed our code to scale and continue to serve those who suffer from the effects of war.
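A minimal sketch of what one of our FastAPI endpoints backed by CockroachDB could look like (the table, fields, and connection string here are invented for illustration; CockroachDB speaks the PostgreSQL wire protocol, so a standard Postgres driver works):

```python
import psycopg2
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# Illustrative connection string; 26257 is CockroachDB's default port
conn = psycopg2.connect("postgresql://user:pass@localhost:26257/lifeline")

class Report(BaseModel):
    description: str
    latitude: float
    longitude: float

@app.post("/reports")
def create_report(report: Report):
    # Persist a hazard/aid report with its coordinates
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO reports (description, latitude, longitude) VALUES (%s, %s, %s)",
            (report.description, report.latitude, report.longitude),
        )
    conn.commit()
    return {"status": "ok"}
```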
## Challenges we ran into
In order to pinpoint hazards and help, we needed to obtain, store, and reverse-engineer geospatial coordinate points which we would then present to users in a map-centric manner. We initially struggled with converting the geospatial data from a degrees-minutes-seconds format to decimal degrees and storing the converted values as points on the map, which were then stored as unique 50-character SRID values. Luckily, one of our teammates had some experience with processing geospatial data, so drafting coordinates on a map wasn’t our biggest hurdle to overcome. Another challenge we faced was certain edge cases in our initial Django backend that resulted in invalid data. Since some of those outputs would be relevant to our project, we had to make an executive decision to change backends midway through. We decided to go with FastAPI. Although FastAPI brought its own challenge of turning SQL results into usable data, it was our way of overcoming our Django situation. One last challenge we ran into was our overall source control. A mixture of slow and unbearable WiFi, combined with tedious local git repositories not syncing correctly, created some frustrating deadlocks and holdbacks. To combat this downtime, we resorted to physically drafting and planning out how each component of our code would work.
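The degrees-minutes-seconds conversion itself boils down to a few lines of arithmetic; a sketch (parsing the raw input format is omitted):

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds plus hemisphere (N/S/E/W) to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# 48° 51' 24" N, 2° 21' 03" E  ->  approximately (48.8567, 2.3508)
print(dms_to_decimal(48, 51, 24, "N"), dms_to_decimal(2, 21, 3, "E"))
```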
## Accomplishments that we're proud of
Three out of the four in our team are attending their first hackathon. The experience of crafting an app and seeing the fruits of our labor is truly rewarding. The opportunity to acquire and apply new tools in our project has been exhilarating. Through this hackathon, our team members were all able to learn different aspects of creating an idea into a scalable application. From designing and learning UI/UX, implementing the React-Native framework, emulating iOS and Android devices to test and program compatibility, and creating communication between the frontend and backend/database.
## What we learned
This challenge pushed us to dive into technologies that are used widely in our daily lives. Spearheading the competition with a framework trusted by huge companies such as Meta and Discord, we chose to explore the capabilities of React Native. Our team includes three students attending their first hackathon, and the opportunity to explore these technologies has left us with a skill set for a lifetime.
With the concept of the application in mind, we researched and discovered that the best way to represent our data is through geospatial data. CockroachDB’s extensive tooling and support allowed us to investigate the usage of geospatial data extensively, as our backend team traversed the complexity and sheer scale of the technology. We are extremely grateful to have had this opportunity to network and to use tools that will be useful in the future.
## What's next for LifeLine Aid
There are a plethora of avenues to further develop the app, including enhanced verification and rate limiting. Other options include improved hosting using Azure Kubernetes Service (AKS). This hackathon project is planned to be maintained further into the future as a project that others, whether new or experienced in this field, can collaborate on.
|
## Inspiration
In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol.
## What it does
Our app allows for users to search a “hub” using a Google Map API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.
## How I built it
We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and the Firebase API.
## Challenges I ran into
Our group unfortunately faced a number of unexpected challenges, including losing a team member mid-hack due to an emergency, working across 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!
## Accomplishments that I'm proud of
We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.
## What I learned
Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.
## What's next for SafeHubs
Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
|
## Inspiration
One of the biggest roadblocks during disaster relief is reestablishing the first line of communication between community members and emergency response personnel. Whether it is the aftermath of a hurricane devastating a community or searching for individuals in the backcountry, communication is the key to speeding up these relief efforts and ensuring a successful rescue of those at risk.
In the event of a hurricane, blizzard, earthquake, or tsunami, cell towers and other communication nodes can be knocked out leaving millions stranded and without a way of communicating with others. In other instances where skiers, hikers, or travelers get lost in the backcountry, emergency personnel have no way of communicating with those who are lost and can only rely on sweeping large areas of land in a short amount of time to be successful in rescuing those in danger.
This is where Lifeline comes in. Our project is all about leveraging communication technologies in a novel way to establish communication quickly without the need for pre-existing infrastructure such as cell towers, satellites, or wifi access points, thereby speeding up natural disaster relief efforts and search and rescue missions, and providing real-time metrics for emergency personnel to leverage.
Lifeline uses LoRa and Wifi technologies to create an on-the-fly mesh network to allow individuals to communicate with each other across long distances even in the absence of cell towers, satellites, and wifi. Additionally, Lifeline uses an array of sensors to send vital information to emergency response personnel to assist with rescue efforts thereby creating a holistic emergency response system.
## What it does
Lifeline consists of two main portions. First is a homebrewed mesh network made up of IoT and LoRaWAN nodes built to extend communication between individuals in remote areas. The second is a control center dashboard to allow emergency personnel to view an abundance of key metrics of those at risk such as heart rate, blood oxygen levels, temperature, humidity, compass directions, acceleration, etc.
On the mesh network side, Lifeline has two main nodes. A control node and a network of secondary nodes. Each of the nodes contains a LoRa antenna capable of communication up to 3.5km. Additionally, each node consists of a wifi chip capable of acting as both a wifi access point as well as a wifi client. The intention of these nodes is to allow users to connect their cellular devices to the secondary nodes through the local wifi networks created by the wifi access point. They can then send emergency information to response personnel such as their location, their injuries, etc. Additionally, each secondary node contains an array of sensors that can be used both by those in danger in remote communities or by emergency personnel when they venture out into the field so members of the control center team can view their vitals. All of the data collected by the secondary nodes is then sent using the LoRa protocol to other secondary nodes in the area before finally reaching the control node where the data is processed and uploaded to a central server. Our dashboard then fetches the data from this central server and displays it in a beautiful and concise interface for the relevant personnel to read and utilize.
Lifeline has several main use cases:
1. Establishing communication in remote areas, especially after a natural disaster
2. Search and Rescue missions
3. Providing vitals for emergency response individuals to control center personnel when they are out in the field (such as firefighters)
## How we built it
* The hardware nodes used in Lifeline are all built on the ESP32 microcontroller platform along with a SX1276 LoRa module and IoT wifi module.
* The firmware is written in C.
* The database is a real-time Google Firebase.
* The dashboard is written in React and styled using Google's Material UI package.
## Challenges we ran into
One of the biggest challenges we ran into in this project was integrating so many different technologies together. Whether it was establishing communication between the individual modules, getting data into the right formats, working with new hardware protocols, or debugging the firmware, Lifeline provided our team with an abundance of challenges that we were proud to tackle.
## Accomplishments that we're proud of
We are most proud of having successfully integrated all of our different technologies and created a working proof of concept for this novel idea. We believe that combining LoRa and wifi in this way can pave the way for a new era of fast communication that doesn't rely on heavy infrastructure such as cell towers or satellites.
## What we learned
We learned a lot about new hardware protocols such as LoRa as well as working with communication technologies and all the challenges that came along with that such as race conditions and security.
## What's next for Lifeline
We plan on integrating more sensors in the future and working on new algorithms to process our sensor data to get even more important metrics out of our nodes.
|
partial
|
## Inspiration
Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit video game "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb!
## What it does
The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules include a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8 by 8 "invisible maze" where players must cooperate to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge".
## How we built it
**The Explosive**
The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, Keypads, Mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D printed plates.
**The Code**
Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation.
Using the Grove LCD RGB Backlight Library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was also used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol.
The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module, we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. Using the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This further optimizes the maze hardware, minimizing the required wiring and improving signal communication.
## Challenges we ran into
Participating in the biggest hardware hackathon in Canada, using the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt and utilize components that better suited our needs, as well as to be flexible with the hardware provided.
Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Optimizing software and hardware for efficient resource usage was therefore necessary, and remained a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system, to counteract this we had to come up with unique solutions mentioned below:
## Accomplishments that we're proud of
During the Makeathon, we often faced the issue of buttons creating noise, and oftentimes that noise would disrupt the entire system. To counteract this issue, we had to discover creative solutions that did not use buttons to get around the noise. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module.
## What we learned
* Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules.
* Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development.
* Enhancing overall group performance is achieved by assigning individual tasks.
## What's next for Keep Hacking and Nobody Codes
* Ensure the elimination of any unwanted noises in the wiring between the main board and game modules.
* Expand the range of modules by developing additional games such as "Morse-Code Game," "Memory Game," and others to offer more variety for players.
* Release the game to a wider audience, allowing more people to enjoy and play it.
|
# Inspiration
Many cities in the United States are still severely behind on implementing infrastructure improvements to meet ADA (Americans with Disabilities Act) accessibility standards. Though 1 in 7 people in the US have a mobility-related disability, research has found that 65% of curb ramps and 48% of sidewalks are not accessible, and only 13% of state and local governments have transition plans for implementing improvements (Eisenberg et al, 2020). To make urban living accessible to all, cities need to upgrade their public infrastructure, starting with identifying areas that need the most improvement according to ADA guidelines. However, having city dispatchers travel and view every single area of a city is time consuming, expensive, and tedious. We aimed to utilize available data from Google Maps to streamline and automate the analysis of city areas for their compliance with ADA guidelines.
# What AcceCity does
AcceCity provides a machine learning-powered mapping platform that enables cities, urban planners, neighborhood associations, disability activists, and more to identify key areas to prioritize investment in. AcceCity identifies both problematic and up-to-standards spots and provides an interactive, dynamic map that enables on-demand regional mapping of accessibility concerns and improvements and street views of sites.
### Interactive dynamic map
AcceCity implements an interactive map, with city and satellite views, that enables on-demand mapping of accessibility concerns and improvements. Users can specify what regions they want to analyze, and a street view enables viewing of specific spots.
### Detailed accessibility concerns
AcceCity calculates scores for each concern based on ADA standards in four categories: general accessibility, walkability, mobility, and parking. Examples of the features we used for each of these categories include the detection of ramps in front of raised entrances, the presence of sidewalks along roads, crosswalk markings at street intersections, and the number of handicap-reserved parking spots in parking lots. In addition, suggestions for possible solutions or improvements are provided for each concern.
### Accessibility scores
AcceCity auto-generates metrics for areas by computing regional scores (based on the scan area selected by the user) by category (general accessibility, walkability, mobility, and parking) in addition to an overall composite score.
# How we built it
### Frontend
We built the frontend using React with TailwindCSS for styling. The interactive dynamic map was implemented using the Google Maps API, and all map and site data are updated in real-time from Firebase using listeners.
New scan data are also instantly saved to the cloud for future reuse.
### Machine learning backend
First, we used the Google Maps API to send street-view images to the backend. We looked for handicapped parking, sidewalks, disability ramps, and crosswalks, and used computer vision, by custom-fitting CLIP, a zero-shot learning model from OpenAI, to automatically detect those objects in the images. We tested the model using labeled data from the Scale Rapid API.
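As an illustration of the zero-shot step, CLIP scores an image against natural-language prompts; a condensed sketch using the Hugging Face port of CLIP (the prompts and checkpoint here are illustrative choices, not necessarily the exact ones used in AcceCity):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = [
    "a wheelchair ramp at a building entrance",
    "a sidewalk along a road",
    "a marked pedestrian crosswalk",
    "a handicap-reserved parking spot",
    "an ordinary street with none of these",
]

image = Image.open("street_view.jpg")
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # shape: (1, len(prompts))
probs = logits.softmax(dim=-1)[0]
for prompt, p in zip(prompts, probs):
    print(f"{p.item():.2f}  {prompt}")
```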
After running this endpoint on all images in a region of interest, users can calculate a metric that represents the accessibility of that area for people with disabilities. We call that metric the ADA score, which can be good, average, or poor. (Regions with a poor ADA score should be specifically targeted by city planners to increase their accessibility.) We calculated this ADA score based on features such as the number of detected ramps, handicapped parking spaces, crosswalks, and sidewalks from the Google Maps image analysis discussed previously, in addition to the number of accidents per year recorded in that area. We trained a proof-of-concept model using mage.ai, which provides an intuitive and high-level way to train custom models.
## Challenges we ran into
* Applying ML to diverse urban images, especially since it’s so “in the wild”
* Lack of general ML models for accessibility prediction
* Developing methods for calculating representative / accurate metrics
* Running ML model on laptops: very computationally expensive
## Accomplishments that we're proud of
* We developed the first framework that connects Google Maps images with computer vision models to analyze the cities we live in.
* We developed the first computer vision framework/model aimed to detect objects specific for people with disabilities
* We integrated the Google Maps API with a responsive frontend that allows users to view their areas of interest and enter street view to see the results of the model.
## What we learned
* We learned how to integrate the Google Maps API for different purposes.
* We learned how to customize the OpenAI zero shot learning for specific tasks.
* How to use Scale Rapid API to label images
* How to use Mage.ai to quickly and efficiently train classification models.
## What's next for AcceCity
* Integrating more external data (open city data): public buildings, city zoning, locations of social services, etc.
* Training the machine learning models with more data collected in tandem with city officials.
## Ethical considerations
As we develop technology made to enable and equalize the playing field for all people, it is important for us to benchmark our efforts against sustainable and ethical products. Accecity was developed with several ethical considerations in mind to address a potentially murky future at the intersection of everyday life (especially within our civilian infrastructure) and digital technology.
A primary lens we used to assist in our data collection and model training efforts was ensuring that we collected data points from a spectrum of different fields. We attempted to incorporate demographic, socioeconomic, and geopolitical diversity when developing our models to detect violations of the ADA. This is key, as studies have shown that ADA violations disproportionately affect socioeconomically disadvantaged groups, especially among Black and brown minorities.
By incorporating a diverse spectrum of information into our analysis, our outputs can also better serve the city and urban planners seeking to create more equitable access to cities for persons with disabilities and improve general walkability metrics.
At its core, AcceCity is meant to help urban planners design better cities. However, given the nature of our technology, it casts a wide, automatic net over certain regions. The voice of the end population is never heard, as all of our suggestion points are generated via Google Maps. In future iterations of our product, we would focus on implementing features that allow everyday civilians affected by ADA violations and lack of walkability to suggest changes to their cities or report concerns. People would have more trust in our product if they believe and see that it is truly creating a better city and neighborhood around them.
As we develop a technology that might revolutionize how cities approach urban planning and infrastructure budgets, it is also important to consider how bad actors might abuse our platform. The first and primary red flag is someone who might abuse disability and reserved parking by actively seeking out those reserved spaces without having applied for a disability placard, excluding those who need the spaces the most. Additionally, malicious actors might use the platform to scrape data on cities and general urban accessibility features and sell that data to firms that want these kinds of metrics, which is why we firmly commit to securing our data and never selling it to third parties.
One final consideration for our product is its end goal: to help cities become more accessible for all. As we work toward this goal, even on a concern-by-concern basis, we should come back to cities and urban planners with information on the status of their improvements and more details on other places where they can create more equitable infrastructure.
|
## Inspiration
We were inspired by our shared love of dance. We knew we wanted to do a hardware hack in the healthcare and accessibility spaces, but we weren't sure of the specifics. While we were talking, we mentioned how we enjoyed dance, and the campus DDR machine was brought up. We decided to incorporate that into our hardware hack with this handheld DDR mat!
## What it does
The device is oriented so that there are LEDs and buttons that are in specified directions (i.e. left, right, top, bottom) and the user plays a song they enjoy next to the sound sensor that activates the game. The LEDs are activated randomly to the beat of the song and the user must click the button next to the lit LED.
## How we built it
The team prototyped the device for the Arduino UNO with the initial intention of using a sound sensor as the focal point and slowly building around it, adding features where need be. The team was only able to add three features to the device due to the limited time span of the event. The first feature the team attempted to add was LEDs that reacted to the sound sensor, so it would activate LEDs to the beat of a song. The second feature the team attempted to add was a joystick, however, the team soon realized that the joystick was very sensitive and it was difficult to calibrate. It was then replaced by buttons that operated much better and provided accessible feedback for the device. The last feature was an algorithm that added a factor of randomness to LEDs to maximize the "game" aspect.
## Challenges we ran into
There was definitely no shortage of errors while working on this project. Working with the hardware on hand was difficult, and the team was often unsure whether a given issue stemmed from the hardware or from an error within the code.
## Accomplishments that we're proud of
The success of the aforementioned algorithm along with the sound sensor provided a very educational experience for the team. Calibrating the sound sensor and developing the functional prototype gave the team the opportunity to utilize prior knowledge and exercise skills.
## What we learned
The team learned how to work within a fast-paced environment and experienced working with the Arduino IDE for the first time. A lot of research was dedicated to building the circuit and writing the code to make the device fully functional. Time was also wasted on the joystick because the values it output did not align with those given by the datasheet. The team learned the importance of looking at recorded values instead of blindly following the datasheet.
## What's next for Happy Fingers
The next steps for the team are to develop the device further. With extra time, the joystick could be calibrated and used as a viable component. Tuning the delay on the LEDs is another aspect, with client research to determine the optimal timing for the game. To refine the game, the team is also thinking of adding a scoring system that lets players track their progress, with the device recording how many times they clicked the LED at the correct time, as well as a buzzer to notify players when they click the incorrect button. Finally, in true arcade fashion, a display showing the high score and the player's current score could be added.
|
winning
|
## Inspiration
The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed Disasteraid.ai to provide concise, accurate, and timely updates.
## What it does
Disasteraid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies.
## How we built it
1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises.
2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG).
3. AI Processing: The compiled information was input into Anthropic AI's Claude 3.5 model.
4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations.
## Challenges we ran into
1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge.
2. Real-Time Processing: Developing a system capable of processing and summarizing information in real-time requires sophisticated algorithms and infrastructure.
3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret information presented by the platform.
## Accomplishments that we're proud of
1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations.
2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform.
3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts.
## What we learned
1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources.
2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately.
3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs.
## What's next for DisasterAid.ai
1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates.
2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses.
3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface.
4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of Disasteraid.ai.
5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
|
## Inspiration
I got this idea because of the devastation Hurricane Milton is currently causing across Florida.
The inspiration behind *Autonomous AI Society* stems from the need for faster, more efficient, and autonomous systems that can make critical decisions during disaster situations. With multiple sponsors like Fetch.ai, Groq, Deepgram, Hyperbolic, and Vapi providing powerful tools, I envisioned an intelligent system of AI agents capable of handling a disaster response chain—from analyzing distress calls to dispatching drones and contacting rescue teams. The goal was to build an AI-driven solution that can streamline emergency responses, save lives, and minimize risks.
## What it does
*Autonomous AI Society* is a fully autonomous multi-agent system that performs disaster response tasks in the following workflow:
1. **Distress Call Analysis**: The system first analyzes distress calls using Deepgram for speech-to-text and Hume AI to score distress levels. Based on the analysis, the agent identifies the most urgent calls and the city.
2. **Drone Dispatch**: The distress analyzer agent communicates with the drone agent (built using Fetch.ai) to dispatch drones to specific locations, assisting with flood and rescue operations.
3. **Human Detection**: Drones capture aerial images, which are analyzed by the human detection agent using Hyperbolic's LLaMA Vision model to detect humans in distress. The agent provides a description and coordinates.
4. **Priority-Based Action**: The drone results are displayed on a dashboard, ranked based on priority using Groq. Higher priority areas receive faster dispatches, and this is determined dynamically.
5. **Rescue Call**: The final agent, built using Vapi, places an emergency call to the rescue team. It uses instructions generated by Hyperbolic’s text model to give precise directions based on the detected individuals and their location.
## How I built it
The system consists of five agents, all built using **Fetch.ai**’s framework, allowing them to interact autonomously and make real-time decisions:
* **Request-sender agent** sends the initial requests.
* **Distress analyzer agent** uses **Hume AI** to analyze calls and **Groq** to generate dramatic messages.
* **Drone agent** dispatches drones to designated areas based on the distress score.
* **Human detection agent** uses **Hyperbolic’s LLaMA Vision** to process images and detect humans in danger.
* **Call rescue agent** sends audio instructions using **Deepgram**’s TTS and **Vapi** for automated phone calls.
## Challenges I ran into
* **Simulating drone movement on the Florida map**: The lat_lon_to_pixel function converts latitude and longitude coordinates to pixel positions on the screen. The drone starts at the center of Florida. Its movement is calculated using trigonometry: the angle to the target city is computed with math.atan2, and the drone steps toward the target using sin and cos. This allows placing cities and the drone accurately on the map (see the sketch after this list).
* **Calibrating the map to the right coordinates**: I had to manually experiment with increasing and decreasing the coordinates to fit them at the right spots on the Florida map.
* **Coordinating AI agents**: Getting agents to communicate effectively while working autonomously was a challenge.
* **Handling dynamic priorities**: Ensuring real-time analysis and updating the priority of drone dispatch based on Groq's risk assessment was tricky.
* **Integration of multiple APIs**: Each sponsor's tools had specific nuances, and integrating all of them smoothly, especially with Fetch.ai, required careful handling.
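Roughly, the coordinate mapping and movement logic from the first bullet look like this (the bounding box, window size, and speed are assumed values, not the ones used in the project):

```python
import math

# Assumed lat/lon bounding box for the on-screen Florida map and window size
LAT_MAX, LAT_MIN = 31.0, 24.5
LON_MIN, LON_MAX = -87.6, -80.0
WIDTH, HEIGHT = 800, 600

def lat_lon_to_pixel(lat, lon):
    """Map geographic coordinates to screen pixels (screen y grows downward)."""
    x = (lon - LON_MIN) / (LON_MAX - LON_MIN) * WIDTH
    y = (LAT_MAX - lat) / (LAT_MAX - LAT_MIN) * HEIGHT
    return x, y

def step_toward(drone, target, speed=4.0):
    """Advance the drone a fixed pixel distance per frame toward the target."""
    angle = math.atan2(target[1] - drone[1], target[0] - drone[0])
    return drone[0] + speed * math.cos(angle), drone[1] + speed * math.sin(angle)

drone = lat_lon_to_pixel(28.5, -82.5)        # roughly the center of Florida
target = lat_lon_to_pixel(27.95, -82.46)     # Tampa
while math.dist(drone, target) > 4.0:
    drone = step_toward(drone, target)
```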
## Accomplishments that I am proud of
* Successfully built an end-to-end autonomous system where AI agents can make intelligent decisions during a disaster, from distress call analysis to rescue actions.
* Integrated cutting-edge technologies like **Fetch.ai**, **Groq**, **Hyperbolic**, **Deepgram**, and **Vapi** in a single project to create a highly functional and real-time response system.
## What I learned
* **AI for disaster response**: Building systems that leverage multimodal AI agents can significantly improve response times and decision-making in life-critical scenarios.
* **Cross-platform integration**: I learned how to seamlessly integrate various tools, from vision AI to TTS to drone dispatch, using **Fetch.ai** and sponsor technologies.
* **Working with real-time data**: Developing an autonomous system that processes data in real-time provided insights into handling complex workflows.
## What's next for Autonomous AI Society
* **Scaling to more disasters**: Expanding the system to handle other types of natural disasters like wildfires or earthquakes.
* **Edge deployment**: Enabling drones and agents to run on the edge to reduce response times further.
* **Improved human detection**: Enhancing human detection with more precise models to handle low-light or difficult visual conditions.
* **Expanded rescue communication**: Integrating real-time communication with the victims themselves using Deepgram’s speech technology.
|
## Inspiration
Our inspiration for the disaster management project came from living in the Bay Area, where earthquakes and wildfires are constant threats. Last semester, we experienced a 5.1 magnitude earthquake during class, which left us feeling vulnerable and unprepared. This incident made us realize the lack of a comprehensive disaster management plan for our school and community. We decided to take action and develop a project on disaster management to better prepare ourselves for future disasters.
## What it does
Our application serves as a valuable tool to help manage chaos during disasters such as earthquakes and fires. With features such as searching for family members, location sharing, an AI chatbot for first aid, and the ability to donate to affected individuals and communities, our app can be a lifeline for those affected by a crisis.
## How we built it
Our disaster management application was built with Flutter for the Android UI, Dialogflow for the AI chat assistant, and Firebase for the database. The image face similarity API was implemented using OpenCV in Django REST.
## Challenges we ran into
As first-time hackathon participants, our main challenge was learning and implementing a range of new technologies within a 36-hour time frame.
## Accomplishments that we're proud of
* Our disaster management application has a valuable feature that allows users to search for their family members during a crisis. By using an image similarity algorithm API (OpenCV), users can enter the name of a family member and get information about their recent location. This helps to ensure the safety of loved ones during a disaster, and can help identify people who are injured or unconscious in hospitals. The image is uploaded to Firebase, and the algorithm searches the entire database for a match. We're proud of this feature, and will continue to refine it and add new technologies to the application.
## What we learned
We were not able to implement the live location sharing feature due to time constraints, but we hope to add it in the future as we believe it could be valuable in emergency situations.
## What's next for -
We plan to improve our AI chatbot, implement an adaptive UI for responders, and add text alerts to the application in the future.
|
partial
|
## Inspiration
As some airplanes adopt self-driving systems, some will remain manually controlled. Since all aircraft cannot adopt synchronized self-driving systems simultaneously, we need software to help us transition into this new technology to prevent accidents during taxi. Aircraft Marshalls direct aircraft with hand motions, so we used computer vision to translate these signals into maneuvering instructions.
## What it does
AeroVision lets you control a VIAM rover with just simple hand gestures. It can move forward, move backward, turn right, turn left, and stop. Just like Aircraft Marshalls, it uses standard marshalling signals and converts them to robot output/movement.
## How we built it
We used VIAM's app, API, and Python SDK to make a VIAM rover respond to hand signals. Then, we used an OpenCV model to track our hands on the webcam by each frame. Using the hand tracking model, we can make automated decisions for the VIAM rover to move using its wheels.
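A stripped-down sketch of that control loop might look like the following; the gesture classifier and rover command function are placeholders standing in for the OpenCV hand-tracking model and the VIAM Python SDK calls:

```python
import cv2

def classify_marshalling_signal(frame):
    """Placeholder gesture classifier -- the real project used an OpenCV
    hand-tracking model here to map marshalling signals to commands."""
    return "stop"

def send_rover_command(command):
    """Placeholder for the VIAM Python SDK call that drives the rover's wheels."""
    print(f"rover -> {command}")

COMMANDS = {"move_forward", "move_backward", "turn_left", "turn_right", "stop"}

cap = cv2.VideoCapture(0)               # webcam feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    signal = classify_marshalling_signal(frame)
    if signal in COMMANDS:
        send_rover_command(signal)      # translate the hand signal into rover motion
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```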
## Challenges we ran into
We ran into many challenges during this hackathon, including one with the VIAM rover itself. Since we were new to their system and network for running their machine, we had to adapt to their API and hardware. However, the VIAM team was able to help us every step of the way.
## Accomplishments that we're proud of
Being able to integrate OpenCV with the VIAM rover
## What we learned
How to use the VIAM rover and how to implement its API using the Python SDK.
## What's next for AeroVision
Better and more accurate tracking, more features with the OpenCV model. We are thinking about implementing this for Aerovision Pro Max Deluxe.
|
## Inspiration
An easy way to get paid when you buy groceries for your roommates. No need to download extra apps, and can be done anywhere.
## What it does
A messenger bot that takes an image of a receipt as input, and prompts everyone in your group to decide on if they want to split the bill with you.
Transfer of money is done through [Capital One Nessie API](http://api.reimaginebanking.com/).
|
## Inspiration
Every few days, a new video of a belligerent customer refusing to wear a mask goes viral across the internet. On neighborhood platforms such as NextDoor and local Facebook groups, neighbors often recount their sightings of the mask-less minority. When visiting stores today, we must always remain vigilant if we wish to avoid finding ourselves embroiled in a firsthand encounter. With the mask-less on the loose, it’s no wonder that the rest of us have chosen to minimize our time spent outside the sanctuary of our own homes.
For anti-maskers, words on a sign are merely suggestions—for they are special and deserve special treatment. But what can’t even the most special of special folks blow past? Locks.
Locks are cold and indiscriminate, providing access to only those who pass a test. Normally, this test is a password or a key, but what if instead we tested for respect for the rule of law and order? Maskif.ai does this by requiring masks as the token for entry.
## What it does
Maskif.ai allows users to transform old phones into intelligent security cameras. Our app continuously monitors approaching patrons and uses computer vision to detect whether they are wearing masks. When a mask-less person approaches, our system automatically triggers a compatible smart lock.
This system requires no human intervention to function, saving employees and business owners the tedious and at times hopeless task of arguing with an anti-masker.
Maskif.ai provides reassurance to staff and customers alike with the promise that everyone let inside is willing to abide by safety rules. In doing so, we hope to rebuild community trust and encourage consumer activity among those respectful of the rules.
## How we built it
We use Swift to write this iOS application, leveraging AVFoundation to provide recording functionality and Socket.io to deliver data to our backend. Our backend was built using Flask and leveraged Keras to train a mask classifier.
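A minimal sketch of what the classification endpoint could look like is below; it assumes a saved Keras model with a single sigmoid output and uses a plain HTTP upload for simplicity instead of the project's Socket.io stream:

```python
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model
from PIL import Image

app = Flask(__name__)
model = load_model("mask_classifier.h5")   # hypothetical path to the trained Keras model

@app.route("/check_mask", methods=["POST"])
def check_mask():
    # Frames arrive from the phone camera; here we accept a single image upload.
    img = Image.open(request.files["frame"].stream).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)
    mask_prob = float(model.predict(batch)[0][0])    # assumes one sigmoid output
    unlock = mask_prob > 0.5                          # trigger the smart lock only for masked patrons
    return jsonify({"mask_probability": mask_prob, "unlock": unlock})

if __name__ == "__main__":
    app.run()
```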
## What's next for Maskif.ai
While members of the public are typically discouraged from calling the police about mask-wearing, businesses are typically able to take action against someone causing a disturbance. As an additional deterrent to these people, Maskif.ai can be improved by providing the ability for staff to call the police.
|
partial
|
## Inspiration
When you're visiting an unfamiliar place, it's currently very clunky to acclimate yourself to your new surroundings. Although you have information in the palm of your hands, you don't necessarily want to keep pulling your phone out of your pocket and manually searching for restaurants, gas stations, and other points of interest. We see Hololens as a stepping stone towards more natural location services, by allowing the user to access information in the blink of an eye, rather than in the palm of your hands.
## What it does
HoloScene queries your location from your mobile device and populates your surrounding field-of-view with points of interest. This allows you to view nearby restaurants, landmarks, and more right in your own Hololens device. At each point of interest (POI) waypoint, you can also tap to get more information, such as the name of the landmark, a brief description, a related image, and Yelp review if applicable. You can also move around in the augmented reality world, allowing you to approach and interact with waypoints in 3D space, naturally, as if they were real objects.
## How I built it
We used Unity and Visual C# to build the interface and infrastructure for the Hololens, Android Studio and Bluetooth Low Energy beacon technology for the location data transfer to the Hololens, and Bing Maps Spatial Data Services to query for nearby landmarks and POIs. The objects in Unity are dynamically generated and scripted based on queried JSON GIS data.
## Challenges I ran into
Unity and the Bluetooth interface had steep learning curves; Although we experienced many exciting moments of success when developing with Unity, ultimately our inexperience slowed our development process.
## Accomplishments that I'm proud of
We're very proud of the fact that we got dynamic objects to render and interact with in Unity, considering we had started learning development the same day, as well as finishing the Bluetooth Low Energy beacon to transfer otherwise non-existent location data from our Android device to the Hololens. Initial research indicated that it was nigh impossible, but we found out how to do it in the end! :)
## What I learned
We all learned a lot about Unity development and Bluetooth technologies. The scripting interface for Unity was a definite acquired skill from this project!
## What's next for HoloScene
Originally, we intended to access routing and directions data and augment that to the user, but we decided it was too much to learn in one day. Hopefully in the future we can bring this feature to light as well!
|
## Inspiration
We wanted to create a creative HoloLens experience that truly transformed your space and motivated the user to interact in fun, innovative (and silly!) ways. Re-imagining simple classics seemed like a good place to start, and our redesign of Snake turned out to be more engaging than it had any right to be (:
## What it does
Upon starting the game, the user is prompted to scan their space. Using the Hololens's Spatial Mapping sensors and some scripts that we wrote, we were able to get a full understanding of the user's space and automatically create a custom play area specific to your surroundings by analyzing the normals of the spatial mesh with raycasts and calculating which areas of the room are empty.
After scanning and generating the playspace, the user can play the game. Users must use their head to collect CyberCubes™ while at the same time avoiding the ever-growing CyberTail™ that follows them. Other special pickups are also available, like the CyberMotivationalVortex™, which attracts all of the surrounding cubes into a single point in space if you say a motivational quote (and explodes, transforming the colors of the space completely), and the CyberGravityPull™, which can help you get out of sticky situations by dropping all of the CyberTail™ spheres on the ground for a few seconds.
The game also has a number of easter eggs and voice commands that can be used to enhance your CyberExperience™. Try saying "Samuel Jackson", for instance. Bonus points for whoever discovers the others.
## How I built it
Unity, Hololens, C#, coding, caffeine, sheer will.
## Challenges I ran into
Discovering meaningful interactions for the HoloLens is always a challenge given its limited input. Because the documentation on HoloLens development (especially with things like SpatialMapping) is so limited, we also had to develop a lot of our own technology to get the desired final result.
## Accomplishments that I'm proud of
It looks polished and it's very fun to play - we also got to design a lot of our own sound effects, assets, easter eggs and interactions. The gameplay loop is simple but has depth.
## What I learned
Mixed Reality is TheFuture™, a bunch of Unity and HoloLens development tricks, sound design discoveries and also what are the things that make a HoloLens interaction fun
## What's next for Cyber Snake
More polish, create a story-mode and post it in the Microsoft Store for others to enjoy.
|
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized brail menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech API's to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world which the user can then query about.
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable with. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with a data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine.
|
losing
|
Duet's music generation revolutionizes how we approach music therapy. We capture real-time brainwave data using Emotiv EEG technology, translating it into dynamic, personalized soundscapes live. Our platform, backed by machine learning, classifies emotional states and generates adaptive music that evolves with your mind. We are all intrinsically creative, but some—whether language or developmental barriers—struggle to convey it. We’re not just creating music; we’re using the intersection of art, neuroscience, and technology to let your inner mind shine.
## About the project
**Inspiration**
Duet revolutionizes the way children with developmental disabilities—approximately 1 in 10 in the United States—express their creativity through music by harnessing EEG technology to translate brainwaves into personalized musical experiences.
Daniel and Justin have extensive experience teaching music to children, but working with those who have developmental disabilities presents unique challenges:
1. Identifying and adapting resources for non-verbal and special needs students.
2. Integrating music therapy principles into lessons to foster creativity.
3. Encouraging improvisation to facilitate emotional expression.
4. Navigating the complexities of individual accessibility needs.
Unfortunately, many children are left without the tools they need to communicate and express themselves creatively. That's where Duet comes in. By utilizing EEG technology, we aim to transform the way these children interact with music, giving them a voice and a means to share their feelings.
At Duet, we are committed to making music an inclusive experience for all, ensuring that every child—and anyone who struggles to express themselves—has the opportunity to convey their true creative self!
**What it does:**
1. Wear an EEG
2. Experience your brain waves as music! Focus and relaxation levels will change how fast/exciting vs. slow/relaxing the music is.
**How we built it:**
We started off by experimenting with Emotiv’s EEGs — devices that feed a stream of brain wave activity in real time! After trying it out on ourselves, the CalHacks stuffed bear, and the Ariana Grande cutout in the movie theater, we dove into coding. We built the backend in Python, leveraging the Cortex library that allowed us to communicate with the EEGs. For our database, we decided on SingleStore for its low latency, real-time applications, since our goal was to ultimately be able to process and display the brain wave information live on our frontend.
Traditional live music is done procedurally, with rules manually fixed by the developer to decide what to generate. On the other hand, existing AI music generators often generate sounds through diffusion-like models and pre-set prompts. However, we wanted to take a completely new approach — what if we could have an AI be a live “composer”, where it decided based on the previous few seconds of live emotional data, a list of available instruments it can select to “play”, and what it previously generated to compose the next few seconds of music? This way, we could have live AI music generation (which, to our knowledge, does not exist yet). Powered by Google’s Gemini LLM, we crafted a prompt that would do just that — and it turned out to be not too shabby!
To play our AI-generated scores live, we used Sonic Pi, a Ruby-based library that specializes in live music generation (think DJing in code). We fed this and our brain wave data to a frontend built in Next.js to display the brain waves from the EEG and sound spectrum from our audio that highlight the correlation between them.
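Here is a rough sketch of that "live composer" loop in Python; the Gemini model name, timing, and EEG helper are assumptions, and the Sonic Pi hand-off is a placeholder rather than the project's actual transport:

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                # hypothetical key
composer = genai.GenerativeModel("gemini-1.5-flash")   # model name is an assumption

def read_recent_eeg_metrics():
    """Placeholder: the real project pulled focus/relaxation scores from the Emotiv Cortex stream."""
    return 0.6, 0.4

def send_to_sonic_pi(score):
    """Placeholder: the real project forwarded the generated score to Sonic Pi for playback."""
    print(score)

previous_bar = "rest"
while True:
    focus, relaxation = read_recent_eeg_metrics()
    prompt = (
        "You are a live composer. Available instruments: piano, strings, soft percussion.\n"
        f"Listener focus: {focus:.2f}, relaxation: {relaxation:.2f} (scale 0-1).\n"
        f"Previously played: {previous_bar}\n"
        "Return the next 4 seconds of music as a list of (note, duration) pairs; "
        "make it faster and brighter for high focus, slower and calmer for high relaxation."
    )
    previous_bar = composer.generate_content(prompt).text
    send_to_sonic_pi(previous_bar)
    time.sleep(4)                                      # compose the next few seconds just in time
```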
**Challenges:**
Our biggest challenge was coming up with a way to generate live music with AI. We originally thought it was impossible and that the tech wasn’t “there” yet — we couldn’t find anything online about it, and even spent hours thinking about how to pivot to another idea that we could use our EEGs with.
However, we eventually pushed through and came up with a completely new method of doing live AI music generation that, to our knowledge, doesn’t exist anywhere else! It was most of our first times working with this type of hardware, and we ran into many issues with getting it to connect properly to our computers — but in the end, we got everything to run smoothly, so it was a huge feat for us to make it all work!
**What’s next for Duet?**
Music therapy is on the rise – and Duet aims to harness this momentum by integrating EEG technology to facilitate emotional expression through music. With a projected growth rate of 15.59% in the music therapy sector, our mission is to empower kids and individuals through personalized musical experiences. We plan to implement our programs in schools across the states, providing students with a unique platform to express their emotions creatively. By partnering with EEG companies, we’ll ensure access to the latest technology, enhancing the therapeutic impact of our programs. Duet gives everyone a voice to express emotions and ideas that transcend words, and we are committed to making this future a reality!
**Built with:**
* Emotiv EEG headset
* SingleStore real-time database
* Python
* Google Gemini
* Sonic Pi (Ruby library)
* Next.js
|
## Inspiration
We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD.
## What it does
The AR iOS app we created presents educational, gamified features to make the hospital experience more bearable for elementary-aged children. These features include:
* An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up.
* If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count.
* Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered.
* Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text.
* Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems.
## How we built it
We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**.
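As a simplified sketch of the emotional check-in flow, the snippet below shows a Flask endpoint that scores the child's message and texts the parent; the sentiment helper stands in for the IBM Watson call, and the credentials and phone numbers are placeholders:

```python
from flask import Flask, request, jsonify
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")          # hypothetical credentials

def strongest_emotion(text):
    """Placeholder for the IBM Watson call that scores the child's check-in text."""
    return "joy"

@app.route("/checkin", methods=["POST"])
def emotional_checkin():
    data = request.get_json()
    emotion = strongest_emotion(data["message"])
    # Forward the child's own words plus the detected emotion to the parent.
    twilio.messages.create(
        to=data["parent_phone"],
        from_="+15551234567",                          # hypothetical Twilio number
        body=f'{data["child_name"]} checked in: "{data["message"]}" (feeling mostly {emotion})',
    )
    return jsonify({"emotion": emotion})
```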
## Accomplishments that we're proud of
We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component.
## What's next for HealthHunt AR
In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content.
|
## Inspiration
Autism is the fastest growing developmental disorder worldwide – preventing 3 million individuals worldwide from reaching their full potential and making the most of their lives. Children with autism often lack crucial communication and social skills, such as recognizing emotions and facial expressions in order to empathize with those around them.
The current gold-standard for emotion recognition therapy is applied behavioral analysis (ABA), which uses positive reinforcement techniques such as cartoon flashcards to teach children to recognize different emotions. However, ABA therapy is often a boring process for autistic children, and the cartoonish nature of the flashcards doesn't fully capture the complexity of human emotion communicated through real facial expressions, tone of voice, and body language.
## What it does
Our solution is KidsEmote – a fun, interactive mobile app that leverages augmented reality and deep learning to help autistic children understand emotions from facial expressions. Children hold up the phone to another person's face – whether it's their parents, siblings, or therapists – and cutting-edge deep learning algorithms identify the face's emotion as one of joy, sorrow, happiness, or surprise. Then, four friendly augmented reality emojis pop up as choices for the child to choose from. Selecting the emoji correctly matching the real-world face creates a shower of stars and apples in AR, and a score counter helps gamify the process to encourage children to keep on playing to get better at recognizing emotions.
The interactive nature of KidsEmote helps makes therapy seem like nothing more than play, increasing the rate at which they improve their social abilities. Furthermore, compared to cartoon faces, the real facial expressions that children with autism recognize in KidsEmote are exactly the same as the expressions they'll face in real life – giving them greater security and confidence to engage with others in social contexts.
## How we built it
KidsEmote is built on top of iOS in Swift, and all augmented reality objects were generated through ARKit, which provided easy to use physics and object manipulation capabilities. The deep learning emotion classification on the backend was conducted through the Google Cloud Vision API, and 3D models were generated through Blender and also downloaded from Sketchfab and Turbosquid.
## Challenges we ran into
Since it was our first time working with ARKit and mobile development, learning the ins and outs of Swift as well as creating augmented reality objects was a truly eye-opening experience. Also, since the backend calls to the Vision API were asynchronous, we had to carefully plan and track the flow of inputs (i.e. taps) and outputs for our app. Finding suitable 3D models for our app also required much work – most online models that we found were quite costly, and as a result we ultimately generated our own 3D face expression emoji models with Blender.
## Accomplishments that we're proud of
Building a fully functional app, working with Swift and ARKit for the first time, successfully integrating the Vision API into our mobile backend, and using Blender for the first time!
## What we learned
ARKit, Swift, physics for augmented reality, and using 3D modeling software. We also learned how to tailor the user experience of our software specifically to our audience to make it as usable and intuitive as possible. For instance, we focused on minimizing the amount of text and making sure all taps would function as expected inside our app.
## What's next for KidsEmote
KidsEmote represents a complete digital paradigm shift in the way autistic children are treated. While much progress has been made in the past 36 hours, KidsEmote opens up so many more ways to equip children with autism with the necessary interpersonal skills to thrive in social situations. For instance, KidsEmote can be easily extended to help autistic children distinguish between different emotions from the tone of one's voice, and understand another's mood based on their body gesture. Integration between all these various modalities only yields more avenues for exploration further down the line. In the future, we also plan on incorporating video streaming abilities into KidsEmote to enable autistic children from all over the world to play with each other and meet new friends. This would greatly facilitate social interaction on an unprecedented scale between children with autism since they might not have the opportunity to do so otherwise in traditional social contexts. Lastly, therapists can also instruct parents to use KidsEmote as an at-home tool to track the progress of their children – helping parents become part of the process and truly understand how their kids are improving first-hand.
|
winning
|
## 💡 Inspiration
The objective of our application is to devise an effective and efficient written transmission optimization scheme, by converting esoteric text into an exoteric format.
If you read the above sentence more than once and the word ‘huh?’ came to mind, then you got my point. Jargon causes a problem when you are talking to someone who doesn't understand it. Yet, we face obscure, vague texts every day - from ['text speak'](https://www.goodnewsnetwork.org/dad-admits-hilarious-texting-blunder-on-the-moth/) to T&C agreements.
The most notoriously difficult to understand texts are legal documents, such as contracts or deeds. However, making legal language more straightforward would help people understand their rights better, be less susceptible to being punished or not being able to benefit from their entitled rights.
Introducing simpl.ai - A website application that uses NLP and Artificial Intelligence to recognize difficult to understand text and rephrase them with easy-to-understand language!
## 🔍 What it does
simpl.ai intelligently simplifies difficult text for faster comprehension. Users can send a PDF file of the document they are struggling to understand. They can select the exact sentences that are hard to read, and our NLP-model recognizes what elements make it tough. You'll love simpl.ai's clear, straightforward restatements - they change to match the original word or phrase's part of speech/verb tense/form, so they make sense!
## ⚙️ Our Tech Stack
[](https://postimg.cc/gr2ZqkpW)
**Frontend:** We created the client side of our web app using React.js and JSX based on a high-fidelity prototype we created using Figma. Our components are styled using MaterialUI Library, and Intelllex's react-pdf package for rendering PDF documents within the app.
**Backend:** Python! The magic behind the scenes is powered by a combination of fastAPI, TensorFlow (TF), Torch and Cohere. Although we are newbies to the world of AI (NLP), we used a BART model and TF to create a working model that detects difficult-to-understand text! We used the following [dataset](https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/complex-word-identification-dataset/cwishareddataset.zip) from Stanford University to train our [model](http://nlp.stanford.edu/data/glove.6B.zip); it's based on several interviews conducted with non-native English speakers, where they were tasked to identify difficult words and simpler synonyms for them. Finally, we used Cohere to rephrase the sentence and ensure it makes sense!
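As a rough sketch of that pipeline, the snippet below flags difficult words with a placeholder classifier (standing in for the trained BART/TF model) and asks Cohere for a plain-English restatement; the API key and generation parameters are assumptions:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")                 # hypothetical key

def is_difficult(word):
    """Placeholder for the complex-word classifier trained on the CWI dataset."""
    return len(word) > 9

def simplify(sentence):
    hard_words = [w for w in sentence.split() if is_difficult(w)]
    prompt = (
        "Rewrite the sentence below in plain, easy-to-understand English, "
        f"replacing difficult words such as {', '.join(hard_words) or 'none'} "
        "while keeping the original meaning, tense, and part of speech:\n"
        f"{sentence}"
    )
    response = co.generate(prompt=prompt, max_tokens=80)   # assumed Cohere generate call
    return response.generations[0].text.strip()

print(simplify("The objective of our application is to devise an effective transmission optimization scheme."))
```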
## 🚧 Challenges we ran into
This hackathon was filled with many challenges - but here are some of the most notable ones:
* We purposely chose an AI area we didn't know much about (NLP, TensorFlow, CohereAPI), which was a challenging and humbling experience. We faced several compatibility issues with TensorFlow when trying to deploy the server. We decided to go with AWS Platform after a couple of hours of trying to figure out Kubernetes 😅
* Finding a dataset that suited our needs! If there were no time constraints, we would have loved to develop a dataset that is more focused on addressing tacky legal and technical language. Since that was not the case, we made do with a database that enabled us to produce a proof-of-concept.
## ✔️ Accomplishments that we're proud of
* Creating a fully-functioning app with bi-directional communication between the AI server and the client.
* Working with NLP, despite having no prior experience or knowledge. The learning curve was immense!
* Able to come together as a team and move forward, despite all the challenges we faced together!
## 📚 What we learned
We learned so much in terms of the technical areas; using machine learning and having to pivot from one software to the other, state management and PDF rendering in React.
## 🔭 What's next for simpl.ai!
**1. Support Multilingual Documents.** The ability to translate documents and provide a simplified version in their desired language. We would use [IBM Watson's Language Translator API](https://cloud.ibm.com/apidocs/language-translator?code=node)
**2. URL Parameter** Currently, we are able to simplify text from a PDF, but we would like to be able to do the same for websites.
* Simplify legal jargon in T&C agreements to better understand what permissions and rights they are giving an application!
* We hope to extend this service as a Chrome Extension for easier access to the users.
**3. Relevant Datasets** We would like to expand our current model's capabilities to better understand legal jargon, technical documentation etc. by feeding it keywords in these areas.
|
## Inspiration
Elementary school kids are very savvy with searching via Google, and while the content returned is sometimes relevant, it may not be at a suitable reading level when the first search result talks about something like phytochemicals or pharmacology. Is there a way to assess whether links in a search result are at the level users desire to read?
That's why we created Readabl.
Readability is about the reader, and different personas will have their own perspective on how readability metrics can help them. Our vision is to enable users to find content suitable for their needs and help make content accessible to everyone.
## What it does
Readabl offers search results along with readability metrics so that users can at a glance see what search results are suitable for them to read.
## How we built it
The entire application is hosted in a monorepo consisting of a Javascript frontend framework - Svelte with a FastAPI backend endpoint. The frontend is hosted on Netlify while the backend is hosted using GCP's Cloud Run. The search and processing that takes place in the backend is built using both Google Cloud Custom Search JSON API and the py-readability-metrics library.
### Backend
Hosted on GCP's Cloud Run using Docker, we use FastAPI to receive the user's search term from the frontend and return the ranked results to the user. The FastAPI service talks to the Google Search API, retrieving results and passing them along. Before passing results to the frontend, we parse each page with a Python library - BeautifulSoup - to extract the text to be ranked for readability. We also explored concurrent programming in Python on the backend so that we can parse multiple webpages in parallel to speed up processing.
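A condensed sketch of that parallel parse-and-score step is shown below; the worker count, timeout, and error handling are illustrative and may differ from the deployed service:

```python
from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup
from readability import Readability   # py-readability-metrics

def score_page(url):
    """Fetch one search result, strip it to text, and compute a readability grade."""
    html = requests.get(url, timeout=5).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    try:
        grade = Readability(text).flesch_kincaid().score   # needs ~100+ words of text
    except Exception:
        grade = None
    return {"url": url, "flesch_kincaid": grade}

def score_results(urls):
    # Parse the pages in parallel so one slow site doesn't stall the whole search.
    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(score_page, urls))
```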
backend -> <https://api.readabl.tech/>
### Frontend
The frontend uses the Svelte framework as the main driver due to its fast run time and minimalistic structure with little boilerplate code. We explored using a UI framework to speed up the development workflow, but a lot of the existing UI frameworks didn't suit the project due to limited functionality and poor documentation.
frontend -> <https://readto.beabetterhuman.tech/>
## Challenges we ran into
We explored multiple new technologies during this hackathon. Since we are all new to the technology we used, we faced a lot of steep learning curve and issue revolving around navigating GCP:
* back-end processing takes a lot longer and times out the search when there is too much content to parse (e.g. philosophical questions). We are also limited by the Google API to requesting only 10 links per search, hence we needed to do this recursively, which added to the processing time
* couldn't redeem MLH GCP credits
* lack of knowledge of Svelte.js framework
* lack of UI libraries to speed up development time
* GCP's Cloud Run deployment blocked due to Python requirements versioning
* deployment on Netlify and setting up custom domains
* constantly having Git merge conflicts
## Accomplishments that we're proud of
We made a working search engine! We learned a ton about development with GCP and deployment using cloud technologies !
Each of us was able to challenge ourselves by working with new tools and APIs. Moreover, we have been very supportive and helpful to each other by assisting them to the best of our knowledge. In the end, the team has made a functional product with most of the features we have envisioned from the start, and we bring home new knowledge, as well as new tools to explore later on. We knew we took on an ambitious project and we are really proud of what we were able to achieve in this hackathon.
## What we learned
We have integrated and tried many APIs from various providers, which was a valuable learning experience. Solving conflicts helped us understand more thoroughly how things work behind the scenes. In addition, as a team with different skill sets working from different time zones, we learned how to communicate and work together effectively. We also learned how to help each other, since each teammate had varying experience with certain tech stacks and applications. It was everyone's first experience working with Svelte and GCP services, so getting all the additional APIs working while reducing the processing time on top of that was rather challenging.
We also learned a lot about accessibility and about leveraging cloud technology.
## What's next for Readabl
We plan to improve the search and ranking algorithm to boost performance. We also hope to build a community that contributes back and makes the world a bit easier to navigate, at least readability-wise. We are also searching for new datasets that include more information, such as scrolling speed and color vision deficiency data on webpages, to implement a more inclusive search function.
# How to Contact Us
* {ben}#5927 - Benedict Neo
* ceruleanox#7402 - Anita Yip
* Pravallika#2768 - Pravallika Myneni
* weichun#3945 - Wei Chun
|
## **Inspiration:**
Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.
## **What it does:**
Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.
## **How we built it**:
We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we've been able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders.
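A minimal sketch of such an endpoint is below; it uses the older `openai` chat-completion client interface, and the speech-synthesis helper is a placeholder for whichever TTS service produces the audio reply:

```python
import base64

import openai
from flask import Flask, request, jsonify

app = Flask(__name__)
openai.api_key = "YOUR_API_KEY"                    # hypothetical key

def synthesize_speech(text):
    """Placeholder for the text-to-speech step that produces the audio reply."""
    return text.encode("utf-8")

@app.route("/explain", methods=["POST"])
def explain():
    data = request.get_json()
    # Combine what the learner said with what is on their screen (e.g. Mathpix extraction).
    messages = [
        {"role": "system", "content": "You are a patient tutor. Use the screen content as context."},
        {"role": "user", "content": f"Screen content:\n{data['screen_text']}\n\nStudent said:\n{data['transcript']}"},
    ]
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)["choices"][0]["message"]["content"]
    audio_b64 = base64.b64encode(synthesize_speech(reply)).decode("ascii")
    # Base64 keeps the audio inline in the JSON payload so the frontend can play it without re-rendering.
    return jsonify({"text": reply, "audio_base64": audio_b64})
```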
## **Challenges we ran into**:
Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.
## **Accomplishments that we're proud of**:
Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One aspect of our implementation that separates us from other approaches is that we allow the user to perform ICL (In-Context Learning), something few large language models let users do seamlessly.
## **What we learned**:
We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
|
winning
|
## Inspiration
With the current global pandemic, gyms across Canada have been closed to the public's use, and as such has opened a need in the market for people to be able to properly workout and exercise. Since the upfront cost for a home gym is very high at the moment, not everyone has the ability to purchase adequate equipment, and so that is where QuickFit "fits" in, as it allows home gym owners to recoup some of the cost of purchasing their own equipment by renting time slots for use by individuals, and allows individuals to workout in an equipped gym for a small renting fee.
## What it does
QuickFit allows home gym owners to rent out time slots for people to workout in their gym, and allows users to search around their area for a home gym they want to workout in and book times. Similar to Airbnb, QuickFit handles the booking of time slots between users and owners, with an easy to use dashboard for each.
## How We built it
We built QuickFit with React, Redux, and multi-page routing, and pulled in the Google Maps API for searching the user's local area for home gyms to rent out. We also used styled-components, @material-ui, and Formik to help with the UI components of the project.
## Challenges We ran into
This project went a lot smoother than we were anticipating. Two of our team members had never touched React before, but we've all been coding for a while, so learning on the fly came naturally. This project didn't have many "brick wall" challenges that stopped us in our tracks, but we consistently ran into small issues like getting design elements to work with each other or getting the google maps API to work the way we expected. The biggest challenge we ran into was a lack of time. We had a lot of ideas that we wanted to get into the project, and we had to keep cutting features to make sure we would have enough time to create the core functionality of QuickFit.
## Accomplishments that We're proud of
We're most proud of creating a finished product, which sounds super basic, but hear us out. Only one of us had experience with React, and even though the event is Friday to Sunday, we realistically only had about 15 hours to complete the project (life tends to get in the way when it's work from home). As such, we couldn't spend a ton of time trying to learn the basics of the tech stack we were using, and it was more of a hit the ground running sort of weekend. We're very happy with the fact that we were able to build an entire website, create functionality, and demo it all in such a short time, as it really goes to show how far we've come as Hackers and as software developers.
## What We learned
We learned a TON this weekend, but I've shortened it to a concise list for your viewing pleasure:
* Taking even an hour to whiteboard your idea and create a flow chart/UML diagram will pay dividends in time later when you're in the thick of things and stuff starts to get confusing. It's like having a map when you visit a new city.
* When learning something new, it's always good to have someone who's experienced to be a second pair of eyes since they will be a lot easier to bounce ideas off of than that 2 year old StackOverflow post.
* When none of your team members are designers, a minimalistic design is your best friend. Just make sure everything is consistent across the board, and you shouldn't run into any issues.
* It's great to be ambitious, and we really wanted to test our limits with this project, but with testing those limits actually comes the "limits" part. We learned that cutting features isn't the end of the world, especially if those features would have meant that we wouldn't have finished. Just gives us something to work on next weekend!
* Lastly, the tech. Everything in the "How We built it" section was learnt on the fly, and while we love our new found knowledge, it was like learning how to run before learning how to walk. While it worked this time, we know that we should (and will) put in the time to go over the basics to really strengthen our foundation of the technology, so that next time we can push ourselves even further, and create another awesome project!
## What's next for QuickFit
QuickFit needs a few tweaks before we'd consider it production ready. At the moment, we only have booking functionality, but no transaction is going through. As such, we'd like to implement payment handling and properly flesh out the security of the app. As well, we attempted to host the app with our own domain using domain.com, but we ran into some early difficulties and decided to shelve the idea to allocate more time to creating the product. Finding a suitable hosting service and linking to our custom domain would be a next step. Lastly, cleaning up our code to make sure it uses best practices would definitely help with further development, and making the UI consistent (and prettier) would be a nice touch.
## Domain Name Registered with Domain.com: QuickFit.tech
## Discord Team #: 8
## Discord Team Names:
* Mikey#7441
* TheChudd#1564
* Neubist#7959
|
## Inspiration
All three of us come from extensive athletic backgrounds, where regular workouts are an **integral** part of our daily life. After talking to many people on campus and doing our research on beginner workout plans, we realized that *simply starting a regimen* is the biggest hurdle when beginning to train. To combat this, we created an applet in the new You.com developer environment that creates a custom workout for the user to take to the gym, eliminating the heavy lifting before the heavy lifting.
## What it does
The applet takes in information about the muscle group you want to work, calls the exercise database to get a curated list of exercises, then enters a GPT prompt requesting a workout plan based on these specifications, which is then returned to the user.
## How we built it
Since You.com can only call from one API, **we had to build our own API from scratch**, which makes requests to both the exercise database API and OpenAI's GPT-3 API: it takes the muscle group the user wants to train, runs a prompt request to OpenAI based on the exercises returned, and generates a usable, ready-to-go workout routine with instructions for all related exercises for the user to refer to in the workout.
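A rough sketch of that request chain is shown below; the exercise database URL is a placeholder and the call uses OpenAI's legacy completion endpoint, so treat it as an illustration rather than the deployed Firebase code:

```python
import openai
import requests

openai.api_key = "YOUR_OPENAI_KEY"                           # hypothetical key
EXERCISE_API = "https://example-exercise-db.com/exercises"   # placeholder exercise database endpoint

def build_workout(muscle_group):
    # 1. Pull a curated exercise list for the requested muscle group.
    exercises = requests.get(EXERCISE_API, params={"target": muscle_group}, timeout=5).json()
    names = ", ".join(e["name"] for e in exercises[:8])
    # 2. Ask GPT-3 to turn that list into a structured, beginner-friendly routine.
    prompt = (
        f"Create a beginner gym workout for the {muscle_group} using only these exercises: {names}. "
        "Give sets, reps, rest times, and one-line form instructions for each exercise."
    )
    completion = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=400)
    return completion["choices"][0]["text"].strip()
```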
## Challenges we ran into
There was a fairly large struggle in setting up the API initially--namely, just getting past Firebase's setup and hosting requirements. Once we were able to move beyond that, it was relatively simple to code in the You.com software and pull from our newly-created API. Additionally, it was a learning process fitting into the constraints of the APIs we used. For example, we could only form the query for GPT-3 based on the data that the exercise API gave us, so we had to think critically about what our limitations were at each step. Towards the end, we spent **a lot of time on prompt engineering**, trying to figure out the best query to get a usable workout for the customer.
## Accomplishments that we're proud of
None of us have ever attended a hackathon before, so this was an environment that took some time to get accustomed to. We had a lot of ideas initially, overwhelmed by all of the avenues to explore and opportunities to dive into, so the fact that we latched onto one and spent a meaningful amount of time creating was something to take home for us. Additionally, oftentimes, it is easy to pounce on an idea which is maybe the quickest way to start but not the solution the group is most passionate about. The fact that we were able to find a problem near and dear to us as *athletes* was something we were really proud of because it made the entire process more enjoyable.
## What we learned
First and foremost, **asking for help is key.** It is rare to go into a hackathon and not use any resources that you have. As beginners, we were forced to go out of our comfort zone and ask mentors for help more than we were accustomed to. Additionally, making an entire project from scratch in 36 hours or less is naturally going to come with ups and downs, so learning to embrace the struggle that comes with the process was super important. Lastly, humbling ourselves was a big factor. Because there are many smart minds present, most of whom have more hackathon experience than us, we were forced to return and remain in the student mindset and soak up as much information as we possibly could.
## What's next for WorkoutAI
As the ChatGPT API gets rolled out for public usage, utilizing that to further improve our curated workout plans is a near and clear next step. In addition to this, given more time, we would look to increase the user parameters in our model to make workouts even more personalized to each new customer embarking on their fitness journey. Beyond that, we are extremely passionate about overall health and fitness, so including a nutritional aspect would be a great addition to the feature, specifically by expanding the APIs we pull data from and utilizing LLMs to generate meal preparation, scrape recipes, or recommend nearby eateries.
|
## Inspiration
After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real-world.
## What it does
Our game is supported on every device that has a modern web browser, absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together.
Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to bid on enhancements to their location in the game-world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location.
By offloading the cost of the game onto businesses instead of players we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing.
## How We built it
The stack for our game is built entirely around the Node.js ecosystem: express, socket.io, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down.
To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps to keep websites safe using traffic filtering and other techniques.
Finally, our public facing website makes use of Mithril MVC, an incredibly fast and light one-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
|
losing
|
## Inspiration
As avid readers, we wanted a tool to track our reading metrics. As a child, one of us struggled with concentrating and focusing while reading. Specifically, there was a strong tendency to zone out. Our app provides the ability for a user to track their reading metrics and also quantify their progress in improving their reading skills.
## What it does
By incorporating Ad Hawk’s eye-tracking hardware into our build, we’ve developed a reading performance tracker system that tracks and analyzes reading patterns and behaviours, presenting dynamic second-by-second updates delivered to your phone through our app.
These metrics are calculated through our linear algebraic models, then provided to our users in an elegant UI interface on their phones. We provide an opportunity to identify any areas of potential improvement in a user’s reading capabilities.
## How we built it
We used the Ad Hawk hardware and backend to record the eye movements. We used their Python SDK to collect and use the data in our mathematical models. From there, we outputted the data into our Flutter frontend which displays the metrics and data for the user to see.
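As an illustration of the kind of model involved, here is a small Python sketch that estimates reading speed by counting return sweeps in horizontal gaze data; the thresholds and words-per-line figure are assumptions, not the project's tuned values:

```python
def reading_metrics(gaze_samples, sample_rate_hz=250, words_per_line=12):
    """Estimate reading speed from normalized horizontal gaze positions.

    gaze_samples: list of horizontal eye positions (0 = left margin, 1 = right margin).
    A large leftward jump is treated as a return sweep, i.e. the start of a new line.
    """
    line_breaks = 0
    for prev, cur in zip(gaze_samples, gaze_samples[1:]):
        if prev - cur > 0.5:          # sudden jump back toward the left margin
            line_breaks += 1
    minutes = len(gaze_samples) / sample_rate_hz / 60
    words_read = line_breaks * words_per_line
    wpm = words_read / minutes if minutes else 0
    return {"lines_read": line_breaks, "words_per_minute": round(wpm, 1)}

# Example: a few seconds of synthetic left-to-right sweeps (three lines of reading)
samples = [i / 100 for i in range(100)] * 3
print(reading_metrics(samples, sample_rate_hz=60))
```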
## Challenges we ran into
Piping in data from Python to Flutter during runtime was slightly frustrating because of the latency issues we faced. Eventually, we decided to use the computer's own local server to accurately display and transfer the data.
## Accomplishments that we're proud of
We're proud of our models that calculate reading speed and detect page turns and other events, recorded simply through changes in eye movement.
## What we learned
We learned that Software Development in teams is best done by communicating effectively and working together with the same final vision in mind. Along with this, we learned that it's extremely critical to plan out small details as well as broader ones to ensure plan execution occurs seamlessly.
## What's next for SeeHawk
We hope to add more metrics to our app, specifically adding a zone-out tracker which would record the number of times a user "zones out".
|
## Inspiration
Cliff is dyslexic, so reading is difficult and slow for him and makes school really difficult.
But, he loves books and listens to 100+ audiobooks/yr. However, most books don't have an audiobook, especially not textbooks for schools, and articles that are passed out in class. This is an issue not only for the 160M people in the developed world with dyslexia but also for the 250M people with low vision acuity.
After moving to the U.S. at age 13, Cliff also needed something to help him translate assignments he didn't understand in school.
Most people become nearsighted as they get older, but often don't have their glasses with them. This makes it hard to read forms when needed. Being able to listen instead of reading is a really effective solution here.
## What it does
Audiobook maker allows a user to scan a physical book with their phone to produce a digital copy that can be played as an audiobook instantaneously in whatever language they choose. It also lets you read the book with text at whatever size you like to help people who have low vision acuity or are missing their glasses.
## How we built it
In Swift and iOS using Google ML and a few clever algorithms we developed to produce high-quality scanning, and high quality reading with low processing time.
## Challenges we ran into
We had to redesign a lot of the features to make the app user experience flow well and to allow the processing to happen fast enough.
## Accomplishments that we're proud of
We reduced the time it took to scan a book by 15X after one design iteration and reduced the processing time it took to OCR (Optical Character Recognition) the book from an hour plus, to instantaneously using an algorithm we built.
We allow the user to have audiobooks on their phone, in multiple languages, that take up virtually no space on the phone.
## What we learned
How to work with Google ML, how to work around OCR processing time. How to suffer through git Xcode Storyboard merge conflicts, how to use Amazon's AWS/Alexa's machine learning platform.
## What's next for Audiobook Maker
Deployment and use across the world by people who have Dyslexia or Low vision acuity, who are learning a new language or who just don't have their reading glasses but still want to function. We envision our app being used primarily for education in schools - specifically schools that have low-income populations who can't afford to buy multiple copies of books or audiobooks in multiple languages and formats.
## Treehack themes
treehacks education Vertical > personalization > learning styles (build a learning platform, tailored to the learning styles of auditory learners) - I'm an auditory learner, and I've dreamed of a tool like this since I was 8 years old and struggling to learn to read. I'm so excited that now it exists and every student with dyslexia or a learning difference will have access to it.
treehacks education Vertical > personalization > multilingual education (English as a second-language students often get overlooked. Are there ways to leverage technology to create more open, multilingual classrooms?) Our software allows any book to become multilingual.
treehacks education Vertical > accessibility > refugee education (What are ways technology can be used to bring content and make education accessible to refugees? How can we make the transition to education in a new country smoother?) - Make it so they can listen to material in their mother tongue if needed, or have a voice read along with them in English. Make it so that they can carry their books wherever they go by scanning a book once and then having it for life.
treehacks education Vertical > language & literacy > mobile apps for English literacy (How can you build mobile apps to increase English fluency and literacy amongst students and adults?) - One of the best ways to learn how to read is to listen to someone else doing it and to follow along yourself. Audiobook maker lets you do that. From a practical perspective, learning how to read is hard and it is difficult for an adult learning a new language to achieve proficiency and a high reading speed. To bridge that gap, Audiobook Maker makes sure that every person can understand and learn from any text they encounter.
treehacks education Vertical > language & literacy > in-person learning (many people want to learn second languages) - Audiobook maker allows users to live in a foreign country and understand more of what is going on. It allows users to challenge themselves to read or listen to more of their daily work in the language they are trying to learn, and it can help users understand while they are studying a foreign language when the meaning of text in a book or elsewhere is not clear.
We worked a lot with Google ML and Amazon AWS.
|
## Inspiration
Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. That is where we found our inspiration. The issue we discovered is that due to the rise of modern technologies, movies and other films are more captivating than reading a boring book. We wanted to change that.
## What it does
Immersify combines Google’s Mobile Vision API, Firebase, IBM Watson, and Spotify's API. It first scans text through our Android application using Google’s Mobile Vision API. After the text is stored in Firebase, IBM Watson’s Tone Analyzer deduces its emotion. A dominant emotional score is then sent to Spotify’s API, where appropriate music is played to the user. With Immersify, text can finally be brought to life and readers can feel more engaged in their novels.
## How we built it
On the mobile side, the app was developed using Android Studio. The app uses Google’s Mobile Vision API to recognize and detect text captured through the phone’s camera. The text is then uploaded to our Firebase database.
On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson’s Tone Analyzer API to determine the tone of each individual sentence within the paragraph. We then ran our own algorithm to determine the overall mood of the paragraph based on the different tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists will play to match the mood of the text.
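The overall-mood step could look roughly like the Python sketch below, which sums per-sentence tone scores and picks the strongest one. The tone labels and the simple sum-and-average are assumptions for illustration; the project's exact scoring algorithm isn't spelled out:

```python
from collections import defaultdict

def dominant_mood(sentence_tones):
    """sentence_tones: a list of dicts like {"joy": 0.7, "sadness": 0.1, ...},
    one per sentence, as returned by a tone analyzer.
    Returns the strongest overall tone and its average strength."""
    totals = defaultdict(float)
    for tones in sentence_tones:
        for tone, score in tones.items():
            totals[tone] += score
    mood = max(totals, key=totals.get)
    return mood, totals[mood] / len(sentence_tones)

# Example: two mostly-joyful sentences and one sad one.
print(dominant_mood([{"joy": 0.8}, {"joy": 0.6, "sadness": 0.2}, {"sadness": 0.7}]))
```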
## Challenges we ran into
Trying to work with Firebase to cooperate with our mobile app and our web app was difficult for our whole team. Querying the API took multiple attempts as our post request to IBM Watson was out of sync. In addition, the text recognition function in our mobile application did not perform as accurately as we anticipated.
## Accomplishments that we're proud of
Some accomplishments we’re proud of is successfully using Google’s Mobile Vision API and IBM Watson’s API.
## What we learned
We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we never worked with in the past. Aside from the technical aspects, as a team, we learned collaborate together to tackle all the tough challenges we encountered.
## What's next for Immersify
The next step for Immersify is to incorporate this software with Google Glasses. This would eliminate the two step process of having to take a picture on an Android app and going to the web app to generate a playlist.
|
winning
|
## Inspiration
We’re huge fans of Spotify, but we’ve always hoped there were more filters for our liked songs outside of just album, artist, and song title. What if we could instead filter by what mood, or “vibe” each song represents? Vibe's got you covered.
## What it does
Vibe, powered by Wolfram AI, analyzes your musical tastes and gives you a holistic overview of your top Spotify songs. It creates a custom playlist for you from your top songs, based on your current mood.
## How I built it
Vibe leverages the Spotify API and our own sentiment analysis to get the musical and lyrical attributes of each top song in your Spotify account. We then trained a machine learning classifier API using the Wolfram Platform (Wolfram One Instant API) to classify the "vibe" of a song according to its attributes. The training data for this classifier was obtained from publicly available Spotify playlists that were tagged with a specific mood.
For the frontend, both React and Bootstrap were used. For the backend, we used the Wolfram One platform for the classifier, while the sentiment analysis was built with a Python/Flask stack, the Genius API to get URLs of song lyrics, BeautifulSoup4 to web scrape the lyrics, and vaderSentiment to carry out sentiment analysis.
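A minimal sketch of the lyric-sentiment step, assuming a lyrics page URL has already been obtained from the Genius API; the page parsing is deliberately simplified (a real Genius page would need specific selectors) and the function name is illustrative:

```python
import requests
from bs4 import BeautifulSoup
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def lyric_sentiment(lyrics_url):
    # Fetch the lyrics page and strip it down to plain text.
    html = requests.get(lyrics_url).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    # VADER's compound score is in [-1, 1]; positive values lean happy.
    return analyzer.polarity_scores(text)["compound"]
```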
## Challenges I ran into
This is the first time we've used flask and Wolfram, and it was interesting to learn about these new technologies while navigating through the difficulties.
## Accomplishments that I'm proud of
Using new technologies!
## What I learned
Sentiment analysis, wolfram, flask
## What's next for Vibe
We hope to:
* improve our analysis/machine-learning metrics
* raise the accuracy of our model by introducing 10x more training data
* add more vibes
* build a similar app for Apple Music
|
Presentation Link: <https://docs.google.com/presentation/d/1_4Yy5c729_TXS8N55qw7Bi1yjCicuOIpnx2LxYniTlY/edit?usp=sharing>
SHORT DESCRIPTION (Cohere generated this)
Enjoy music from the good old days? your playlist will generate songs from your favourite year (e.g. 2010) and artist (e.g. Linkin Park)
## Inspiration
We all love listening to music on Spotify, but depending on the mood of the day, we want to listen to songs on different themes. Impressed by the cool natural language processing tech that Cohere offers, we decided to create Songify that uses Cohere to create Spotify playlists based on the user's request.
## What it does
The purpose of Songify is to make the process of discovering new music seamless and hopefully provide our users with some entertainment. The algorithm is not limited in its search words, so anything Songify is prompted with will generate a playlist, whether it be out of serious music interest or just for laughs.
Songify uses a web based platform to collect user input which Cohere then scans and extracts keywords from. Cohere then sends those keywords to the Spotify API which looks for songs containing the data, creates a new playlist under the user's account and populates the songs into the playlist. Songify will then return a webpage with an embedded playlist where you can examine the songs that were added instantly.
## How we built it
The project revolved around 4 main tasks; Implementing the Spotify API, the Cohere API, creating a webpage and integrating our webpage and backend. Python was the language of choice since it supported the Spotify API, Cohere and Spotipy which extensively saved us time in learning to use Spotify's API. Our team then spent time learning about and executing our specific tasks and came together finally for the integration.
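A rough sketch of the Spotify half of that flow using Spotipy; the keyword string is assumed to come from the Cohere step (not shown), credentials are read from Spotipy's standard environment variables, and the scope, search limit, and playlist name are illustrative choices rather than the project's real values:

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Assumes SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and SPOTIPY_REDIRECT_URI
# are set in the environment.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-public"))

def build_playlist(keywords, name="Songify mix"):
    # Search Spotify for tracks matching the Cohere-extracted keywords.
    results = sp.search(q=keywords, type="track", limit=10)
    track_ids = [t["id"] for t in results["tracks"]["items"]]
    # Create a playlist under the current user and fill it with the tracks.
    user_id = sp.current_user()["id"]
    playlist = sp.user_playlist_create(user_id, name)
    sp.playlist_add_items(playlist["id"], track_ids)
    return playlist["external_urls"]["spotify"]
```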
## Challenges we ran into
For most of our team, this was our first time working with Python, APIs and integrating front and back end code. Learning all these skills in the span of 3 days was extremely challenging and time consuming. The first hurdle that we had to overcome was learning to read API documentation. The documentation was very intimidating to look at and understanding the key concepts such as API keys, Authorizations, REST calls was very confusing at first. The learning process included watching countless YouTube videos, asking mentors and sponsors for help and hours of trial and error.
## Accomplishments that we're proud of
Although our project is not the most flashy, our team has a lot to be proud of. Creating a product with the limited knowledge we had and building an understanding of Python, APIs, integration and front end development in a tight time frame is an accomplishment to be celebrated. Our goal for this hackathon was to make a tangible product and we succeeded in that regard.
## What we learned
Working with Spotify's API provided a lot of insight into how companies store and work with data. Through Songify, we learned that most Spotify information is stored in JSON objects spanning several hundred lines per song and several thousand for albums. Understanding the authentication process was also a headache since it had many key details such as client ids, API keys, redirect addresses and scopes.
Flask was very challenging to tackle, since it was our first time dealing with virtual environments, extensive use of windows command prompt and new notations such as @app.route. Integrating Flask with our HTML skeleton and back end Python files was also difficult due to our limited knowledge in integration.
Hack Western was a very enriching experience for our team, exposing us to technologies we may not have encountered if not for this opportunity.
## What's next for HackWesternPlaylist
In the future, we hope to implement a search algorithm not only for names, but for artists, artist genres, and the ability to scrub other people's playlists that the user enjoys listening to. The appearance of our product is also suboptimal and cleaning up the front end of the site will make it more appealing to users. We believe that Songify has a lot of flexibility in terms of what it can grow into in the future and are excited to work on it in the future.
|
## Inspiration
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and their emotions.
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which key words are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
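In spirit, the brainwave-to-prompt mapping could look like the toy Python sketch below. The band-power heuristic, mood labels, and prompt keywords are all assumptions made for illustration; the project's actual interpretation logic isn't described in detail:

```python
def mood_from_bands(alpha, beta, theta):
    """Toy heuristic (an assumption, not the project's model): relatively high
    alpha power suggests a relaxed state, high beta suggests focus or stress,
    and high theta suggests drowsiness."""
    if alpha >= beta and alpha >= theta:
        return "calm"
    if beta >= alpha and beta >= theta:
        return "focused"
    return "dreamy"

def prompt_for(mood):
    # Keywords fed to the image model; purely illustrative choices.
    styles = {
        "calm": "serene watercolor landscape, soft light",
        "focused": "sharp geometric abstract art, bold colors",
        "dreamy": "surreal pastel dreamscape, floating shapes",
    }
    return styles[mood]

print(prompt_for(mood_from_bands(alpha=0.9, beta=0.4, theta=0.3)))
```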
## Challenges we ran into
We faced a series of challenges throughout the Hackathon, which is perhaps the essence of all hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a 'Be Real'-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
|
partial
|
## Inspiration
This week a 16 year old girl went missing outside Oslo, in Norway. Her parents posted about it on Facebook, and it was quickly shared by thousands of people. An immense amount of comments scattered around a large amount of Facebook posts consisted of people trying to help, by offering to hang up posters, aid in the search and similar. A Facebook group was started, and grew to over 15 000 people within a day. The girl was found, and maybe a few of the contributions helped?
This is just one example, and similar events probably play out in a large number of countries and communities around the world. Even though Facebook is a really impressive tool for quickly sharing information like this across a huge network, it falls short on the other end - of letting people contribute to the search. Facebook groups are too linear, and has few tools that aid in making this as streamlined as possible. The idea is to create a platform that covers this.
## What it does
Crowd Search is split into two main parts:
* The first part displays structured information about the case, letting people quickly get a grasp of the situation at hand. It makes good use of rich media and UX design, and presents the data in an understandable way.
* The second part is geared around collaboration between volunteers. It allows the moderators of the missing person search to post information, updates and tasks that people can perform to contribute towards.
## How we built it
Crowd Search makes heavy use of Firebase, and is because of this a completely front-end based application, hosted on Firebase Hosting. The application itself is built using React.
By using Firebase our application syncs updates in realtime, whether it's comments, new posts, or something as a simple as a task list checkbox. Firebase also lets us easily define a series of permission rules, to make sure that only authorized moderators and admins can change existing data and similar. Authentication is done using Facebook, through Firebase's authentication provider.
To make development as smooth as possible we make use of a series of utilities:
* We compile our JavaScript files with Babel, which lets us use new ECMAScript 2016+ features.
* We quality check our source code using ESLint (known as linting)
* We use Webpack to bundle all our JS and Sass files together into one bundle, which can then be deployed to any static file host (we're using Firebase Hosting).
## What's next for Crowd Search
The features presented here function as an MVP to showcase what the platform could be used for. There's a lot of possibilities for extension, with a few examples being:
* Interactive maps
* Situational timelines
* Contact information
|
## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
Also, it would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
## How we built it
Individual pages were built using HTML, CSS and JS, which would then interact with a server built using Node.js and Express.js. The database, which used cockroachDB, was hosted with Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to upload the Node.js portion on the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once.
|
As a tourist, one faces many difficulties from the moment he lands. Finding the right tourist guide, getting them at the right price and getting rid of the same old monotonous itinerary; These are some of the problems which motivated us to build Travelogue.
Travelogue's greatest strength is that it personalizes every event or interaction to the user's interests in some way or another. It successfully matches customers to signed-up local guides using its advanced matching algorithm.
The project's front end is developed in HTML, CSS, JavaScript, PHP, jQuery and Bootstrap, supported by a strong back end built on the Flask framework (Python). We used the basic SQLite3 database for instant prototype deployment of the idea and used Java with the Faker library to create massive amounts of training data.
Throughout the project, we ran into design issues such as frame layouts and how to display the data in the most effective manner; we believe this constant iteration on the design results in the layout best suited for our web application. We also faced some difficulty integrating the back end of the application, written in Python, with the front end created using JavaScript and PHP.
This hack was a solution to the common existing problem of matching people with similar interests. We brainstormed a lot and came up with a 3 step algorithm to tackle the issue. The algorithm focuses on the importance of features and how a customer relates to the guide using those features.
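A hypothetical Python sketch of the feature-based matching idea; the weight values, data shapes, and function names are illustrative assumptions rather than the actual 3-step algorithm:

```python
def match_score(customer_interests, guide_interests, weights):
    """Score a guide by summing the weights of the interests they share
    with the customer (weights and feature names are illustrative)."""
    shared = set(customer_interests) & set(guide_interests)
    return sum(weights.get(feature, 1.0) for feature in shared)

def best_guides(customer, guides, weights, top_n=3):
    # Rank all signed-up guides by their score against this customer.
    ranked = sorted(
        guides,
        key=lambda g: match_score(customer["interests"], g["interests"], weights),
        reverse=True,
    )
    return ranked[:top_n]
```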
Working with a team of diverse skill sets, all of us learned a lot from each other and by overcoming the challenges we faced during the development of this web application.
We have planned to incorporate Facebook's massive data and widely popular Graph API to extract features and interests for matching customers with the local guides. After implementing these ideas, the user has a simple one click Facebook login which helps our matching algorithm work on large amount of meaningful data.
|
partial
|
## Inspiration
**In megacities, traffic, congestion, and hectic daily life make finding a parking slot for your car very time-consuming and difficult.**
## What it does
**A Parking Assistant built with OpenCV that detects available parking slots whenever the user requests them. It is highly accurate and uses Twilio to send an instant notification (DM) to the user.**
## How I built it
**Using Python, MaskRCNN, OpenCV, and Twilio (to send messages)**
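A minimal sketch of how the detected car boxes could be compared against known slot positions and a Twilio message sent; the box-overlap heuristic, threshold, and placeholder credentials are assumptions, while the Twilio call follows the library's documented usage:

```python
from twilio.rest import Client

def is_occupied(parking_box, car_boxes, overlap_threshold=0.4):
    """Treat a slot as occupied if any detected car box covers enough of it.
    Boxes are (x1, y1, x2, y2); the threshold is an illustrative choice."""
    px1, py1, px2, py2 = parking_box
    slot_area = (px2 - px1) * (py2 - py1)
    for cx1, cy1, cx2, cy2 in car_boxes:
        ix1, iy1 = max(px1, cx1), max(py1, cy1)
        ix2, iy2 = min(px2, cx2), min(py2, cy2)
        overlap = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        if overlap / slot_area > overlap_threshold:
            return True
    return False

def notify_user(free_slots, to_number):
    client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials
    client.messages.create(
        body=f"{free_slots} parking slot(s) currently available.",
        from_="+10000000000",  # placeholder Twilio number
        to=to_number,
    )
```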
## Challenges I ran into
**Training the model**
**Specifically finding the Cars**
**Checking for the Space Available**
**Sending Message to the User**
## Accomplishments that I'm proud of
**I achieved my goal and submitted the project on time, which I had been scared of.**
## What I've learned
**Lots of work to do solo, the need for a team, time management, and finally the meaning of the word Hackathon.**
## What's next for Parking Assistant using OpenCV
**Real-Life implementation of the use case, tune the model more, making it scalable, making it work at the nights as well.**
|
## Inspiration
DeliverAI was inspired by the current shift we are seeing in the automotive and delivery industries. Driverless cars are slowly but surely entering the space, and we thought driverless delivery vehicles would be a very interesting topic for our project. While drones are set to deliver packages in the near future, heavier packages would be much better suited to a ground-based vehicle.
## What it does
DeliverAI has three primary components. The physical prototype is a reconfigured RC car that was hacked together with a raspberry pi and a whole lot of motors, breadboards and resistors. Atop this monstrosity rides the package to be delivered in a cardboard "safe", along with a front facing camera (in an Android smartphone) to scan the faces of customers.
The journey begins on the web application, at [link](https://deliverai.github.io/dAIWebApp/). To sign up, a user submits webcam photos of themselves for authentication when their package arrives. They then select a parcel from the shop, and await its arrival. This alerts the car that a delivery is ready to begin. The car proceeds to travel to the address of the customer. Upon arrival, the car will text the customer to notify them that their package has arrived. The customer must then come to the bot, and look into the camera on its front. If the face of the customer matches the face saved to the purchasing account, the car notifies the customer and opens the safe.
## How we built it
As mentioned prior, DeliverAI has three primary components, the car hardware, the android application and the web application.
### Hardware
The hardware is built from a "repurposed" remote control car. It is wired to a raspberry pi which has various python programs checking our firebase database for changes. The pi is also wired to the safe, which opens when a certain value is changed on the database.
*Note:* a micro city was built using old cardboard boxes to service the demo.
### Android
The onboard android device is the brain of the car. It texts customers through Twilio, scans users faces, and authorizes the 'safe' to open. Facial recognition is done using the Kairos API.
### Web
The web component, built entirely using HTML, CSS and JavaScript, is where all of the user interaction takes place. This is where customers register themselves, and also where they order items. Original designs and custom logos were created to build the website.
### Firebase
While not included as a primary component, Firebase was essential in the construction of DeliverAI. The real-time database, by Firebase, is used for the communication between the three components mentioned above.
## Challenges we ran into
Connecting Firebase to the Raspberry Pi proved more difficult than expected. A custom listener was eventually implemented that checks for changes in the database every 2 seconds.
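A sketch of that polling listener in Python; the firebase_admin SDK, database path, and reset logic are assumptions made for illustration, since the write-up only states that a Python script checks Firebase every two seconds:

```python
import time
import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials and database URL; the real project's paths differ.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://example.firebaseio.com"})

def watch_safe(open_safe):
    ref = db.reference("car/safe_should_open")  # hypothetical path
    while True:
        if ref.get():          # flag flipped by the Android/web components
            open_safe()        # actuate the mechanism that unlocks the safe
            ref.set(False)     # reset the flag after opening
        time.sleep(2)          # the 2-second polling interval mentioned above
```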
Calibrating the motors was another challenge, particularly tuning the amount of power delivered to each one.
Sending information from the web application to the Kairos API also proved to be a large learning curve.
## Accomplishments that we're proud of
We are extremely proud that we managed to get a fully functional delivery system in the allotted time.
The most exciting moment for us was when we managed to get our 'safe' to open for the first time when a valid face was exposed to the camera. That was the moment we realized that everything was starting to come together.
## What we learned
We learned a *ton*. None of us have much experience with hardware, so working with a Raspberry Pi and RC Car was both stressful and incredibly rewarding.
We also learned how difficult it can be to synchronize data across so many different components of a project, but were extremely happy with how Firebase managed this.
## What's next for DeliverAI
Originally, the concept for DeliverAI involved, well, some AI. Moving forward, we hope to create a more dynamic path finding algorithm when going to a certain address. The goal is that eventually a real world equivalent to this could be implemented that could learn the roads and find the best way to deliver packages to customers on land.
## Problems it could solve
Delivery Workers stealing packages or taking home packages and marking them as delivered.
Drones can only deliver in good weather conditions, while cars can function in all weather conditions.
Potentially more efficient in delivering goods than humans/other methods of delivery
|
## Inspiration
As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system
## What it does
We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning
## How we built it
Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle
## Challenges we ran into
One of the main challenges was transferring data from an iOS app/device to Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly
## Accomplishments that we're proud of
Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight!
## What we learned
We learned about Swift and more generally about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future
## What's next for Anti-Bumper Car - A Collision Avoidance System
Specifically for this project, solving an issue related to file IO and reducing latency would be the next step in providing a more reliable collision avoiding system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road
|
partial
|
# yhack
JuxtaFeeling is a Flask web application that visualizes the varying emotions between two different people having a conversation through our interactive graphs and probability data. By using the Vokaturi, IBM Watson, and Indicoio APIs, we were able to analyze both written text and audio clips to detect the emotions of two speakers in real-time. Acceptable file formats are .txt and .wav.
Note: To differentiate between different speakers in written form, please include two new lines between different speakers in the .txt file.
Here is a quick rundown of JuxtaFeeling through our slideshow: <https://docs.google.com/presentation/d/1O_7CY1buPsd4_-QvMMSnkMQa9cbhAgCDZ8kVNx8aKWs/edit?usp=sharing>
|
## Inspiration
As university students, we and our peers have found that our garbage and recycling have not been taken by the garbage truck for some unknown reason. They give us papers or stickers with warnings, but these get lost in the wind, chewed up by animals, or destroyed because of the weather. For homeowners or residents, the lack of communication is frustrating because we want our garbage to be taken away and we don't know why it wasn't. For garbage disposal workers, the lack of communication is detrimental because residents do not know what to fix for the next time.
## What it does
This app allows garbage disposal employees to communicate to residents about what was incorrect with how the garbage and recycling are set out on the street. Through a checklist format, employees can select the various wrongs, which are then compiled into an email and sent to the house's residents.
## How we built it
The team built this by using a Python package called **Kivy** that allowed us to create a GUI interface that can then be packaged into an iOS or Android app.
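A minimal Kivy sketch of the checklist idea; the issue labels and the print placeholder standing in for the email step are assumptions, not the app's actual code:

```python
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.checkbox import CheckBox
from kivy.uix.label import Label
from kivy.uix.button import Button

ISSUES = ["Bin overfilled", "Wrong items in recycling", "Bin not at curb"]

class ChecklistApp(App):
    def build(self):
        root = BoxLayout(orientation="vertical")
        self.boxes = []
        for issue in ISSUES:
            row = BoxLayout()          # one row per checklist item
            box = CheckBox()
            row.add_widget(box)
            row.add_widget(Label(text=issue))
            self.boxes.append((issue, box))
            root.add_widget(row)
        root.add_widget(Button(text="Send report", on_press=self.send))
        return root

    def send(self, _button):
        selected = [issue for issue, box in self.boxes if box.active]
        # The real app would compile these into an email to the resident.
        print("Would email resident about:", selected)

if __name__ == "__main__":
    ChecklistApp().run()
```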
## Challenges we ran into
The greatest challenge we faced was the learning curve that arrived when beginning to code the app. All team members had never worked on creating an app, or with back-end and front-end coding. However, it was an excellent day of learning.
## Accomplishments that we're proud of
The team is proud of having a working user interface to present. We are also proud of our interactive, aesthetic, and easy-to-use UI/UX design.
## What we learned
We learned skills in front-end and back-end coding. We also furthered our skills in Python by using a new library, Kivy. We gained skills in teamwork and collaboration.
## What's next for Waste Notify
Further steps for Waste Notify would likely involve collecting data from Utilities Kingston and the city. It would also require more back-end coding to set up these databases and ensure that data is secure. Our target area was the University District in Kingston; however, a further application of this could expand to other geographical locations. The biggest next step, though, is adding a few APIs for weather, maps and schedules.
|
## Inspiration
We have been through a phase where we had strong feelings over an issue (police brutality, racism, etc) and wanted to help spread the word to unite and take a stand against the issue.
With blogs and websites being used primarily to shed light on issues for people around the world, the internet is increasingly becoming an important way to spread public awareness. With the world entering the age of AI, we thought of ways to use artificial intelligence to help get the world involved in the issues they care about.
## What it does
We present webGen.ai, a service powered by AI and computer vision that helps users generate websites with meaningful content instantly. By scribbling your desired web design on a piece of paper, the app takes your drawing and generates your dream website instantly. Then, based on the topic that you care about, the app finds resources and content from the Internet to populate on to your site. With a large set of information that can be used to educate you, your friends and family, one precious idea can grow into one that captures the attention of everyone across the globe.
With many websites being generated and shared in order to help educate in certain issues, an internet presence can be made, and an issue finally can be solved.
On a more casual note, it is also capable of getting information for the user about a certain topic that he or she may be curious about, such as learning to play the guitar.
## How we built it
We used OpenCV for the computer vision in order to detect the types of regions and icons and determine how the site will be designed. We used stdlib to host our AI-generated websites and coordinate our APIs to work together harmoniously.
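A rough OpenCV sketch of how hand-drawn boxes on paper could be turned into layout regions; the thresholding choices and area cutoff are illustrative assumptions rather than the project's actual pipeline:

```python
import cv2

def detect_regions(image_path):
    """Threshold a photo of the sketch, find external contours, and keep the
    larger bounding boxes as candidate layout regions."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    regions = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 2000:  # ignore pen noise and small strokes
            regions.append({"x": x, "y": y, "width": w, "height": h})
    return regions
```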
## Challenges we ran into
One of the hardest challenges was to integrate everything together, from the computer vision and web scraping to the full stack. We had to make sure that everything was working together properly without breaking.
## Accomplishments that we're proud of
We are happy to be using AI to help other people spread awareness of issues important to them and help educate the public about problems that should be tackled in order to help us aim for a better society.
## What we learned
We learned about using the stdlib to host our sites and using their API services as well as Python Flask.
## What's next for webGen.ai
* Allow generated websites to handle more complex tasks such as JavaScript functionality or service calls
* Better design for the websites
* More control and flexibility in designing the website
|
winning
|
## Inspiration
So many people around the world, including those dear to us, suffer from mental health issues such as depression. Here in Berkeley, for example, the resources put aside to combat these problems are constrained. Journaling is one method commonly employed to fight mental issues; it evokes mindfulness and provides a greater sense of confidence and self-identity.
## What it does
SmartJournal is a place for people to write entries into an online journal. These entries are then routed to and monitored by a therapist, who can see the journals of multiple people under their care. The entries are analyzed via Natural Language Processing and data analytics to give the therapist better information with which they can help their patient, such as an evolving sentiment and scans for problematic language. The therapist in turn monitors these journals with the help of these statistics and can give feedback to their patients.
## How we built it
We built the web application using the Flask web framework, with Firebase acting as our backend. Additionally, we utilized Microsoft Azure for sentiment analysis and Key Phrase Extraction. We linked everything together using HTML, CSS, and Native Javascript.
## Challenges we ran into
We struggled with vectorizing lots of Tweets to figure out key phrases linked with depression, and it was very hard to test as every time we did so we would have to wait another 40 minutes. However, it ended up working out finally in the end!
## Accomplishments that we're proud of
We managed to navigate through Microsoft Azure and implement Firebase correctly. It was really cool building a live application over the course of this hackathon and we are happy that we were able to tie everything together at the end, even if at times it seemed very difficult
## What we learned
We learned a lot about Natural Language Processing, both naively doing analysis and utilizing other resources. Additionally, we gained a lot of web development experience from trial and error.
## What's next for SmartJournal
We aim to provide better analysis on the actual journal entries to further aid the therapist in their treatments, and potentially to actually launch the web application, as we feel that it could be really useful for a lot of people in our community.
|
View the SlideDeck for this project at: [slides](https://docs.google.com/presentation/d/1G1M9v0Vk2-tAhulnirHIsoivKq3WK7E2tx3RZW12Zas/edit?usp=sharing)
## Inspiration / Why
It is no surprise that mental health has been a prevailing issue in modern society. 16.2 million adults in the US and 300 million people in the world have depression according to the World Health Organization. Nearly 50 percent of all people diagnosed with depression are also diagnosed with anxiety. Furthermore, anxiety and depression rates are a rising issue among the teenage and adolescent population. About 20 percent of all teens experience depression before they reach adulthood, and only 30 percent of depressed teens are being treated for it.
To help battle for mental well-being within this space, we created DearAI. Since many teenagers do not actively seek out support for potential mental health issues (either due to financial or personal reasons), we want to find a way to inform teens about their emotions using machine learning and NLP and recommend to them activities designed to improve their well-being.
## Our Product:
To help us achieve this goal, we wanted to create an app that integrated journaling, a great way for users to input and track their emotions over time. Journaling has been shown to reduce stress, improve immune function, boost mood, and strengthen emotional functions. Journaling apps already exist; however, our app performs sentiment analysis on the user entries to help users be aware of and keep track of their emotions over time.
Furthermore, every time a user inputs an entry, we want to recommend the user something that will lighten up their day if they are having a bad day, or something that will keep their day strong if they are having a good day. As a result, if the natural language processing results return a negative sentiment like fear or sadness, we will recommend a variety of prescriptions from meditation, which has shown to decrease anxiety and depression, to cat videos on Youtube. We currently also recommend dining options and can expand these recommendations to other activities such as outdoors activities (i.e. hiking, climbing) or movies.
**We want to improve the mental well-being and lifestyle of our users through machine learning and journaling. This is why we created DearAI.**
## Implementation / How
Research has found that ML/AI can detect the emotions of a user better than the user themself can. As a result, we leveraged the power of IBM Watson’s NLP algorithms to extract the sentiments within a user’s textual journal entries. With the user’s emotions now quantified, DearAI then makes recommendations to either improve or strengthen the user’s current state of mind. The program makes a series of requests to various API endpoints, and we explored many APIs including Yelp, Spotify, OMDb, and Youtube. Their databases have been integrated and this has allowed us to curate the content of the recommendation based on the user’s specific emotion, because not all forms of entertainment are relevant to all emotions.
For example, the detection of sadness could result in recommendations ranging from guided meditation to comedy. Each journal entry is also saved so that users can monitor the development of their emotions over time.
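The emotion-to-recommendation step could be sketched like this in Python; the emotion labels, suggestion lists, and function name are assumptions, and the real app would query the Yelp/Spotify/YouTube/OMDb APIs rather than a static table:

```python
RECOMMENDATIONS = {
    # Illustrative mapping only.
    "sadness": ["guided meditation video", "stand-up comedy special", "cat videos"],
    "fear": ["calming playlist", "breathing exercise"],
    "joy": ["upbeat playlist", "nearby dessert spots"],
}

def recommend(emotion_scores, top_n=2):
    """emotion_scores: dict of emotion -> score from the NLP step.
    Pick the strongest emotion and return a few matching suggestions."""
    dominant = max(emotion_scores, key=emotion_scores.get)
    return RECOMMENDATIONS.get(dominant, ["short walk outside"])[:top_n]

print(recommend({"sadness": 0.62, "joy": 0.10, "fear": 0.05}))
```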
## Future
There are a considerable amount of features that we did not have the opportunity to implement that we believe would have improved the app experience. In the future, we would like to include video and audio recording so that the user can feel more natural speaking their thoughts and also so that we can use computer vision analysis on the video to help us more accurately determine users’ emotions. Also, we would like to integrate a recommendation system via reinforcement learning by having the user input whether our recommendations improved their mood or not, so that we can more accurately prescribe recommendations as well. Lastly, we can also expand the APIs we use to allow for more recommendations.
|
## Inspiration
As students, we have had to face countless interviews, whether it be for school clubs, research positions, or those coveted internships. A majority of these interviews are held virtually, through zoom or automated platforms like HireVue. These interviews and the days leading up to them are often filled with nervousness and anxiety. Often it is also hard to schedule mock interviews with engineers in positions that are similar to those that would conduct interviews. Mostly, students end up winging this process and achieving poor results. Having worked in early stage startups before, we saw a similar trend in early stage founders who were applying to incubators like YCombinator and TechStars or even raising their seed and Series A rounds. The resemblance of the problem allowed us to think even more about the problems that founders face with the biggest being a fear of the actual investors themselves. So we decided to solve this problem through repetition and muscle memory.
## What it does
SharkProof is an all-in-one tool to perfect your interview skills and prepare for your best interview performance. It has two main target audiences: students and founders. Currently, we have focused the tool primarily on founders, especially those who have upcoming pitches or interviews with VCs and incubators. Once a founder logs on to the platform, they are allowed to choose a persona that will interview them. This persona can be another founder, such as Elon Musk, or a potential VC like Peter Thiel. Essentially it will let you choose or create the persona of the person that is actually going to be interviewing you in the real world. Once the persona has been selected, you will enter the interview room. In the interview room, the persona you have selected will interview you based on their personality (Mark Cuban might ask for more financial details while Oprah might ask for more impact-related information). On top of this, the interviewer’s voice will be matched to their actual voice, allowing you to truly imagine that you are speaking to the actual person. And once your interview is completed, you will receive feedback detailing exactly what you need to work on and what you did well on. And when we say detailed, we truly mean detailed. On top of an overall interview score out of 100, you will be told which emotions you are depicting the most throughout your interview as well as for specific questions. The same applies to your hand gestures and facial expressions. If you are touching your hair too often or talking without making eye contact, you will know exactly when that occurred and how to fix it.
## How we built it
We built this using technologies from Hume AI, Cartesia, Google’s Gemini Model, Whisper from OpenAI, and Groq. For the backend, we used Flask with Python; Flask lets us set up the backend in Python, which was ideal since we were making many LLM and API calls to AI models. For the frontend, we decided to go with React because of its component reusability, its integration abilities with Flask, and its efficiency due to the Virtual DOM. The two foundational models we use are Hume AI’s EVI (voice-to-voice) model and the Facial Expressions Model. The EVI model takes in the user’s audio input and converts it into an audio and text embedding. It then uses Hume's emotion mapping technology and identifies scores for the prevalence of 48 different emotions in the interview. We do this sentence by sentence so your emotions are accurately tracked across the interview. This is then used by Gemini within the Hume model configuration to understand and create a response as well as new follow-up questions, which are outputted in the voice of the interviewer. Our second model, the Facial Expressions Model, is used to identify the emotions on the interviewee's face, which are then mapped onto the correct portions of the audio so that we can make sure the voice and the facial emotions are in line. Next, we send all of this data to the backend as soon as the interview is completed and use a Llama 3.1 model that we feed with a custom interview scoring algorithm, based on weights we decided, as well as a holistic response-quality analysis by the LLM, to output feedback and an interview score. This is also then displayed graphically for better understanding.
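The weighting idea behind the score could be sketched as follows; the emotion labels, weight values, and squashing into a 0-100 range are purely illustrative assumptions, not the project's actual algorithm:

```python
# Illustrative weights only; the project's real weighting is not public.
EMOTION_WEIGHTS = {
    "calmness": 1.0,
    "confidence": 1.2,   # hypothetical emotion labels
    "anxiety": -1.5,
    "fear": -1.0,
}

def interview_score(emotion_averages):
    """emotion_averages: mean prevalence (0-1) of each emotion across the
    interview. Applies signed weights and clamps the result to 0-100."""
    raw = sum(EMOTION_WEIGHTS.get(e, 0.0) * v for e, v in emotion_averages.items())
    return max(0, min(100, round(50 + 25 * raw)))

print(interview_score({"calmness": 0.6, "confidence": 0.5, "anxiety": 0.2}))
```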
## Challenges we ran into
One of the biggest challenges we ran into was coordinating and measuring the data from the Hume models. This was because the voice-to-voice model would give emotion outputs on a sentence-by-sentence interval while the facial expressions model would only accept videos in 5-second batches. Recursively splitting and processing the video in 5-second chunks, which we also did using Whisper, and mapping that onto the voice data was quite difficult because there was no simple way to achieve it. This led to some complicated dictionaries nested within dictionaries and many calls in the backend to process the data accurately. Another challenge was to come up with a custom algorithm to determine which emotions and facial expressions should have what sort of weights in our model and how much of an effect they should have on our overall score's equation. And lastly, we had a little bit of trouble making sure the web sockets were routing our traffic correctly and opening and closing when we wanted them to.
## Accomplishments that we're proud of
Some major accomplishments that we were proud of were actually integrating AI personas within our application with voices that are super realistic. We were also really proud of taking on the challenge of essentially merging the two Hume models which was a major challenge that even the Hume team is currently working on. And lastly, we thoroughly enjoyed making a project that we ourselves would use to improve our interview preparation skills in the future.
## What we learned
We learned the importance of having a deep understanding of the model architecture and the input and output sequences of models for projects that are closely related to maximizing the model’s potential. We also learned about handling binary file data from frontend to backend and vice versa. We got the chance to delve into web socket programming and understand the importance of clear communications between the data types that are handled in the backend
## What's next for SharkProof
Something we really wanted to implement but didn’t get the chance to do was create AI deepfakes of the interviewers. This would offer a complete persona of the interviewer, essentially building muscle memory for the founder which could be tapped into when they go into the actual interview. One other feature we wanted to incorporate was the ability for students to add their resume and job description, allowing for the interviewer to be pre-aware of their skills and experience level and ask questions based on those inputs. That would lead to the ideal interview simulation environment.
|
winning
|
## 💡 Inspiration 🌍
Our inspiration stems from a deep commitment to inclusivity. Recognizing the challenges faced by disabled individuals in accessing the internet, our team set out to create a solution that empowers users to browse effortlessly using only one’s face. Our goal is to break down barriers and provide a seamless online experience for everyone.
## 🎙️ What Gazy does 🚀
Gazy revolutionizes web browsing by allowing users to navigate entirely with their face. Whether it's blinking to click, tilting to scroll, or even talking to type, Gazy provides a comprehensive and intuitive interface that caters specifically to the needs of disabled individuals. It opens up a world of possibilities, ensuring that browsing the internet becomes a more accessible and enjoyable experience.
## 💬 How we built Gazy 🛠️
The front-end was developed on React and the user authentication was built on Firebase. The backend was developed on Python, we harnessed the power of MediaPipe to implement real-time face detection and landmark recognition. To play audio, we utilized PyAudio, and we leveraged Google Cloud’s Speech to Text to convert voice commands to text. Lastly, PyGUI was used to craft the actual mouse movement, scrolling, clicking, and typing!
## 🚧 Challenges we ran into 🚧
Throughout this hackathon, we ran into a plethora of challenges. The main challenge proved to be iris tracking. The intricacies required to accurately track the movement of the iris and project that movement onto the 2D screen as mouse movement proved to be difficult. Additionally, identifying a dependable reference point for facial gestures presented its own set of challenges. Our team iteratively tested different scaling factors and points in order to optimize the result. Finally, during the development process, some of our teammates fell ill. However, overcoming these health challenges highlighted the resilience of our team members.
## 🌟 Accomplishments that we're proud of 🎉
We are proud of developing a fully deployed, functional product in only 36 hours. Our team worked cohesively, leveraging each member’s strengths to streamline the development process. Specifically, our team takes pride in navigating the challenge of facial movement in innovative ways. By thinking outside the box, our team crafted solutions by comparing the delta x and y coordinates and taking the arctangent to find the angle between the head and the y-axis. These innovative methods not only demonstrate our problem-solving skills, but also our persistence to our cause.
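That delta-and-arctangent idea can be sketched in a few lines of Python; the specific landmarks, threshold, and action names are assumptions for illustration:

```python
import math

def head_tilt_degrees(top_landmark, bottom_landmark):
    """Given two facial landmarks as (x, y) pixel coordinates (e.g. forehead
    and chin), estimate the head's tilt from the y-axis by taking the
    arctangent of the x/y deltas."""
    dx = top_landmark[0] - bottom_landmark[0]
    dy = top_landmark[1] - bottom_landmark[1]
    return math.degrees(math.atan2(dx, dy))

def tilt_action(angle, threshold=12.0):
    # Map a sufficiently large tilt to a scroll command; otherwise do nothing.
    if angle > threshold:
        return "scroll_one_way"
    if angle < -threshold:
        return "scroll_other_way"
    return None

print(tilt_action(head_tilt_degrees((320, 120), (310, 260))))
```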
## 🚀 What we learned 📚
We learned a lot about OpenCV! Specifically, utilizing OpenCV in tandem with a variety of Python libraries and pretrained models to perform a variety of tasks on the user's face through the process of landmarking. Furthermore, our front-end developer learned to create a fully responsive website on React to complement our software.
## 💫 What's next for Gazy 🌟
Though fully functional, our mouse movement is a little janky, and the eye tracking is not fully complete. Our goal in the future is to smoothen out the movement of the mouse, projecting the user's gaze directly onto the screen and completely eliminating the need to move the mouse. Furthermore, we wish to implement voice activated commands to facilitate the process of navigating the app, and possibly an emergency alarm button for hospitals, in the case that patients are unable to move.
## 🌐 Best Domain Name from Domain.com
As a part of our project, we registered helpinghand.select using GoDaddy! You can also access it [here](https://www.helpinghand.select/).
|
## Inspiration
The ability to easily communicate with others is something that most of take for granted in our everyday life. However, for the millions of hearing impaired and deaf people all around the world, communicating their wants and needs is a battle they have to go through every day. The desire to make the world a more accessible place by bringing ASL to the general public in a fun and engaging manner was the motivation behind our app.
## What it does
Our app is essentially an education platform for ASL that is designed to also be fun and engaging. We provide lessons for basic ASL such as the alphabet, with plans to introduce more lessons in the future. What differentiates our app and makes it engaging is that users can practice their ASL skills right in the app: with any new letter or word they learn, the app uses their webcam along with AI to instantly tell users when they are making the correct sign. The app also has a skills game that puts what they learnt to the test in a time trial, allowing users to earn points for every signed letter/word. There is also a leaderboard so that users can compete globally and with friends.
## How we built it
Our app is a React app that we built with different libraries such as MUI, React Icons, Router, React-Webcam, and most importantly Fingerpose along with TensorflowJS for all our AI capabilities to recognize sign language gestures in the browser.
## Challenges we ran into
Our main struggle within this app was implementing Tensorflowjs as none of us have experience with this library prior to this event. Recognizing gestures in the browser in real time initially came with a lot of lag that led to a bad user experience, and so it took a lot of configuring and debugging in order to get a much more seamless experience.
## Accomplishments that we're proud of
As a team we were initially building another application with a similar theme that involved hardware components, and we had to pivot quite late due to some unforeseen complications, so we're proud of being able to turn it around in such a short amount of time and make a good product that we would be proud to show anyone. We're also proud of building a project with real-world use, one that we all feel strongly about and that we think really does need a solution.
## What we learned
Through this experience we all learned more about React as a framework, in addition to real time AI with Tensorflowjs.
## What's next for Battle Sign Language
Battle Sign Language has many more features that we would look to provide in the future, we currently have limited lessons, and our gestures are limited to the alphabet, so in the future we would increase our app to include more complex ASL such as words or sentences. We would also look forward to adding multiplayer games so that people can have fun learning and competing with friends simultaneously.
|
## Inspiration
This project was inspired by the rising number of people living with dementia. Symptoms of dementia can be temporarily improved by regularly taking medication, but one of the core symptoms of dementia is forgetfulness. Moreover, patients with dementia often need a caregiver, who is often a family member, to manage their daily tasks. This takes a great toll on the caregiver, who is at higher risk for depression, high stress levels, and burnout. To alleviate some of these problems, we wanted to create an easy way for patients to take their medication, while providing ease and reassurance for family members, even from afar.
## Purpose
The project we have created connects a smart pillbox to a progressive app. Using the app, caregivers are able to create profiles for multiple patients, set and edit alarms for different medications, and view if patients have taken their medication as necessary. On the patient's side, the pillbox is not only used as an organizer, but also as an alarm to remind the patient exactly when and which pills to take. This is made possible with a blinking light indicator in each compartment of the box.
## How It's Built
Design: UX Research: We looked into the core problem of Alzheimer's disease and the prevalence of it. It is estimated that half of the older population do not take their medication as intended. It is a common misconception that Alzheimer's and other forms of dementia are synonymous with memory loss, but the condition is much more complex. Patients experience behavioural changes and slower cognitive processes that often require them to have a caretaker. This is where we saw a pain point that could be tackled.
Front-end: NodeJS, Firebase
Back-end: We used Azure to host a nodeJS server and Postgres database that dealt with the core scheduling functionality. The server would read, write, and edit all the schedules and pillboxes. It would also decide when the next reminder was and ask the Raspberry Pi to check it. The Pi also hosted its own nodeJS server that would respond to the Azure server's requests to check if the pill had been taken by executing a Python script that directly interfaced with the general-purpose input/output pins.
Hardware: Raspberry Pi: We wired a microswitch and an LED into the pillbox and programmed them in Python to blink at a specified date and time, and to stop blinking either after approximately 5 seconds (recorded as a pill not taken) or when the pillbox is opened and the microswitch opens (recorded as a pill taken).
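To make that reminder loop concrete, here is a minimal Python sketch of the blink-and-check logic on the Pi; the pin numbers, switch polarity, and 5-second timeout are assumptions for illustration, not the exact wiring or code used in nudge.

```python
# Illustrative only -- LED_PIN, SWITCH_PIN, and the switch polarity are assumptions.
import time
import RPi.GPIO as GPIO

LED_PIN = 18      # assumed GPIO pin driving the compartment LED
SWITCH_PIN = 23   # assumed GPIO pin reading the lid microswitch

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def remind_and_check(timeout=5.0):
    """Blink the LED until the lid opens or the timeout passes.
    Returns True if the pill compartment was opened (pill taken)."""
    deadline = time.time() + timeout
    taken = False
    while time.time() < deadline:
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.25)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.25)
        # assumed wiring: the switch holds the input LOW while the lid is
        # closed, so a HIGH reading means the microswitch (and lid) opened
        if GPIO.input(SWITCH_PIN) == GPIO.HIGH:
            taken = True
            break
    GPIO.output(LED_PIN, GPIO.LOW)
    return taken

if __name__ == "__main__":
    print("taken" if remind_and_check() else "missed")
    GPIO.cleanup()
```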
## Challenges
* Most of us are new to hackathons, and we have varying levels of experience with different programming languages, which made collaboration difficult at times.
* Like many others, we faced time constraints around our ideas, our design, and what was feasible within the 24 hours.
* Figuring out how to work with the Raspberry Pi and how to connect it to the Node.js server and React app.
* Automatically schedule notifications from the database.
* Setting up API endpoints
* Coming up with unique designs of the usage of the app.
## Accomplishments
* We got through our first hackathon, woohoo!
* Improving skills that we are strong at, as well as learning our areas of improvement.
* Despite the obstacles we faced, we still managed to carry out thorough research and come up with ideas and a concrete product.
* Actually managed to connect raspberry pi hardware to back-end and front-end servers.
* Push beyond our comfort zones, mentally and physically
## What's Next For nudge:
* Improve on the physical design of the pillbox itself – such as customizing our own pillbox so that the electrical pieces would not come in contact with the pills.
* Maybe adding other sensory cues for the user, such as a buzzer, so that even when the user is a room away from the pillbox, they would still be alerted to take their medication at the scheduled time.
* Review the code and features of our mobile app and conduct user testing to ensure it meets the needs of our users.
* Rest and Reflect
|
losing
|
## Inspiration
We really enjoy Virtual Reality and wanted to work with the Oculus DK2. Combining this with our love for music, we thought it would be interesting and fun to find a way to visualize music in a three-dimensional context.
## What it does
HearVR is a program that combines Machine Learning with Virtual Reality to create an environment where SoundCloud music files can be played and visualized through a frequency spectrum and user-created comments. Music files and corresponding comments are gathered from SoundCloud and a sentiment analysis is performed on the comments. The music files are then played in a virtual three-dimensional environment where each song has a corresponding frequency spectrum and a stream of comments that are color-coordinated to represent how positive or negative it is. The user can traverse the virtual space to explore different songs.
## How we built it
We wrote a Python script that uses multiprocessing to concurrently download and process SoundCloud files while retrieving SoundCloud comments and communicating with an Azure web service that runs sentiment analysis on each comment, scoring it based on how positive or negative it is. Each music file's comments, the comments' scores, and additional information about the comments are written to a Comma-Separated Values (CSV) file. The CSV file for each song is accessed by Unity, which uses C# (with Visual Studio as an IDE) to parse the CSV and create the corresponding frequency spectrum and comments. We then designed the virtual environment in Unity to produce a visual layout that displays the frequency spectrum and streams the comments according to their timestamps on SoundCloud for each song.
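To give a flavor of that preprocessing step, here is a minimal sketch of how a per-song CSV could be produced with Python's multiprocessing; the `fetch_comments` and `score_comment` helpers are hypothetical stand-ins for the SoundCloud and Azure calls, not the actual code we ran.

```python
# Sketch only: fetch_comments() and score_comment() are hypothetical stand-ins
# for the SoundCloud and Azure sentiment calls.
import csv
from multiprocessing import Pool

def fetch_comments(track_id):
    # placeholder: would hit the SoundCloud API for (timestamp, text) pairs
    return [(12.5, "love this drop"), (48.0, "too repetitive")]

def score_comment(text):
    # placeholder: would POST the text to the Azure sentiment web service
    return 0.5

def process_track(track_id):
    rows = [(ts, text, score_comment(text)) for ts, text in fetch_comments(track_id)]
    with open(f"{track_id}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp_seconds", "comment", "sentiment"])
        writer.writerows(rows)
    return track_id

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # download/process tracks concurrently
        print(pool.map(process_track, ["track_a", "track_b", "track_c"]))
```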
## Challenges we ran into
Since we were working with VR, the technology we were using was very immature. We initially started out with Unreal Engine for our project; however, we quickly found that Unreal's audio engine was buggy and unreliable. After too many hours, we switched to Unity, a tool which none of us had worked with. Unity was a huge learning curve, but we pushed through. Then we hit more issues: Unity can't decode MP3s, so our plan to stream from SoundCloud was over. Instead, we did some trickery using Python to preprocess the MP3s into WAV files before feeding them to Unity. On the backend, we struggled through Microsoft Azure, which was also a new technology to us.
## Accomplishments that we're proud of
We're really proud of combining our interests in Machine Learning and Virtual Reality together to create a unique program that enhances user experience in listening to music.
## What we learned
We learned new technologies such as Microsoft Azure, Unity, and how to integrate these different technologies together. Additionally, we learned a new language in C#.
## What's next for HearVR
We would like to make HearVR a world that generates music automatically based on the person's preferences and the songs they have listened to. In addition, we would like to make this a networked experience, so multiple people can listen in on the same session.
|
## Inspiration
We wanted to create a device that eases the lives of people with disabilities, and with AR becoming mainstream, it seemed only proper to build it.
## What it does
Our AR headset converts speech to text and displays it in real time on the display, allowing the user to read what the other person is telling them. This makes communication easier, since the user no longer has to read lips.
## How we built it
We used IBM Watson API in order to convert speech to text
## Challenges we ran into
We attempted to set up our system using Microsoft's Cortana and its available API, but after struggling to get the libraries to work we had to resort to an alternative method.
## Accomplishments that we're proud of
Being able to use IBM Watson and Unity to create a working prototype, using the Kinect as the webcam and the Oculus Rift as the headset, thus creating an AR headset.
## What we learned
## What's next for Hear Again
We want to make the UI better, improve the speed of the speech-to-text recognition, and port our project over to the Microsoft HoloLens for the most nonintrusive experience.
|
## Inspiration; We want to test the boundaries of Virtual Reality and real life. We want to know if you can immerse yourself so strongly in a virtual environment that your sensory perception is affected. Can you *feel* the wind from the virtual environment? Ask yourself if you have goosebumps because of how cold the weather is...virtually? We're going to show you **what the *world* is feeling.**
## What it does; Project HappyMedia interacts with the user on a web application that accesses information about what the world is *currently* feeling. On the backend, this is moved into a database (using MongoDB) and the most current (today's) rating is recalled. It then simulates the feelings of the world in a virtual world built in Unity. We create a mood-sensitive environment through landforms, weather, time of day, etc. The viewer doesn't get to know the mood of the world until they put on the Oculus and *feel* it. Our Oculus simulation currently runs separately from the web application because we had no access to an Oculus and wouldn't have been able to test the functionality of the front-end web application if it ran straight to the Oculus.
## How I built it; The Oculus environments were built in Unity using the Oculus and Unity APIs. The front end was built using Sublime as a text editor, with MongoDB and Node.js. The API we used to get global happiness takes people's input from all across the world and updates with respect to time zones (more than once a day); it is built for mood tracking across the globe. It gives us the global rate of happiness and updates as the happiness factor goes up and down throughout the day.
## Challenges I ran into; We ran into a lot of challenges with the database (getting values OUT of the database using indexing, in particular). We really wished that MongoDB had had a representative who could have assisted us, because there were some small issues we couldn't find enough community support for on the internet that they could have quickly resolved. Of course, we cannot reiterate enough how hard it is to try to develop for hardware without having it on hand to debug and test. We were really interested in using it from the beginning, which is why we persisted with the project regardless, and we hope it doesn't affect the judging too much that we are not able to provide a full demo.
## Accomplishments that I'm proud of; Every single team member worked with technology and software that was challenging for them, and the project itself was definitely challenging for us from the beginning. We knew that it would have been logical to do a web app OR an Oculus simulation and that linking them was going to be very tricky; however, we were convinced that our idea was extremely worthwhile and that any progress we made could later be improved and updated. It's definitely an idea that we are proud to work on and want to continue to see through to the finish.
## What I learned; We all learned an incredible amount from what we worked with in our separate tasks: Unity, Mongo, Node.js, etc. We also learned a lot about modularizing, not really by choice but simply because we couldn't build the project to work seamlessly from start to finish without the Oculus; therefore we had to separate it in a manner that could be stitched together easily at a later date. We definitely learned a lot from the talks and workshops along the way, and I think that is a large reason we were so motivated not to give up on the two tasks that are independent of each other but really do go together to make a set. Simply creating an Oculus Rift/Unity world simulation has been done before; it lacks creativity and purpose and mostly makes use of Unity's terrain builder. On the other hand, a web app simply returning the current happiness rating of the world is definitely cool, but how long before you forget about it and never use it? It would make almost no impression on the user; it could be as forsaken as checking the weather. Together they stimulate interest, curiosity, and mystery about the boundaries of Virtual Reality and human connectivity.
## What's next for HappyMedia; We hope to impress the judges with our application of the Oculus Rift API enough to get the DK2 prizes and with that continue to implement our ideas for this. The amount we have done in one weekend on this project is huge, monumental in fact, and if we can work on things at even half the pace we are going to be beta testing in no time. We have high hopes to get something testable by the end of this year and hopefully get some more Unity/Oculus worlds developed as well.
|
partial
|
## Inspiration
Wanting to explore Canada after the COVID lockdowns, we wanted to know which provinces were safe to travel to. This led us to "COVID Watchdog", a web application that allows us to plan safe and exciting trips.
## What it does
We developed a web application that is a one-stop shop for all COVID information and informs us whether it's recommended to travel to a specific province or region.
## How we built it
By utilizing the MERN stack and the APIs api.opencovid.ca and api.covid19tracker.ca, we were able to gather statistical data for provinces and health regions. We analyze COVID cases, vaccination rates, and population information and, through a built-in model, give a recommendation on whether you should travel to the region/province. Furthermore, using react-simple-maps we display this data in a user-friendly manner.
## Challenges we ran into
Making our application as efficient as possible by speeding up API access calls without slowing down the application. Issues were faced regarding promises and async functions that caused blockers. Some libraries crashed due to the data values being parsed. All issues were thankfully resolved.
## Accomplishments that we're proud of
Coordinating as a team to develop an application that we believe can be useful to all Canadian citizens who wish to plan a holiday or weekend trip. In addition to new learning experiences and working with great teammates, we all seemed to come out with smiles on our faces.
## What we learned
Coordination and more of the complex aspects of the MERN stack. Dealing with large data sets, filtering them, and developing user-friendly models were skills we developed throughout the hackathon. In addition, the short time period kept us on high alert and pushed us to work in a structured and professional manner.
## What's next for COVID WATCHDOG
We hope that COVID Watchdog can later help users plan trips not just in Canada but around the world, in addition to providing health organizations and other parties with useful information as we proceed with the steps following the COVID lockdowns.
|
## Inspiration for this Project
Canada is a beautiful country, with lots of places for people to visit, ranging from natural scenery to a wide range of delicacies and culture. With large variety comes a natural sense of indecisiveness and confusion, so we decided to make an application to help people narrow down their choices. This way you can make an easy decision and take someone with you, because why experience it on your own when you can share it with someone special, right?
## What it does
Our project takes the Google Maps API and, using its data, lets you search and see the ratings and reviews for activities and spots that you may experience with someone special (specifically, "date spots"). This allows you to get the outside opinions of others in just a few clicks, as well as leave your own to help others!
## How we built it
Splitting our squad of four hackers, we decided on a front-end and back-end split, with two of our members using JavaScript and their expertise to iterate on and create specific algorithms and features for our project. They used the Google Maps API to create a search feature that autofills locations, as well as a map and a display that shows images of the place you want to go to. They also wrote the code for the ratings, reviews, and even a login system, so you can have a personal experience. Meanwhile, the front-end team developed the website and themes using Pug, a template engine that compiles to HTML and simplifies the syntax for more efficient, easy-to-read templates. The front-end duo used CSS to style, and HTML to display, the website you have in front of you for convenient accessibility.
## Challenges we ran into
Being a team of mostly newbie hackers, we ran into a lot of challenges and obstacles to tackle. Even just getting started, we ran into slight problems and complications as we decided how to split roles: two of our members were much more experienced with coding, and we could either put the more experienced members on the back end to create more advanced features, or have the two more beginner members do the back end and leave the others to do the front end for an easier time. Instead, we decided to take on the challenge and let the more advanced coders do the back end so we could create a seriously cool and impactful project, while the two others would learn to build the front end. This seemingly simple choice really took us for a run, as the front-end developers ended up spending several hours just learning how to format and code in HTML and CSS while using Pug, but eventually we were able to scrape something together (after about 8 hours, lol). Our seemingly crazy, almost goofy idea started to become more realistic as our back-end duo finished and started implementing our new features into the website.
## Accomplishments that we're proud of
Nearing the end of this project, we had completed a lot of our learning and gained a sense of fluency with many of the new languages and obstacles we had to overcome. This left us with a sense of achievement and comfort, as despite our struggles we had something to show for it. Also, this being most of our first hackathon, we're happy we managed to dish out a unique idea and successfully work on it too (as well as being able to stay awake to complete it).
## What we learned
Honestly, this was really a journey for all of us. We learned many new things, from teamwork skills to new programming languages and techniques. Some of the highlights were the front-end developers learning how to code in HTML and CSS completely from scratch in the span of 6 hours, and the back-end developers learning about the Google Maps API and how to operate and implement it (also starting from nothing).
## What's next for Rate My Date
The next step for Rate My Date, would be to further polish the code, and make sure that there are no bugs, as it was a pretty ambitious project for 24 hours. We were all very excited to be making this idea come true, and are looking forward to its future changes, and potential.
|
## Inspiration
In 2020, Canada received more than 200,000 refugees and immigrants. The more immigrants and BIPOC individuals I spoke to, the more I realized they were aiming only for employment opportunities as cab drivers, cleaners, dock workers, etc. This can be attributed to discriminatory algorithms that discard their resumes, and a lack of a formal network to engage and collaborate in. Corporate Mentors connects immigrants and BIPOC individuals, as mentees, with industry professionals who overcame similar barriers, as mentors.
This promotion of inclusive and sustainable economic growth has the potential to create decent jobs and significantly improve living standards, and can also aid in a seamless transition into Canadian society, thereby ensuring that no one gets left behind.
## What it does
We tackle the global rise in unemployment and the increasing barriers to mobility for marginalized BIPOC communities and immigrants, caused by racist and discriminatory machine learning algorithms and a lack of networking opportunities, by creating an innovative web platform that enables people to receive professional mentorship and access job opportunities that become available through networking.
## How we built it
The software architecture model being used is the three-tiered architecture, where we are specifically using the MERN stack. MERN stands for MongoDB, Express, React, Node, after the four key technologies that make up the stack: React(.js) makes up the top (client-side/front-end) tier, Express and Node make up the middle (application/server) tier, and MongoDB makes up the bottom (database) tier. The system decomposition below explains the relationship better, and the software architecture diagram below details the interaction of the various components in the system.
## Challenges we ran into
The mere fact that we didn't have a UX/UI designer on the team made us realize how difficult it was to create an easy-to-navigate user interface.
## Accomplishments that we're proud of
We are proud of the matching algorithm we created to match mentors with mentees based on their educational qualifications, corporate experience, and desired industry. Additionally, we would also be able to monetize the website utilizing the Freemium subscription model we developed if we stream webinar videos using Accedo.
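To illustrate the general shape of such a matching step (the fields and weights below are assumptions for the sketch, not the values used in our production algorithm), a simple weighted score over industry, education, and experience could look like this:

```python
# Hypothetical fields and weights -- not the production matching algorithm.
WEIGHTS = {"industry": 0.5, "education": 0.3, "experience": 0.2}

def match_score(mentor, mentee):
    score = 0.0
    if mentor["industry"] == mentee["desired_industry"]:
        score += WEIGHTS["industry"]
    if mentor["education_level"] >= mentee["education_level"]:
        score += WEIGHTS["education"]
    # reward mentors who are a few years ahead of the mentee
    if mentor["years_experience"] - mentee["years_experience"] >= 3:
        score += WEIGHTS["experience"]
    return score

def best_matches(mentee, mentors, top_n=3):
    return sorted(mentors, key=lambda m: match_score(m, mentee), reverse=True)[:top_n]

mentors = [
    {"name": "A", "industry": "finance", "education_level": 2, "years_experience": 10},
    {"name": "B", "industry": "tech", "education_level": 3, "years_experience": 7},
]
mentee = {"desired_industry": "tech", "education_level": 2, "years_experience": 1}
print([m["name"] for m in best_matches(mentee, mentors)])  # -> ['B', 'A']
```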
## What's next for Corporate Mentors
1) The creation of a real mentor pool with experienced corporate professionals is the definite next step.
2) Furthermore, the development of the freemium model (4 hrs of mentoring every month) @ $60 per 6 months or $100 per 12 months.
3) Paid Webinars (price determined by the mentor with 80% going to them) and 20% taken as platform maintenance fee.
4) Create a chat functionality between mentor and mentee using Socket.io and add authorization in the website to limit access to the chats from external parties
5) Create an area for the mentor and mentee to store and share files
|
losing
|
## Inspiration
One in every 250 people suffers from cerebral palsy, in which the affected person cannot move their limbs properly and thus requires constant care throughout their lifetime. To ease their way of living, we have made this project, 'para-pal'.
The inspiration for this idea was blended with a number of research papers and a project called Pupil which used permutations to make communication possible with eye movements.
## What it does

**"What if Eyes can Speak? Yesss - you heard it right!"**
Para-pal is a novel idea that tracks patterns in the eye movements of the patient and then converts them into actual speech. We use state-of-the-art iris recognition (dlib) to accurately track the eye movements and figure out the pattern. Our solution is sustainable and very cheap to build and set up. It uses QR codes to connect the caretaker's and the patient's apps.
We enable paralyzed patients to **navigate across the screen using their eye movements**. They can select an action by holding the cursor on it for more than 3 seconds or, alternatively, they can **blink three times to select the particular action**. A help request is immediately sent to the mobile application of the caretaker as a **push notification**.
## How we built it
We've embraced Flutter in our front end to make the UI simple, intuitive, modular, and customizable. The image processing and live-feed detection are done in a separate child Python process. The iris recognition at its core uses dlib, and the output is piped to OpenCV.
We've developed a desktop app (which is cross-platform and also runs on a Raspberry Pi 3) for the patient and a mobile app for the caretaker.
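For a sense of how eye tracking can be bootstrapped from dlib's facial landmarks, here is a standard eye-aspect-ratio blink check in Python; it is a generic sketch (the landmark model path and the 0.2 threshold are assumptions), not our exact pipeline.

```python
# Generic dlib/OpenCV blink check -- not Para-pal's actual pipeline.
import math
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# assumes the standard 68-point landmark model file is available locally
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def _dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def eye_aspect_ratio(shape, idx):
    # idx is 36..41 for the left eye, 42..47 for the right eye
    p = [shape.part(i) for i in idx]
    return (_dist(p[1], p[5]) + _dist(p[2], p[4])) / (2.0 * _dist(p[0], p[3]))

def is_blinking(frame, threshold=0.2):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        left = eye_aspect_ratio(shape, range(36, 42))
        right = eye_aspect_ratio(shape, range(42, 48))
        return (left + right) / 2.0 < threshold   # closed eyes -> small ratio
    return False
```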
We also tried running our desktop application on Raspberry Pi using an old laptop screen. In the future, we wish to make a dedicated hardware which can be cost-efficient for patients with paralysis.


## Challenges we ran into
Building dlib took a significant amount of time, because there were no binaries/wheels and we had to build it from source. Integrating features to enable connectivity and sessions between the caretaker's mobile app and the desktop app was hard. Fine-tuning some parameters of the ML model and preprocessing and cleaning the input was a real challenge.
Since we were from a different time zone, it was challenging to stay awake throughout the 36 hours and make this project!
## Accomplishments that we're proud of
* An actual working application in such a short time span.
* Integrating additional hardware of a tablet for better camera accuracy.
* Decoding the input feed with a very good accuracy.
* Making a successful submission for HackPrinceton.
* Team work :)
## What we learned
* It is always better to use a pre-trained model than making one yourself, because of the significant accuracy difference.
* QR scanning is complex and is harder to integrate in flutter than how it looks on the outside.
* Rather than over-engineering a flutter component, search if a library exists that does exactly what is needed.
## What's next for Para Pal - What if your eyes can speak?
* Easier, prefix-free code patterns for the patient, using an algorithm like Huffman coding.
* More advanced controls using ML that tracks and learns the patient's regular inputs to the app.
* Better analytics to the care-taker.
* More UI colored themes.
|
During the COVID-19 pandemic, time spent at home, time spent not exercising, and time spent alone has been at an all-time high. This is why we decided to introduce FITNER to other fitness nerds like ourselves who struggle to find people to exercise with. As we all know, it is easier to stay healthy and happy with friends.
We created Fitner as a way to help you find friends to go hiking with, play tennis or even go bowling with! It can be difficult to practice the sport that you love when none of your existing friends are interested, and you do not have the time commitment to join a club. Fitner solves this issue by bridging the gap between fitness nerds who want to reach their potential but don't have the community to do so.
Fitner is a mobile application built with React Native for an iOS and Android front end, with Google Cloud / Firebase as the backend. We were inspired by the opportunity to use Google Cloud platforms in our application, so we decided to do something we had never done before: real-time communication. Although it was our first time working with real-time communication, we found ourselves, in real time, overcoming the challenges that came along with it. We are very proud of our work ethic, our resulting application, and our dedication to our first ever hackathon.
Future implementations of our application can include public chat rooms that users may join and plan public sporting events with, and a more sophisticated algorithm which would suggest members of the community that are at a similar skill and fitness goals as you. With FITNER, your fitness goals will be met easily and smoothly and you will meet lifelong friends on the way!
|
## What it does
Blink is a communication tool for those who cannot speak or move, while being significantly more affordable and accurate than current technologies on the market. [The ALS Association](http://www.alsa.org/als-care/augmentative-communication/communication-guide.html) recommends a $10,000 communication device to solve this problem—but Blink costs less than $20 to build.
You communicate using Blink through a modified version of **Morse code**. Blink out letters and characters to spell out words, and in real time from any device, your caretakers can see what you need. No complicated EEG pads or camera setup—just a small, unobtrusive sensor can be placed to read blinks!
The Blink service integrates with [GIPHY](https://giphy.com) for GIF search, [Earth Networks API](https://www.earthnetworks.com) for weather data, and [News API](https://newsapi.org) for news.
## Inspiration
Our inspiration for this project came from [a paper](http://www.wearabletechnologyinsights.com/articles/11443/powering-devices-through-blinking) published on an accurate method of detecting blinks, but it uses complicated, expensive, and less-accurate hardware like cameras—so we made our own **accurate, low-cost blink detector**.
## How we built it
The backend consists of the sensor and a Python server. We used a capacitive touch sensor on a custom 3D-printed mounting arm to detect blinks. This hardware interfaces with an Arduino, which sends the data to a Python/Flask backend, where the blink durations are converted to Morse code and then matched to English characters.
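As a rough illustration of that decoding step (the short/long cutoff and the reduced Morse table below are assumptions, not the values used in the live system), the duration-to-character mapping could be sketched like this:

```python
# Simplified offline version of the decoding idea; cutoff and table are assumptions.
MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

def durations_to_symbols(blink_durations, cutoff=0.35):
    """Map each blink duration (in seconds) to a dot (short) or dash (long)."""
    return "".join("." if d < cutoff else "-" for d in blink_durations)

def decode_character(blink_durations):
    return MORSE.get(durations_to_symbols(blink_durations), "?")

# three short blinks then one long blink -> "...-" -> "V"
print(decode_character([0.10, 0.12, 0.09, 0.60]))
```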
The frontend is written in React with [Next.js](https://github.com/zeit/next.js) and [`styled-components`](https://styled-components.com). In real time, it fetches data from the backend and renders the in-progress character and characters recorded. You can pull up this web app from multiple devices—like an iPad in the patient’s lap, and the caretaker’s phone. The page also displays weather, news, and GIFs for easy access.
**Live demo: [blink.now.sh](https://blink.now.sh)**
## Challenges we ran into
One of the biggest technical challenges building Blink was decoding blink durations into short and long blinks, then Morse code sequences, then standard characters. Without any libraries, we created our own real-time decoding process of Morse code from scratch.
Another challenge was physically mounting the sensor in a way that would be secure but easy to place. We settled on using a hat with our own 3D-printed mounting arm to hold the sensor. We iterated on several designs for the arm and methods for connecting the wires to the sensor (such as aluminum foil).
## Accomplishments that we're proud of
The main point of PennApps is to **build a better future**, and we are proud of the fact that we solved a real-world problem applicable to a lot of people who aren't able to communicate.
## What we learned
Through rapid prototyping, we learned to tackle difficult problems with new ways of thinking. We learned how to efficiently work in a group with limited resources and several moving parts (hardware, a backend server, a frontend website), and were able to get a working prototype ready quickly.
## What's next for Blink
In the future, we want to simplify the physical installation, streamline the hardware, and allow multiple users and login on the website. Instead of using an Arduino and breadboard, we want to create glasses that would provide a less obtrusive mounting method. In essence, we want to perfect the design so it can easily be used anywhere.
Thank you!
|
winning
|
## Inspiration
We were inspired by a shared personal experience.
## What it does
Allows tracking of slack user sentiment while keeping personal data safe.
## How I built it
We built Mediator with a Flask backend to handle REST endpoints and write to the database, Azure to process sentiment analysis, and Slack to send POST requests to our backend upon events. Oh, and Docker.
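For a sense of how that event flow fits together, here is a bare-bones Python sketch of a Flask endpoint receiving Slack events; the route path and the `analyze_sentiment` stub are assumptions, not Mediator's actual code.

```python
# Bare-bones sketch; the /slack/events route and analyze_sentiment() stub are assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyze_sentiment(text):
    # placeholder: would call the Azure Text Analytics sentiment endpoint
    return 0.5

@app.route("/slack/events", methods=["POST"])
def slack_events():
    payload = request.get_json(force=True)
    # Slack's URL-verification handshake echoes the challenge back
    if payload.get("type") == "url_verification":
        return jsonify({"challenge": payload["challenge"]})
    event = payload.get("event", {})
    if event.get("type") == "message" and event.get("text"):
        score = analyze_sentiment(event["text"])
        # here the (user, score) pair would be written to the database
        print(event.get("user"), score)
    return "", 200

if __name__ == "__main__":
    app.run(port=3000)
```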
## Challenges I ran into
Unfortunately, we ran out of time to integrate Microsoft Teams.
## Accomplishments that I'm proud of
Made a fully working product in the 36 hours with only 2 people and 30 minutes of sleep.
## What I learned
Azure and sentiment analysis.
## What's next for Mediator
Implement Microsoft Teams
Add more ways to visualize data
Add a more dynamic way to access data from slack
|
## Inspiration
The team was inspired by the Twitter mood but wanted to make it more powerful.
## What it does
Our product allows users to specify topics of interest then we analyze the popularity, overall sentiment, and compare related topics.
## How we built it
We began by defining the separations between the various components. Then we set off to work on our respective components.
## Challenges we ran into
The performance of the natural language processing tools was initially unusable. However, we were able to optimize their performance using several clever tricks.
Fitting the various components together was a real challenge, due to several necessary tools being implemented in different programming languages. However, the team overcame this using interprocess communication.
## Accomplishments that we're proud of
Delivering a well polished front end experience on top of a powerful backend.
## What we learned
The team learned d3.js as well as the twitter API. The team learned the core concepts of natural language processing.
## What's next for Open Opinion
Further performance optimizations through custom natural language processing models.
|
## A bit about our thought process...
If you're like us, you might spend over 4 hours a day watching *Tiktok* or just browsing *Instagram*. After such a bender you generally feel pretty useless or even pretty sad as you can see everyone having so much fun while you have just been on your own.
That's why we came up with a healthy social media network, where you directly interact with other people who are going through similar problems as you, so you can work through them together. The network itself also comes with tools to cultivate healthy relationships, from **sentiment analysis** to **detailed data visualization** of how much time you spend and how many people you talk to!
## What does it even do
It starts simply by pressing a button: we use **Google OAuth** to take your username, email, and image. From that, we create a webpage for each user with spots for detailed analytics on how you speak to others. From there you have two options:
**1)** You can join private discussions based on the mood that you're currently in. Here you can interact completely as yourself, as it is anonymous. If you don't like the person, they don't have any way of contacting you and you can just refresh away!
**2)** You can join group discussions about hobbies that you might have and meet interesting people that you can then send private messages to! All the discussions are also supervised using our machine learning algorithms to make sure that no one is being picked on.
## The Fun Part
Here's the fun part. The backend was a combination of **Node**, **Firebase**, **Fetch** and **Socket.io**. The ML model was hosted on **Node**, and was passed into **Socket.io**. Through over 700 lines of **Javascript** code, we were able to create multiple chat rooms and lots of different analytics.
One thing that was really annoying was storing data both in **Firebase** and locally in **Node.js** so that we could do analytics while also sending messages at a fast rate!
There are tons of other things that we did, but as you can tell my **handwriting sucks....** So please instead watch the youtube video that we created!
## What we learned
We learned how important and powerful social communication can be. We realized that being able to talk to others, especially under a tough time during a pandemic, can make a huge positive social impact on both ourselves and others. Even when check-in with the team, we felt much better knowing that there is someone to support us. We hope to provide the same key values in Companion!
|
losing
|
We did it! We completed our app for the #nwHacks 2023 hackathon! After downing several cans of red bull, we’re proud to present our app, Slingo!
Slingo is a sign language detection app that helps deaf and mute children communicate with other people. Use Slingo to read sign language using the camera or type your message to get a demo of how to say it in American Sign Language. You can even translate the message into different languages!
I want to give a huge shoutout to Elmer Jr. Balbin, Saurab Sen, and Claire Simbulan because this wouldn’t have been possible without them. Moreover, I want to thank Javier Pérez for coming to support us!
The backend was built with NodeJS; hand pattern detection in Python and Tensorflow; translation with DeepL; data collection with Cheerio; and the frontend using React MUI.
|
## What it does
What our project does is introduce ASL letters, words, and numbers to you in a flashcard manner.
## How we built it
We built our project with React, Vite, and TensorFlowJS.
## Challenges we ran into
Some challenges we ran into included issues with git commits and merging. Over the course of our project we made mistakes while resolving merge conflicts which resulted in a large part of our project being almost discarded. Luckily we were able to git revert back to the correct version but time was misused regardless. With our TensorFlow model we had trouble reading the input/output and getting webcam working.
## Accomplishments that we're proud of
We are proud of the work we got done in the time frame of this hackathon with our skill level. Out of the workshops we attended I think we learned a lot and can't wait to implement them in future projects!
## What we learned
Over the course of this hackathon we learned that it is important to clearly define our project scope ahead of time. We spent a lot of our time on day 1 thinking about what we could do with the sponsor technologies and should have looked into them more in depth before the hackathon.
## What's next for Vision Talks
We would like to train our own ASL image detection model so that people can practice at home in real time. Additionally we would like to transcribe their signs into plaintext and voice so that they can confirm what they are signing. Expanding our project scope beyond ASL to other languages is also something we wish to do.
|
## Inspiration:
Sound is a precious thing. Unfortunately, some people are unable to experience it as a result of hearing loss or of being hearing impaired. However, we firmly believe that communication should be as simple as a flick of the wrist, and we aim to bring simplicity and ease to those affected by hearing loss and impairment.
## What it does:
The HYO utilizes hand gesture input and relays it to an Android-powered mobile device or PC. The unique set of gestures allows a user to select a desired phrase or sentence to communicate with someone using voice-over.
## How we built it:
HYO = Hello + Myo
We used the C++ programming language to write the code for the PC to connect to the MYO and recognize the gestures in a multilevel menu and output instructions. We developed the idea to create and deploy an Android app for portability and ease of use.
## Challenges we ran into:
We encountered several challenges along the way while building our project. We spent large amounts of time troubleshooting code issues. The HYO was programmed using the C++ language which is complex in its concepts.
After long hours of continuous programming and troubleshooting, we were able to run the code and connect the MYO to the computer.
## Accomplishments that we're proud of:
1) Not sleeping and working productively.
2) Working in a diverse group of four complete strangers and being able to collaborate with them and work towards a single goal of success despite belonging to different programs and having completely different skill sets.
## What we learned:
1) Github is an extremely valuable tool.
2) Learnt new concepts in C++
3) Experience working with the MYO armband and Arduino and Edison micro-controllers.
4) How to build an Android app
5) How to host a website
## What's next for HYO WORLD
HYO should be optimized to fit criteria for the average technology consumer. Hand gestures can be implemented to control apps via the MYO armband, a useful and complex piece of technology that can be programmed to recognize various gestures and convert them into instructions to be executed.
|
losing
|
## Inspiration
Our team was determined to challenge a major problem in society and create a practical solution. It occurred to us early on that **false facts** and **fake news** have become a growing problem, due to the availability of information over common forms of social media. Many recent initiatives and campaigns have used approaches such as ML fact checkers to identify and remove fake news across the Internet. Although we have seen this approach improve markedly over time, our group felt that there must be a way to innovate upon the foundations created by the ML.
In short, our aspirations to challenge an ever-growing issue within society, coupled with the thought of innovating upon current technological approaches to the solution, truly inspired what has become ETHentic.
## What it does
ETHentic is a **betting platform** with a twist. Rather than relying on luck, you play against the odds of truth and justice. Users are given random snippets of journalism and articles to review, and must determine whether the information presented within the article is false/fake news, or whether it is legitimate and truthful, **based on logical reasoning and honesty**.
Users must initially trade in Ether for a set number of tokens (0.30 ETH = 100 tokens). One token can be used to review one article. Every article that is chosen from the Internet is first evaluated using an ML model, which determines whether the article is truthful or false. For a user to *win* the bet, their evaluation must match the ML model's. By winning the bet, a user receives a $0.40 gain on the bet. This means a player is very capable of making a return on investment in the long run.
Any given article will only be reviewed by up to 100 unique users. Once the 100 cap has been met, the article is retired and the results are published to the Ethereum blockchain. The results include anonymous statistics on the truth:false evaluation ratio, the article source, and the ML's original evaluation. This data is public, immutable, and has a number of advantages. All results going forward will be capable of improving the ML model's ability to recognize false information, by comparing its assessments to the public's reviews and training the model in a cost-effective, open-source manner.
To summarize, ETHentic is an incentivized, fun way to educate the public about recognizing fake news across social media, while improving the ability of current ML technology to recognize such information. We are improving the two current best approaches to beating fake news manipulation, by educating the public, and improving technology capabilities.
## How we built it
ETHentic uses a multitude of tools and software to make the application possible. First, we drew out our task flow. After sketching wireframes, we designed a prototype in Framer X. We conducted informal user research to inform our UI decisions, and built the frontend with React.
We used **Blockstack** Gaia to store user metadata, such as user authentication, betting history, token balance, and Ethereum wallet ID, in a decentralized manner. We then used MongoDB and Mongoose to create a DB of articles and a counter for the number of people who have viewed any given article. When an article is added, we currently outsource to Google's fact checker ML API to generate a true/false value. This is added to the associated article in Mongo **temporarily**.
Users who wanted to purchase tokens would receive a Metamask request, which would process an Ether transfer to an admin wallet that handles all the money in/money out. Once the payment is received, our node server would update the Blockstack user file with the correct amount of tokens.
Users who perform betting receive instant results on whether they were correct or wrong, and are prompted to accept their winnings from Metamask.
Every time the Mongo DB updates the counter, it checks whether the count = 100. Upon an article reaching a count of 100, the article is removed from the DB and will no longer appear in the betting game. The ML's initial evaluation, the user results, and the source for the article are all published permanently onto an Ethereum blockchain. We used IPFS to create a hash that links to this information, which meant that the cost of storing this data on the blockchain was massively decreased. We used Infura as a way to get access to IPFS without needing a heavier package and library. Storing on the blockchain allows for easy access to useful data that can be used in the future to train ML models at a rate that matches the user base growth.
As for our brand concept, we used a green colour that reminded us of Ethereum Classic. Our logo is Lady Justice - she's blindfolded, holding a sword in one hand and a scale in the other. Her sword was created as a tribute to the Ethereum logo. We felt that Lady Justice was a good representation of what our project meant, because it gives users the power to be the judge of the content they view, equipping them with a sword and a scale. Our marketing website, ethergiveawayclaimnow.online, is a play on "false advertising" and not believing everything you see online, since we're not actually giving away Ether (sorry!). We thought this would be an interesting way to attract users.
## Challenges we ran into
Figuring out how to use and integrate new technologies such as Blockstack, Ethereum, etc., was the biggest challenge. Some of the documentation was also hard to follow, and because of the libraries being a little unstable/buggy, we were facing a lot of new errors and problems.
## Accomplishments that we're proud of
We are really proud of managing to create such an interesting, fun, yet practical potential solution to such a pressing issue. Overcoming the errors and bugs with little well documented resources, although frustrating at times, was another good experience.
## What we learned
We think this hack made us learn two main things:
1) Blockchain is more than just a cryptocurrency tool.
2) Sometimes even the most dubious subject areas can be made interesting.
The whole fake news problem is something that has traditionally been taken very seriously. We took the issue as an opportunity to create a solution through a different approach, which really stressed the lesson of thinking and viewing things in a multitude of perspectives.
## What's next for ETHentic
ETHentic is looking forward to the potential of continuing to develop the ML portion of the project, and making it available on test networks for others to use and play around with.
|
## Inspiration
As more and more blockchains transition to using Proof of Stake as their primary consensus mechanism, the importance of validators becomes more apparent. The security of entire digital economies, people's assets, and global currencies rely on the security of the chain, which at its core is guaranteed by the number of tokens that are staked by validators. These staked tokens not only come from validators but also from everyday users of the network. In the current system there is very little distinguishing between validators other than the APY that each provides and their name (a.k.a. their brand). We aim to solve this issue with Ptolemy by creating a reputation score that is tied to a validator's DID using data found both on and off chain.
This pain point was discovered as our club, being validators on many chains such as Evmos, wanted a way to earn more delegations by putting more effort into pushing the community forward. After talking with other university blockchain clubs, we discovered that the space was seriously lacking the UI and data aggregation processes to correlate delegations with engagement and involvement in a community.
We confirmed this issue by reflecting on our shared experiences as users of these protocols: when deciding which validators to delegate our tokens to on Osmosis, we really had no way of choosing between validators other than judging based on APY or looking them up on Twitter to see what they did for the community.
## What it does
Ptolemy calculates a reputation score based on a number of factors and ties this score to validators on chain using Sonr's DID module. These factors include both on-chain and off-chain metrics. We fetch on-chain validator data from Cosmoscan and assign each validator a reputation score based on the number of blocks proposed, governance votes, number of delegators, and voting power, evaluating each validator with a mathematical formula over the normalized data that gives them a score between 0 and 5. Our project includes not only the equation to arrive at this score but also a web app to showcase what a delegation UI would look like when including this reputation score. We also include mock data that ties in data from social media platforms such as Reddit, Twitter, and Discord to highlight a validator's engagement with the community, although this carries less weight than the other factors.
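As a rough sketch of what normalizing and weighting those on-chain metrics might look like (the weights and sample values below are placeholders, not the actual Ptolemy formula):

```python
# Placeholder weights and sample values -- not the actual Ptolemy formula.
WEIGHTS = {"blocks_proposed": 0.3, "governance_votes": 0.3,
           "delegators": 0.25, "voting_power": 0.15}

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def reputation_scores(validators):
    """Each validator is a dict of the raw on-chain metrics above; returns 0-5 scores."""
    norm = {k: normalize([v[k] for v in validators]) for k in WEIGHTS}
    return [round(5 * sum(WEIGHTS[k] * norm[k][i] for k in WEIGHTS), 2)
            for i in range(len(validators))]

validators = [
    {"blocks_proposed": 1200, "governance_votes": 40, "delegators": 900, "voting_power": 0.03},
    {"blocks_proposed": 300,  "governance_votes": 5,  "delegators": 120, "voting_power": 0.01},
]
print(reputation_scores(validators))  # -> [5.0, 0.0] for these two extremes
```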
## How we built it
First, we started with a design doc, laying out all the features. Next, we built out the design in Figma, looking at different Defi protocols for inspiration. Then we started coding.
We built it using Sonr as our management system for DIDs, React, and Chakra for the front end, and the backend in GoLang.
## Challenges we ran into
Integrating the Sonr API was quite difficult; we had to hop on a call with an engineer from the team to work through the bug. We ended up having to use the GoLang API instead of the Flutter SDK. During the ideation phase, we also had to figure out what off-chain data was useful for choosing between validators.
## Accomplishments that we're proud of
We are proud of learning a new technology stack from the ground up in the form of the Sonr DID system and integrating it into a much-needed application in the blockchain space. We are also proud of the fact that we focused on deeply understanding the validator reputation issue so that our solution would be comprehensive in its coverage.
## What we learned
We learned how to bring together diverse areas of software to build a product that requires so many different moving components. We also learned how to look through many sets of documentation and learn the minimum we needed to hack out what we wanted to build within the time frame. Lastly, we learned to efficiently bring together these different components in one final product that does justice to each of their individual complexities.
## What's next for Ptolemy
Ptolemy is named in honor of the eponymous 2nd Century scientist who generated a system to chart the world in the form of longitude/latitude which illuminated the geography world. In a similar way, we hope to bring more light to the decision making process of directing delegations. Beyond this hackathon, we want to include more important metrics such as validator downtime, jail time, slashing history, and history of APY over a certain time period. Given more time, we could have fetched this data from an indexing service similar to The Graph. We also want to flesh out the onboarding process for validators to include signing into different social media platforms so we can fetch data to determine their engagement with communities, rather than using mock data. A huge feature for the app that we didn't have time to build out was staking directly on our platform, which would have involved an integration with Keplr wallet and the staking contracts on each of the appchains that we chose.
Besides these staking related features, we also had many ideas to make the reputation score a bigger component of everyone's on chain identity. The idea of a reputation score has huge network effects in the sense that as more users and protocols use it, the more significance it holds. Imagine a future where lending protocols, DEXes, liquidity mining programs, etc. all take into account your on-chain reputation score to further align incentives by rewarding good actors and slashing malicious ones. As more protocols integrate it, the more power it holds and the more seriously users will manage their reputation score. Beyond this, we want to build out an API that also allows developers to integrate our score into their own decentralized apps.
All this is to work towards a future where Ptolemy will fully encapsulate the power of DID’s in order to create a more transparent world for users that are delegating their tokens.
Before launch, we need to stream in data from Twitter, Reddit, and Discord, rather than using mock data. We will also allow users to stake directly on our platform. Then we need to integrate with different lending platforms to generate the validator's "reputation score" on-chain, and then we will launch on testnet. Right now we have the top 20 validators; moving forward we will add more. We want to query downtime, jail time, and slashing of validators in order to create a more comprehensive reputation score. Off-chain, we want to aggregate Discord, Reddit, Twitter, and community forum posts to see validators' contributions to the chain they are validating on. We also want to create an API that allows developers to use this aggregated data on their own platforms.
|
## Inspiration
The transit system is lagging behind, and we pay a flat fee no matter the distance we travel.
## What it does
The Align Transit App revolutionizes the transit system by calculating the fee based on the distance travelled.
## How we built it
We built it using React native, Google directions API, Google maps API, and Expo CLI.
## Challenges we ran into
We felt that we did not have enough time to finish this project with all of our features in mind. We are also new to the language.
## Accomplishments that we're proud of
We were able to complete some part of the project.
## What we learned
We learned how to use Javascript, React native, and teamwork.
## What's next for Align Transit App
To finish the payment system, add a feature for police to check payment, and allow the app to give bus drivers the stops requested by all passengers.
|
winning
|
## Inspiration
Deep learning as a tool can have a huge impact on the media industry. With visual and audio content taking up over 70% of the internet, people often struggle to navigate unorganized media content. Our goal is to transform cluttered video content into a simplified and streamlined navigating experience.
## What it does
Gaze organizes unorganized videos to help the user interpret and navigate visual information. It allows the user to 1) search video content by specific inputted text queries, 2) search educational lectures by locating timestamps for corresponding lecture slides, and 3) search audio content and subtitles by specific inputted text queries.
## How We Built It
First, we split the video into smaller scenes by creating an average brightness histogram and calculating entropy for each frame. Scene boundaries were determined by substantially different histogram and entropy results. Once scenes were separated, we sent two frames from each scene to Microsoft's Cognitive Services to get a highly contextual description of the scene. Iterating through each description and clustering keywords into a giant bucket, we were able to provide users a platform to navigate through media content with accuracy and ease.
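A minimal Python/OpenCV sketch of that scene-splitting idea might look like the following; the thresholds are placeholders rather than our tuned values.

```python
# Thresholds below are placeholders, not the tuned values used in Gaze.
import cv2
import numpy as np

def frame_stats(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return p, entropy

def scene_cut_frames(path, hist_thresh=0.4, entropy_thresh=0.8):
    """Return frame indices where the brightness histogram or entropy jumps."""
    cuts, prev, i = [], None, 0
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist, ent = frame_stats(frame)
        if prev is not None:
            if np.abs(hist - prev[0]).sum() > hist_thresh or abs(ent - prev[1]) > entropy_thresh:
                cuts.append(i)          # substantially different -> new scene
        prev = (hist, ent)
        i += 1
    cap.release()
    return cuts
```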
## Challenges I ran into
We were limited to a low-end CPU from Azure, so video processing time took around 50ms per frame, with around 90,000 frames per clip.
## What's next for Gaze
Lots of sleep and tourism.
|
## Inspiration
Ever sit through a long and excruciating video like a lecture or documentary? Is 2x speed too slow for youtube? TL;DW
## What it does
Just put in the link to the YouTube video you are watching, then wait as our Revlo and NLTK powered backend does natural language processing to give you the GIFs from GIPHY that best reflect the video!
## How I built it
The web app takes in a link to a YouTube video. We download the video with pytube and convert it into MP3 audio with ffmpeg. We upload the audio to the Rev speech API to transcribe the video. Then we use NLTK (the Natural Language Toolkit) for Python to process the text. We first perform part-of-speech tagging and frequency detection of different words in order to identify key words in the video. In addition, we identify key words from the title of the video. We pool these key words together to search for GIFs on GIPHY. We then return these results on the React/Redux frontend of our app.
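As a simplified stand-in for that keyword step (the noun-only POS filter and the top-5 cutoff are assumptions, not our exact heuristics), the NLTK portion could be sketched as:

```python
# Simplified stand-in for the keyword step; noun-only filter and top-5 cutoff are assumptions.
from collections import Counter
import nltk
from nltk.corpus import stopwords

# one-time setup: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
# nltk.download("stopwords")

def keywords(transcript, title="", top_k=5):
    tokens = nltk.word_tokenize(f"{title} {transcript}".lower())
    tagged = nltk.pos_tag(tokens)
    stop = set(stopwords.words("english"))
    nouns = [w for w, tag in tagged
             if tag.startswith("NN") and w.isalpha() and w not in stop]
    return [w for w, _ in Counter(nouns).most_common(top_k)]

print(keywords("the octopus changes color to blend into coral reefs",
               title="Amazing Octopus Camouflage"))
```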
## Challenges I ran into
We experimented with different NLP algorithms to extract key words to search for GIFs, one of which was RAKE keyword extraction. However, that algorithm relied on identifying uncommonly occurring words in the text, which did not line up well with finding relevant GIFs.
tf-idf also did not work well for our task because we had only the one document from the transcript rather than a larger corpus.
## Accomplishments that I'm proud of
We are proud of accomplishing the goal we set out to do. We were able to independently create different parts of the backend and frontend (NLP, flask server, and react/redux) and unify them together in the project.
## What I learned
We learned a lot about natural language processing and the applications it has with video. From the Rev API, we learned about how to handle large file transfer through multipart form data and to interface with API jobs.
## What's next for TLDW
Summarizing into 7 gifs (just kidding). We've discussed some of the limitations and bottlenecks of our app with the Rev team, who have told us about a faster API or a streaming API. This would be very useful to reduce wait times because our use case does not need to prioritize accuracy so much. We're also looking into a ranking system for sourced GIFs to provide funnier, more specific GIFs.
|
## Inspiration
With remote learning seeming to be the norm for a significant period of time, many students are finding the transition difficult, particularly because consuming large amounts of online content through hour-long videos and online textbooks isn't the most engaging or effective form of learning. We wanted to build something that helps students learn in a more interactive and efficient manner, aiming to promote conceptual understanding rather than brute memorization.
## What it does
Aitomind (Auto + AI-generated mindmaps) is a web application that transcribes the speech of a video and organizes it into a mind map structure. Users upload a video of their choice, and a couple of minutes later a mind map containing the key concepts of the video is generated. This helps the user understand the structure of the video/lesson as well as the relations between key ideas. Most importantly, each concept has a timestamp so that the user can easily navigate to the part of the video where the concept was discussed.
## How We built it
The core of Aitomind is the natural language processing pipeline that transcribes text from videos and analyzes it to create a mindmap. This was made up of several Azure services, including Azure Text Analytics and speech-to-text, as well as the Azure Machine Learning platform to implement our own models. We used Azure speech-to-text to transcribe the text from the video, then used Azure Text Analytics to do keyword and entity analysis. From there we used our own machine learning model, which is trained on a variety of academic datasets using a word2vec model. This all runs on an Express server written in Node.js. The frontend was built using React and styled with the Bulma CSS library.
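To give a flavor of the word2vec idea behind our keyword-linking step, here is a tiny gensim sketch in Python; the toy corpus and hyperparameters are placeholders rather than our actual training setup.

```python
# Toy corpus and hyperparameters only -- not the real Aitomind training setup.
from gensim.models import Word2Vec

corpus = [
    ["derivative", "slope", "tangent", "function"],
    ["integral", "area", "function", "antiderivative"],
    ["slope", "rate", "change", "derivative"],
]
model = Word2Vec(sentences=corpus, vector_size=32, window=3, min_count=1, epochs=200)

def related(keyword, top_n=3):
    # (word, cosine similarity) pairs, usable for linking mindmap nodes
    return model.wv.most_similar(keyword, topn=top_n)

print(related("derivative"))
```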
## Challenges I ran into
While developing the word2vec model, the data mining process was especially challenging. Since there was no Natural Language Processing dataset made for academic keywords, I had to collect data from a variety of dictionary sources across multiple subjects. As well, getting used to programming in asynchronous javascript and writing a full stack application for the first time was very difficult to say the least
## Accomplishments that I'm proud of
Implementing azure services in our project, especially our own ML model
Writing a polished full stack web application for the first time
## What We learned
How to write a full stack web application
How to deploy custom models on azure
Using asynchronous JavaScript and REST apis
## What's next for Aitomind
We aim to further tune our word2vec model with more datasets to improve its accuracy at detecting related keywords. We will also look into video upload speed and speech-to-text transcription speed, as processing currently takes roughly half the length of the video. Finally, we would like to add a database so users can store and retrieve their own mind maps.
|
partial
|
# butternut
## `buh·tr·nuht` -- `bot or not?`
Is what you're reading online written by a human, or AI? Do the facts hold up? `butternut` is a Chrome extension that leverages state-of-the-art text generation models *to combat* state-of-the-art text generation.
## Inspiration
Misinformation spreads like wildfire these days, and it is only aggravated by AI-generated text and articles. We wanted to help fight back.
## What it does
Butternut is a Chrome extension that analyzes text to determine how likely it is that a given article was AI-generated.
## How to install
1. Clone this repository.
2. Open your Chrome Extensions
3. Drag the `src` folder into the extensions page.
## Usage
1. Open a webpage or a news article you are interested in.
2. Select a piece of text you are interested in.
3. Navigate to the Butternut extension and click on it.
3.1 The text should be auto copied into the input area.
(you could also manually copy and paste text there)
3.2 Click on "Analyze".
4. After a brief delay, the result will show up.
5. Click on "More Details" for further analysis and breakdown of the text.
6. "Search More Articles" will do a quick google search of the pasted text.
## How it works
Butternut is built off the GLTR paper <https://arxiv.org/abs/1906.04043>. It takes any text input and finds out what a text-generation model *would have* predicted at each word/token. This array of possible predictions and their associated probabilities is cross-referenced with the input text to determine the 'rank' of each token: where on the model's list of possible predictions the actual token in the text appears.
Text whose tokens consistently sit near the top of that list is more likely to be AI-generated, because current text-generation models work by selecting the words/tokens with the highest probability given the words before them. Human-written text, on the other hand, tends to have more variety.
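As an illustrative sketch (butternut's backend uses CTRL; here we show the same rank computation with the smaller, public GPT-2 via Hugging Face Transformers, and the helper name is ours):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        step = logits[0, pos]                 # model's predictions for the next token
        actual = ids[0, pos + 1]
        # rank 1 means the actual token was the model's top prediction
        rank = int((step > step[actual]).sum().item()) + 1
        ranks.append((tokenizer.decode(actual), rank))
    return ranks

# text dominated by rank-1 and rank-2 tokens looks machine-generated
print(token_ranks("The quick brown fox jumps over the lazy dog."))
```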
Here are some screenshots of butternut in action with some different texts. Green highlighting means predictable while yellow and red mean unlikely and more unlikely, respectively.
Example of human-generated text:

Example of GPT text:

This was all wrapped up in a simple Flask API for use in a chrome extension.
For more details on how GLTR works please check out their paper. It's a good read. <https://arxiv.org/abs/1906.04043>
## Tech Stack Choices
Two backends are defined in the [butternut backend repo](https://github.com/btrnt/butternut_backend). The salesforce CTRL model is used for butternut.
1. GPT-2: GPT-2 is a well-known general purpose text generation model and is included in the GLTR team's [demo repo](https://github.com/HendrikStrobelt/detecting-fake-text)
2. Salesforce CTRL: [Salesforce CTRL](https://github.com/salesforce/ctrl) (1.6 billion parameters) is bigger than all GPT-2 variants (117 million - 1.5 billion parameters) and is purpose-built for data generation. A custom backend was built to serve it.
CTRL was selected for this project because it is trained on an especially large dataset, meaning it has a larger knowledge base to draw from when discriminating between AI- and human-written texts. This, combined with its greater complexity, enables butternut to stay a step ahead of AI text generators.
## Design Decisions
* Used approachable soft colours to create a warm approach towards news and data
* Used a colour legend to assist users in interpreting the highlighting
## Challenges we ran into
* Deciding how to best represent the data
* How to design a good interface that *invites* people to fact check instead of being scared of it
* How to best calculate the overall score given a tricky rank distribution
## Accomplishments that we're proud of
* Making stuff accessible: implementing a paper in such a way to make it useful **in under 24 hours!**
## What we learned
* Using CTRL
* How simple it is to make an API with Flask
* How to make a chrome extension
* Lots about NLP!
## What's next?
Butternut may be extended to improve on its fact-checking abilities:
* Text sentiment analysis for fact checking
* Updated backends with more powerful text prediction models
* Perspective analysis & showing other perspectives on the same topic
Made with care by:

```
// our team:
{
'group_member_0': [brian chen](https://github.com/ihasdapie),
'group_member_1': [trung bui](https://github.com/imqt),
'group_member_2': [vivian wi](https://github.com/vvnwu),
'group_member_3': [hans sy](https://github.com/hanssy130)
}
```
Github links:
[butternut frontend](https://github.com/btrnt/butternut)
[butternut backend](https://github.com/btrnt/butternut_backend)
|
## Inspiration
As someone who has always wanted to speak ASL (American Sign Language), I have always struggled with practicing my gestures, as I unfortunately don't know any ASL speakers to have a conversation with. Learning ASL is an amazing way to foster an inclusive community for those who are hearing impaired or deaf. DuoASL is the solution for those who want to practice ASL and verify they are signing correctly!
## What it does
DuoASL is a learning app, where users can sign in to their respective accounts, and learn/practice their ASL gestures through a series of levels.
Each level has a *"Learn"* section, with a short video on how to do the gesture (ie 'hello', 'goodbye'), and a *"Practice"* section, where the user can use their camera to record themselves performing the gesture. This recording is sent to the backend server, where it is validated with our Action Recognition neural network to determine if you did the gesture correctly!
## How we built it
DuoASL is built up of two separate components;
**Frontend** - The frontend was built using Next.js (a React framework), Tailwind, and TypeScript. It handles the entire UI, as well as video collection during the *"Practice"* section, uploading the recording to the backend.
**Backend** - The backend was built using Flask, Python, Jupyter Notebook, and TensorFlow. It runs as a Flask server that communicates with the frontend and stores the uploaded video. Once a video has been uploaded, the server runs the Jupyter Notebook containing the action-recognition neural network, which uses OpenCV and TensorFlow to apply the model to the video and determine the most prevalent ASL gesture. The notebook saves this output to an array, which the Flask server reads and returns to the frontend.
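A rough sketch of the landmark-extraction step is shown below; the feature layout, 30-frame sequence length, and the commented model call are simplifying assumptions for illustration, not the exact notebook code:

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    # flatten pose + both hands into one vector; zeros where a part was not detected
    pose = (np.array([[p.x, p.y, p.z] for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 3))
    lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, lh, rh])

def video_to_sequence(path, num_frames=30):
    frames = []
    cap = cv2.VideoCapture(path)
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while cap.isOpened() and len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            frames.append(extract_keypoints(results))
    cap.release()
    # shape the TensorFlow model is assumed to expect: (1, frames, features)
    return np.expand_dims(np.array(frames), axis=0)

# gesture = GESTURES[np.argmax(model.predict(video_to_sequence("practice.mp4")))]
```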
## Challenges we ran into
As this was our first time using a neural network and computer vision, it took a lot of trial and error to determine which actions should be detected using OpenCV, and how the landmarks from the MediaPipe Holistic (which was used to track the hands and face) should be converted into formatted data for the TensorFlow model. We, unfortunately, ran into a very specific and undocumented bug with using Python to run Jupyter Notebooks that import Tensorflow, specifically on M1 Macs. I spent a short amount of time (6 hours :) ) trying to fix it before giving up and switching the system to a different computer.
## Accomplishments that we're proud of
We are proud of how quickly we were able to get most components of the project working, especially the frontend Next.js web app and the backend Flask server. The neural network and computer vision setup was pretty quickly finished too (excluding the bugs), especially considering how for many of us this was our first time even using machine learning on a project!
## What we learned
We learned how to integrate a Next.js web app with a backend Flask server to upload video files through HTTP requests. We also learned how to use OpenCV and MediaPipe Holistic to track a person's face, hands, and pose through a camera feed. Finally, we learned how to collect videos and convert them into data to train and apply an Action Detection network built using TensorFlow
## What's next for DuoASL
We would like to:
* Integrate video feedback, that provides detailed steps on how to improve (using an LLM?)
* Add more words to our model!
* Create a practice section that lets you form sentences!
* Integrate full mobile support with a PWA!
|
## Inspiration
In the current media landscape, control over distribution has become almost as important as the actual creation of content, and that has given Facebook a huge amount of power. The impact the Facebook news feed has on the formation of opinions in the real world is so large that it potentially affected the 2016 election decisions, yet these feeds were not always accurate. Our solution? FiB: because with 1.5 billion users, every single tweak in an algorithm can make a change, and we don't stop at just one.
## What it does
Our algorithm is twofold, as follows:
**Content-consumption**: Our Chrome extension goes through your Facebook feed in real time as you browse and verifies the authenticity of posts, whether they are status updates, images, or links. Our backend AI checks the facts within these posts using image recognition, keyword extraction, and source verification, plus a Twitter search to check whether a posted screenshot of a tweet is authentic. Posts are then visually tagged in the top-right corner according to their trust score. If a post is found to be false, the AI tries to find the truth and shows it to you.
**Content-creation**: Each time a user posts or shares content, our chatbot receives a webhook call. The chatbot then uses the same backend AI as content consumption to determine whether the new post contains any unverified information. If so, the user is notified and can choose to either take it down or let it stand.
## How we built it
Our Chrome extension is built in JavaScript and uses web scraping techniques to extract links, posts, and images, which are then sent to our AI. The AI is a collection of API calls whose results we process together to produce a single "trust" factor. The APIs include Microsoft Cognitive Services (image analysis, text analytics, Bing web search), Twitter's search API, and Google's Safe Browsing API. The backend is written in Python and hosted on Heroku. The chatbot was built using Facebook's wit.ai.
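The way the individual signals are combined into one score is sketched below; the weights, signal names, and threshold are illustrative assumptions rather than the exact production logic:

```python
# each verifier returns a score in [0, 1]; weights reflect how much we trust each signal
WEIGHTS = {
    "source_reputation": 0.35,   # Bing web search / domain checks
    "text_analysis": 0.25,       # Microsoft text analytics
    "image_analysis": 0.20,      # Microsoft image analysis / OCR
    "twitter_match": 0.20,       # does the screenshotted tweet actually exist?
}

def trust_factor(signals):
    """Weighted average over whichever signals were available for this post."""
    available = {k: v for k, v in signals.items() if v is not None}
    total_weight = sum(WEIGHTS[k] for k in available)
    if total_weight == 0:
        return None                       # nothing to verify
    return sum(WEIGHTS[k] * v for k, v in available.items()) / total_weight

score = trust_factor({"source_reputation": 0.9, "text_analysis": 0.7,
                      "image_analysis": None, "twitter_match": 1.0})
verdict = "verified" if score is not None and score >= 0.6 else "disputed"
```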
## Challenges we ran into
Web scraping Facebook was one of the earliest challenges we faced. Most DOM elements in Facebook have div ids that constantly change, making them difficult to keep track of. Another challenge was building an AI that knows the difference between a fact and an opinion so that we do not flag opinions as false, since only facts can be false. Lastly, integrating all these different services, in different languages together using a single web server was a huge challenge.
## Accomplishments that we're proud of
All of us were new to Javascript so we all picked up a new language this weekend. We are proud that we could successfully web scrape Facebook which uses a lot of techniques to prevent people from doing so. Finally, the flawless integration we were able to create between these different services really made us feel accomplished.
## What we learned
All concepts used here were new to us. Two people on our time are first-time hackathon-ers and learned completely new technologies in the span of 36hrs. We learned Javascript, Python, flask servers and AI services.
## What's next for FiB
Hopefully this can be better integrated with Facebook and then be adopted by other social media platforms to make sure we stop believing in lies.
|
winning
|
## Inspiration
Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product with several socially impactful use cases. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies, including group insurance, financial literacy, and personal investment.
## What it does
Allows users in shared communities to pool their funds and use our platform to easily invest in stocks and companies they are passionate about, with decreased, shared risk.
## How we built it
* Smart Contract for the transfer of funds on the blockchain made using Solidity
* A robust backend and authentication system made using node.js, express.js, and MongoDB.
* Elegant front end made with react-native and Celo's SDK.
## Challenges we ran into
Unfamiliar with the tech stack used to create this project and the BlockChain technology.
## What we learned
We learned the many new languages and frameworks used. This includes building cross-platform mobile apps on react-native, the underlying principles of BlockChain technology such as smart contracts, and decentralized apps.
## What's next for *PoolNVest*
Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds.
Refine and improve the proof of concept into a marketable MVP and tailor the UI towards the specific use cases as mentioned above.
|
## Inspiration
In traditional finance, banks often swap cash flows from their assets for a fixed period of time. They do this because they want to hold onto their assets long-term, but believe their counter-party's assets will outperform their own in the short-term. We decided to port this over to DeFi, specifically Uniswap.
## What it does
Our platform allows for the lending and renting of Uniswap v3 liquidity positions. Liquidity providers can lend out their positions for a short amount of time to renters, who are able to collect fees from the position for the duration of the rental. Lenders are able to both hold their positions long term AND receive short term cash flow in the form of a lump sum ETH which is paid upfront by the renter. Our platform handles the listing, selling and transferring of these NFTs, and uses a smart contract to encode the lease agreements.
## How we built it
We used solidity and hardhat to develop and deploy the smart contract to the Rinkeby testnet. The frontend was done using web3.js and Angular.
## Challenges we ran into
It was very difficult to lower our gas fees. We had to condense our smart contract and optimize our backend code for memory efficiency. Debugging was difficult as well, because EVM Error messages are less than clear. In order to test our code, we had to figure out how to deploy our contracts successfully, as well as how to interface with existing contracts on the network. This proved to be very challenging.
## Accomplishments that we're proud of
We are proud that in the end after 16 hours of coding, we created a working application with a functional end-to-end full-stack renting experience. We allow users to connect their MetaMask wallet, list their assets for rent, remove unrented listings, rent assets from others, and collect fees from rented assets. To achieve this, we had to power through many bugs and unclear docs.
## What we learned
We learned that Solidity is very hard. No wonder blockchain developers are in high demand.
## What's next for UniLend
We hope to use funding from the Uniswap grants to accelerate product development and add more features in the future. These features would allow liquidity providers to swap yields from liquidity positions directly in addition to our current model of liquidity for lump-sums of ETH as well as a bidding system where listings can become auctions and lenders rent their liquidity to the highest bidder. We want to add different variable-yield assets to the renting platform. We also want to further optimize our code and increase security so that we can eventually go live on Ethereum Mainnet. We also want to map NFTs to real-world assets and enable the swapping and lending of those assets on our platform.
|
## Inspiration
The cryptocurrency market is an industry which is expanding at an exponential rate. Everyday, thousands new investors of all kinds are getting into this volatile market. With more than 1,500 coins to choose from, it is extremely difficult to choose the wisest investment for those new investors. Our goal is to make it easier for those new investors to select the pearl amongst the sea of cryptocurrency.
## What it does
To directly tackle the challenge of selecting a cryptocurrency, our website has a compare function that can add up to 4 different cryptos. All of the information about the chosen cryptocurrencies is pertinent and displayed in an organized way. We also have a news feature for investors to follow the latest news concerning their precious investments. Finally, we have an awesome bot that will answer any questions the user has about cryptocurrency. Our website is simple and elegant, providing a hassle-free user experience.
## How we built it
We started by building a design prototype of our website using Figma. As a result, we had a good idea of our design pattern, and Figma provided us with some CSS code from the prototype. Our frontend is built with React.js and our backend with Node.js. We used Firebase to host our website. We fetched cryptocurrency data from multiple APIs (CoinMarketCap.com, CryptoCompare.com, and NewsApi.org) using Axios. Our website is composed of three components: the coin comparison tool, the news feed page, and the chatbot.
## Challenges we ran into
Throughout the hackathon, we ran into many challenges. First, since we had a huge amount of data at our disposal, we had to manipulate it very efficiently to keep the website fast and performant. Then, there were many bugs we had to solve when integrating Cisco's widget into our code.
## Accomplishments that we're proud of
We are proud that we built a web app with three fully functional features. We worked well as a team and had fun while coding.
## What we learned
We learned to use many new APIs, including Cisco Spark and Nuance Nina. We also learned to always keep a backup plan for when APIs aren't working in our favour. The distribution of the work was good; overall, a great team experience.
## What's next for AwsomeHack
* New stats for the crypto comparison tool, such as Twitter and Reddit follower counts, and tracking GitHub commits to gauge development activity.
* Sign in, register, portfolio, and watchlist.
* Support for desktop applications (Mac/Windows) with electronjs
|
winning
|
## Inspiration
An abundance of qualified applicants lose their chance to secure their dream job simply because they are unable to effectively present their knowledge and skills when it comes to the interview. The transformation of interviews into the virtual format due to the Covid-19 pandemic has created many challenges for the applicants, especially students as they have reduced access to in-person resources where they could develop their interview skills.
## What it does
Interviewy is an **Artificial Intelligence**-based interface that lets users practice their interview skills by providing an analysis of their video-recorded answer to a selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific timestamp in their report.
## How we built it
This interface was built using the MERN stack.
In the backend, we used the AssemblyAI APIs to measure confidence levels and detect covered topics. The frontend uses React components.
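For reference, the AssemblyAI flow looks roughly like the Python sketch below (our backend is Node/Express, so this is a simplified translation; the API key, file name, and option choices are placeholders):

```python
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"                     # placeholder
headers = {"authorization": API_KEY}

# 1) upload the recorded interview
with open("interview.mp3", "rb") as f:
    upload = requests.post("https://api.assemblyai.com/v2/upload", headers=headers, data=f)
audio_url = upload.json()["upload_url"]

# 2) start a transcription job with topic detection enabled
job = requests.post("https://api.assemblyai.com/v2/transcript", headers=headers,
                    json={"audio_url": audio_url, "iab_categories": True}).json()

# 3) poll until the job finishes
while True:
    result = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                          headers=headers).json()
    if result["status"] in ("completed", "error"):
        break
    time.sleep(3)

# per-word timestamps and confidence scores drive the time-stamped report
for w in result.get("words", [])[:10]:
    print(w["start"], w["text"], w["confidence"])
```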
## Challenges we ran into
* Learning to work with AssemblyAI
* Storing files and sending them over an API
* Managing large amounts of data given from an API
* Organizing the API code structure in a proper way
## Accomplishments that we're proud of
• Creating a streamlined Artificial Intelligence process
• Team perseverance
## What we learned
• Learning to work with AssemblyAI, Express.js
• The hardest solution is not always the best solution
## What's next for Interviewy
• Currently, confidence levels are measured by analyzing the words used during the interview. The next milestone of this project is to analyze changes in the interviewee's tone in order to provide more accurate feedback.
• Creating an API for analyzing the video and the gestures of the interviewees
|
## Inspiration
We were motivated to tackle linguistic challenges in the educational sector after juxtaposing our personal experience with current news.
There are currently over 70 million asylum seekers, refugees, and internally displaced people around the globe, a statistic that highlights how many individuals from different linguistic backgrounds are forced to assimilate into a culture and language different from their own. As one of our teammates had sought a new home in a new country, we had a first-hand perspective on how difficult this transition is. In addition, our other team members had volunteered extensively within the educational systems of developing communities, both locally and globally, and saw a similar need among individuals unable to meet their community's linguistic standards.
We also iterated upon our idea to ensure that we are holistically supporting our communities by making sure we consider the financial implications of taking the time to refine your language skills instead of working.
## What it does
Fluently’s main purpose is to provide equitable education worldwide. By providing a user customized curriculum and linguistic practice, students can further develop their understanding of their language. It can help students focus on areas where they need the most improvement. This can help them make progress at their own pace and feel more confident in their language skills while also practicing comprehension skills. By using artificial intelligence to analyze pronunciation, our site provides feedback that is both personalized and objective.
## How we built it
Developing the web application was no easy feat.
As we searched for an AI model to help us on our journey, we stumbled upon Microsoft Azure's cognitive services, which draw on OpenAI's capabilities in language processing. This API gave us the ability to analyze voice patterns and fluency and to transcribe the passages read in the application. Figuring out the documentation, as well as how the AI would interact with the user, was the most important thing to execute properly, since the AI acts as the tutor/mentor for the students. We developed a diagram that breaks down the passage read by the student phonetically and gives each word a score out of 100 for how well it was pronounced, based on the API's internal grading system. As this is our first iteration of the web app, we wanted to explore how much information we could extract so we can decide what is most valuable to display to the user in the future.
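A minimal sketch of that scoring step with the Azure Speech SDK's pronunciation assessment is shown below; the key, region, file name, and reference passage are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="student_reading.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="The quick brown fox jumps over the lazy dog.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)

print("overall pronunciation:", assessment.pronunciation_score)   # out of 100
for word in assessment.words:
    print(word.word, word.accuracy_score)                         # per-word /100 scores
```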
Integrating the API with the web host was a new feat for us as a young team. We were confident in our python abilities to host the AI services and found a library by the name of Flask that would help us write html and javascript code to help support the front end of the application through python. By using Flask, we were able to host our AI services with python while also continuously managing our front end through python scripts.
This made room for the development of our backend systems, Convex and Auth0. Auth0 gives members coming into the application a personalized experience by having them sign into their own account. The account is then stored in the Convex database, which serves as the storage layer for their learning progress and skill development over time. All in all, each component of the application, from the AI models that generate custom passages for the user to the backend that connects the JavaScript pieces with the Python server and streamlines user-data storage, came with its own challenges, but they came together seamlessly: we guide the user from a simple login through the passage generator and speech analyzer, ending with constructive feedback on their fluency and pronunciation.
## Challenges we ran into
As a mostly beginner team, this was our first time working with many of these technologies, especially AI APIs. We needed to be patient working with API keys and to go through an experimental process of small tests before tackling the larger goal we were heading towards. One major issue we faced was how to visualize the data for the user: it was hard to distill the AI's analysis into something that leaves the user confident about what they need to improve. To solve this, we first explored how much information we could extract from the AI; in future iterations we will focus on displaying only the most useful feedback.
Another issue we ran into was integrating Convex into the application. The major difficulty came from developing JavaScript functions that could communicate back to the Python server hosting the site. Thankfully, this was resolved; we are grateful to the Convex mentors at the conference who helped us develop custom JavaScript functions that work seamlessly with our Auth0 authentication and the rest of the application to record users as they come and go.
## Accomplishments that we're proud of:
One accomplishment we are proud of is the implementation of Convex and Auth0 with Flask and Python. Since Python is a less common choice for hosting web servers and isn't the primary target language for either service, we had to piece together a way to fit both into our project, collaborating with the team at Convex to make it work. This gave our web application a strong authentication platform and a database to store user data in.
Another accomplishment was the transition from a React Native application to Flask with Python. As none of the group had seen or worked with Flask before, we really had to hone our ability to learn on the fly and apply what we already knew about Python to make the web app work with this system.
Additionally, we take pride in our work with OpenAI, specifically Azure. We researched our roadblocks in finding a voice recognition AI to implement our natural language processing vision. We are proud of how we were able to display resilience and conviction to our overall mission for education to use new technology to build a better tool.
## What we learned
As beginners at our first hackathon, not only did we learn about the technical side of building a project, we were also able to hone our teamwork skills as we dove headfirst into a project with individuals we had never worked with before.
As a group, we collectively learned about every aspect of coding a project, from refining our terminal skills to working with unique technology like Microsoft Azure Cognitive Services. We also were able to better our skillset with new cutting edge technologies like Convex and OpenAI.
We were able to come out of this experience not only growing as programmers but also as individuals who are confident they can take on the real world challenges of today to build a better tomorrow.
## What's next?
We hope to continue building out the natural language processing applications to offer the technology in other languages. In addition, we hope to integrate other educational resources, such as videos or quizzes, to continue building other linguistic and reading skill sets. We would also love to explore the intersection of gaming and natural language processing to see if we can make the experience more engaging for the user. Finally, we hope to expand the ethical side of the project by building a donation platform that lets users donate money to developing communities and pay the generosity forward, so that others can also benefit from refining their language skills. The money would go to a community in need that uses our platform, funding further educational resources there.
## Bibliography
United Nations High Commissioner for Refugees. “Global Forced Displacement Tops 70 Million.” UNHCR, UNHCR, The UN Refugee Agency, <https://www.unhcr.org/en-us/news/stories/2019/6/5d08b6614/global-forced-displacement-tops-70-million.html>.
|
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and ChartJS. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
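As a simplified sketch of the transcription and diarisation step (the real pipeline streams audio; the file name and speaker counts here are placeholders):

```python
from google.cloud import speech

client = speech.SpeechClient()

with open("meeting.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    diarization_config=speech.SpeakerDiarizationConfig(
        enable_speaker_diarization=True,
        min_speaker_count=2,
        max_speaker_count=6,
    ),
)

response = client.recognize(config=config, audio=audio)

# the last result aggregates word-level speaker tags for the whole clip
for w in response.results[-1].alternatives[0].words:
    print(f"speaker {w.speaker_tag}: {w.word}")
```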
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized but we used trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging as we ran up against some limitations of javascript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration on both the frontend and the backend. We prototyped before coding, introduced animations to improve the user experience, learned (too much) about how computers store numbers (:p), and did a whole lot of stuff in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
|
partial
|
## Inspiration
Everyone gets tired waiting for their large downloads to complete. BitTorrent is awesome, but you may not have a bunch of peers ready to seed it. Fastify, a download accelerator as a service, solves both these problems and regularly enables 4x download speeds.
## What it does
The service accepts a URL and spits out a `.torrent` file. This `.torrent` file allows you to tap into Fastify's speedy seed servers for your download.
We even cache some downloads so popular downloads will be able to be pulled from Fastify even speedier!
Without any cache hits, we saw the following improvements in download speeds with our test files:
```
| | 512Mb | 1Gb | 2Gb | 5Gb |
|-------------------|----------|--------|---------|---------|
| Regular Download | 3 mins | 7 mins | 13 mins | 30 mins |
| Fastify | 1.5 mins | 3 mins | 5 mins | 9 mins |
|-------------------|----------|--------|---------|---------|
| Effective Speedup | 2x | 2.33x | 2.6x | 3.3x |
```
*test was performed with slices of the ubuntu 16.04 iso file, on the eduroam network*
## How we built it
Created an AWS cluster and began writing Go code to accept requests and the front-end to send them. Over time we added more workers to the AWS cluster and improved the front-end. Also, we generously received some well-needed Vitamin Water.
## Challenges we ran into
The BitTorrent protocol and architecture was more complicated for seeding than we thought. We were able to create `.torrent` files that enabled downloads on some BitTorrent clients but not others.
Also, our "buddy" (*\*cough\** James *\*cough\**) ditched our team, so we were down to only 2 people off the bat.
## Accomplishments that we're proud of
We're able to accelerate large downloads by 2-5 times as fast as the regular download. That's only with a cluster of 4 computers.
## What we learned
Bittorrent is tricky. James can't be trusted.
## What's next for Fastify
More servers on the cluster. Demo soon too.
|
## Inspiration
You use Apple Music. Your friends all use Spotify. But you're all stuck in a car together on the way to Tahoe and have the perfect song to add to the road trip playlist. With TrainTrax, you can all add songs to the same playlist without passing the streaming device around or hassling with aux cords.
Have you ever been out with friends on a road trip or at a party and wished there was a way to more seamlessly share music? TrainTrax is a music streaming middleware that lets cross platform users share music without pulling out the aux cord.
## How it Works
The app authenticates a “host” user through their Apple Music or Spotify Premium account and lets them create a party where they can invite friends to add music to a shared playlist. Friends with or without those streaming accounts can port through the host account to queue up their favorite songs. Hear a song you like? TrainTrax uses Button to deep-link songs directly to iTunes, so that amazing song you heard is just a click away from being yours.
## How We Built It
The application is built with Swift 3 and Node.js/Express. A RESTful API lets users create parties, invite friends, and add songs to a queue. The app integrates with Button to deep-link users to songs on iTunes, letting them purchase songs directly through the application.
## Challenges We Ran Into
• The application depended a lot on third party tools, which did not always have great documentation or support.
• This was the first hackathon for three of our four members, so a lot of the experience came with a learning curve. In the spirit of collaboration, our team approached this as a learning opportunity, and each member worked to develop a new skill to support the building of the application. The end result was an experience focused more on learning and less on optimization.
• Rain.
## Accomplishments that we're proud of
• SDK Integrations: Successful integration with Apple Music and Spotify SDKs!
• Button: Deep linking with Button
• UX: There are some strange UX flows involved with adding songs to a shared playlist, but we kicked of the project with a post-it design thinking brainstorm session that set us up well for creating these complex user flows later on.
• Team bonding: Most of us just met on Friday, and we built a strong fun team culture.
## What we learned
Everyone on our team learned different things.
## What's next for TrainTrax
• A web application for non-iPhone users to host and join parties
• Improved UI and additional features to fine tune the user experience — we've got a lot of ideas for the next version in the pipeline, including some already designed in this prototype: [TrainTrax prototype link](https://invis.io/CSAIRSU6U#/219754962_Invision-_User_Types)
|
## Inspiration
The inspiration for this project comes from the need to create a music platform that addresses **privacy**, **safety**, and plagiarism concerns. By storing music on a **blockchain database**, the platform ensures that users' music is safe and secure and cannot be tampered with or stolen. In addition, the platform addresses plagiarism concerns by calculating the similarity of uploaded tracks and ensuring that they are not plagiarized. The use of blockchain technology ensures that the platform is decentralized and there is no single point of failure, making it more resistant to hacking and other security threats. The comparison algorithm helps users discover similar tracks and explore new music without compromising their privacy or the security of their data.
## What it does
Our project is aimed at creating a website where users can upload their music tracks to a blockchain database. The website will use advanced algorithms to compare the uploaded tracks with all the music tracks already stored on the blockchain database. The website will then display the biggest similarity rate of the uploaded track with the music stored on the blockchain.
## How we built it
We are using the provided APIs and tools such as **Estuary**. We also created an advanced comparison algorithm to compute the similarity rate against all the music stored on the blockchain. We have implemented the following features: Music Upload, Blockchain Database, Music Comparison, Similarity Rate, Music Player, and User Dashboard.
## Challenges we ran into
There are several challenges that we may encounter when building Musichain that utilizes blockchain and advanced algorithms.
One of the main challenges is ensuring that the platform is scalable and can handle large amounts of data. Storing music on a blockchain database can be resource-intensive, and as more users upload tracks, the platform must be able to handle the increased load. Thankfully, we use **Estuary** as our blockchain database, which avoids a lot of unnecessary problems and significantly improves read and retrieval speeds.
Another challenge is ensuring that the comparison algorithm is accurate and effective. The algorithm must be able to analyze a large amount of data quickly and accurately. While there are many similar applications on the market, our focus is on providing reliable comparisons rather than recommendations. To achieve this, we have streamlined and simplified the music extraction feature, resulting in a higher accuracy rate and faster program performance. By prioritizing simplicity and efficiency, we aim to provide a superior user experience compared to other applications with similar features.
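A stripped-down sketch of the comparison idea is shown below; the real algorithm looks at rhythm, melody, and harmony, whereas this example only compares averaged chroma features with cosine similarity (the librosa usage and 60-second window are our own illustrative choices):

```python
import numpy as np
import librosa

def chroma_fingerprint(path, seconds=60):
    # load the first minute, extract pitch-class (chroma) features, average over time
    y, sr = librosa.load(path, mono=True, duration=seconds)
    return librosa.feature.chroma_cqt(y=y, sr=sr).mean(axis=1)

def similarity(path_a, path_b):
    a, b = chroma_fingerprint(path_a), chroma_fingerprint(path_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# the uploaded track is compared against every track already stored on the chain
best = max(similarity("upload.mp3", stored) for stored in ["track1.mp3", "track2.mp3"])
print(f"biggest similarity rate: {best:.2%}")
```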
Additionally, ensuring that the platform is secure and free from hacking or other security threats is critical. With sensitive user data and intellectual property at stake, the platform must be designed with security in mind, and appropriate measures must be taken to ensure that the platform is protected from external threats.
Overall, building a music platform that utilizes blockchain and advanced algorithms is a complex undertaking that requires careful consideration of scalability, accuracy, security, and copyright issues.
## Accomplishments that we're proud of
The following are the main features of Musichain:
1. Music Upload: Users will be able to upload their music tracks to the website. The website will accept various file formats such as MP3, WAV, and FLAC.
2. Blockchain Database: The music tracks uploaded by the users will be stored on a blockchain database. This will ensure the security and immutability of the music tracks.
3. Music Comparison: The website will use advanced algorithms to compare the uploaded music track with all the music tracks already stored on the blockchain database. The comparison algorithm will look for similarities in various parameters such as rhythm, melody, and harmonies.
4. Similarity Rate: The website will display the biggest similarity rate of the uploaded track with the music stored on the blockchain. This will help users identify similar tracks and explore new music.
5. Music Player: The website will have a built-in music player that will allow users to play the uploaded music tracks. The music player will have various features such as volume control, playback speed, and equalizer.
6. User Dashboard: The website will have a user dashboard where users can manage their uploaded tracks, view their play count, and see their similarity rates.
In conclusion, our project aims to create a music sharing platform that leverages the power of blockchain technology to ensure the security and immutability of the uploaded tracks. The music comparison feature will allow users to discover new music and connect with other artists.
## What we learned
There are several things that we have learnt from building Musichain platform that utilizes blockchain and advanced algorithms:
1. The power of blockchain: Using blockchain technology to store and manage music content provides a high level of security and immutability, making it a powerful tool for data management and storage.
2. The importance of privacy and security: When dealing with sensitive user data and intellectual property, privacy and security must be prioritized to ensure that user data is protected and secure.
3. The benefits of advanced algorithms: Advanced algorithms can be used to analyze large amounts of data quickly and accurately, providing meaningful recommendations to users and helping them discover new music. In the meantime, we need to streamline our algorithm to achieve pinpoint accuracy and efficiency.
## What's next for Musichain
After two-day hard work, there are still some improvement to be done for Musichain:
1. Refining and improving the compression algorithm: Developing a more efficient and effective compression algorithm could help to reduce the size of music files and improve the platform's storage capabilities.
2. Integrating artificial intelligence: Incorporating artificial intelligence could help to improve the accuracy of the platform's ability to analyze music, as well as enhance the platform's search capabilities.
3. Building partnerships and collaborations: As the platform grows, building partnerships with other companies and organizations in the music industry, as well as with AI and compression algorithm experts, could help to further expand the platform's capabilities.
|
partial
|
## Off The Grid
Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app
# Inspiration
Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wifi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better.
# Technology Used
Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access.
# Challenges
Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing memory of the bit-maps and designing an easy-to-use and beautiful canvas
# Team Members
Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project!
|
## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that is automatically played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which produces different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began by flashing Firmata onto our Arduino so that it could be controlled from Python. Then we defined our port and analog pin 2 so that we could take voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw functions to map certain voltages to certain shapes and colours, then used a for loop to iterate over the array so that a shape would be drawn for each value recorded by the Arduino.
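Putting those two steps together looks roughly like the sketch below; the serial port, sampling timing, and the colour/shape mapping are illustrative assumptions rather than our exact code:

```python
import time
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyUSB0")            # port is a placeholder for your own setup
it = util.Iterator(board)
it.start()
sensor = board.get_pin("a:2:i")            # analog pin 2, input mode

readings = []
for _ in range(200):                        # sample the sound sensor for ~10 seconds
    v = sensor.read()                       # normalized 0.0-1.0, or None before the first read
    if v is not None:
        readings.append(v)
    time.sleep(0.05)

pygame.init()
screen = pygame.display.set_mode((800, 600))
for i, v in enumerate(readings):
    # one shape per reading: colour and radius derived from the voltage level
    colour = (int(255 * v), 80, int(255 * (1 - v)))
    pygame.draw.circle(screen, colour, ((i * 4) % 800, int(v * 599)), max(2, int(v * 20)))
    pygame.display.flip()
```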
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would be to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece, and make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces.
|
## Inspiration
We want to connect the world.
From multiple 24-hour libraries to language tutors and math question centers, Harvard College does a lot to support its students' education. However, one area where the college falls short is in providing the student body with a platform to find others studying for the same classes and form study groups with them.
## What it does
Our app allows students to indicate which classes they are taking and which ones they are currently studying for (or want to be studying for). The user is then shown a map with the locations of other students who are studying for the same class at that time and are also looking for a study partner. This notion of location-based matching around a class you want to study for can be expanded more broadly to anyone looking for people nearby with similar interests.
One unique feature is that it automatically pairs you with peers via geolocation based on your interests. The user only adds their interests; our algorithm does the matching and sends a notification with a suggested "Meeting Location". Later on, we plan to implement meeting-confirmation functionality.
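The pairing logic amounts to filtering by shared classes/interests and distance, roughly as in this Python sketch (the backend is Python on Google Cloud; the field names and the 1 km radius here are assumptions):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    # great-circle distance between two (lat, lon) pairs in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def find_study_partners(user, others, max_km=1.0):
    # others: list of dicts with "classes" (set of course codes) and "location" (lat, lon)
    nearby = [o for o in others
              if user["classes"] & o["classes"]
              and haversine_km(user["location"], o["location"]) <= max_km]
    return sorted(nearby, key=lambda o: haversine_km(user["location"], o["location"]))

me = {"classes": {"CS50", "MATH21A"}, "location": (42.3744, -71.1169)}
matches = find_study_partners(me, [{"classes": {"CS50"}, "location": (42.3736, -71.1190)}])
```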
We recorded a live demo of the application running on an iPhone Xs to display these features.
## How we built it
We used swift, python, google cloud, and Firebase.
## Challenges we ran into
Around 200, but to name a few: design and Swift integration, and GitHub file limits.
## Accomplishments that we're proud of
The idea is really cool. We believe this can be used in multiple settings, not only universities, to help people find partners to study with or hang out with.
## What we learned
We learned about how to connect swift to firebase and we learned how to get GPS location from the phone using CoreLocation. We also learned about implementing apple maps into an ios application and how to create annotations on the map. As well as integration with Apple notifications using Firebase.
## What's next for Conquering Introversion
Building out this peer meeting app to connect people with similar interests and who are in the same classes.
|
winning
|
## Inspiration
Sonr's goal of empowering users to have true digital ownership of their data resonated with our team. By tapping into Sonr's network and analyzing data while it stays in the hands of its owners, we generate value and offer users insights into themselves as digitally connected beings, all in a decentralized manner.
## What it does
Diffusion is a data visualization platform for users to visualize data from their other decentralized applications (dApps) on the Sonr blockchain. Users can choose which dApps they wish to connect to Diffusion, before the AI model analyzes the data and brings attention to potential trends in the user's behavior on the dApps. At the end of the analysis, the input data is not stored on our platform to ensure that the privacy of the user is maintained.
## How we built it
Our brainstorming sessions were conducted in Miro before we began building a lofi prototype of the mobile application in Figma. Sentence-transformers were used to build our AI model and also perform the chat summary feature. Google app engine was then used to deploy our AI model and allow our Flutter application to make API calls to fetch the data before visualizing it on the application using Syncfusion.
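The embedding step looks roughly like this sketch with sentence-transformers; the model name and the "most central message" heuristic are illustrative, not our exact summarisation logic:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

messages = [
    "hey, are we still meeting tomorrow?",
    "yes, 3pm at the library",
    "don't forget to bring the slides",
]
embeddings = model.encode(messages, convert_to_tensor=True)

# pick the message most similar to all the others as a crude one-line summary
centrality = util.cos_sim(embeddings, embeddings).mean(dim=1)
summary = messages[int(centrality.argmax())]
print(summary)
```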
## Challenges we ran into
As Sonr's native messaging application Beam was not yet released to the public, we exported chat history from Telegram to act as the pseudo-dataset as a proof-of-concept.
We were also limited by what the Sonr blockchain could offer. For example, we initially intended to enable users to voluntarily share their data and use that information to aggregate averages and provide more meaningful insights into their personal habits. However, there is currently no way for you to give permission to others to access your data.
Additionally, the existing schemas available on the Sonr blockchain were not suitable for our use case which required machine learning i.e. Lists were not yet supported. Hence, we had to switch to deploying APIs with Flask / App Engine to allow our frontend to query data schemas.
## Accomplishments that we're proud of
Given the limited time and many issues we faced, we are proud that we still managed to adapt our application to still function as intended. Additionally, the team was relatively new to Flutter and we had to quickly learn and adapt to the native Dart programming language. Through pair programming, we managed to code quickly and efficiently, allowing us to cover the many components of our application.
## What we learned
With how new the Web3 space is, we have to constantly learn and adapt to problems and bugs that we face as there is not much help and documentation available for reference.
## What's next for Diffusion.io
As Sonr rolls out more dApps in their ecosystem, support for these dApps should follow suit. Additional metrics that the model will be able to predict can also be constantly developed by improving our AI model.
|
## Inspiration
Everyone has dreams and aspirations. Whether it’s saving up for education, breaking into the music industry as a small artist, or travelling the world. Yet, too often, we don’t have the financial capacity for them to be a reality. We wanted to create an app that helps bridge this gap. Introducing Dream with Us, a network reshaping how people achieve their dreams through decentralized, peer-to-peer transaction funding.
## What it does
Dream with Us is a platform that enables individuals to support the aspirations of creators, entrepreneurs, and everyday dreamers. Through our app, users can browse dreams and aspirations of others. Users can show their support and donate money in the form of cryptocurrency. At the same time, it’s a space for individuals to share their own aspirations and gain support. Our app helps connect and accelerate a community of diverse dreamers.
## How we built it
Our frontend is built using Svelte, JavaScript, and TailwindCSS. The backend is built using Motoko and Coinbase API. Collectively, these allow our app to run smart contracts and manage decentralized transactions. Lastly, the entire app is hosted and deployed on the ICP blockchain. We used two canisters (which act like smart containers similar to Docker) to separate the frontend and backend.
## Challenges we ran into
* Blockchain Transaction APIs: There were limitations with the transaction APIs for blockchain payments and we had to develop custom solutions.
* Integration of ICP with UI Libraries: Since the ICP project folder requires many version-specific dependencies, we ran into compatibility issues with most popular UI libraries and had to forgo using them entirely during development.
* Deployment and Testing: Deploying and testing decentralized applications (dApps) on the ICP, especially containerizing both frontend and backend canisters, was tricky. It required in-depth knowledge of the ICP’s architecture and its nuances with smart contract deployment.
* Initial Development Environment Setup: Setting up the development environment and coordinating between Docker and dfx CLI took time and troubleshooting.
## Accomplishments that we're proud of
We’re proud to have fully deployed both the frontend and backend canisters on the ICP blockchain, making our app completely decentralized. Additionally, we successfully integrated the Coinbase API to handle secure, real-time ICP token transactions between users and dreamers, allowing seamless donations.
## What we learned
We learned about Web3, the decentralized nature of blockchain technology, and how it compares to traditional systems. This gave us a deeper understanding of its potential for peer-to-peer interactions. Furthermore, we gained hands-on experience with the ICP's communication mechanisms between devices and smart contracts, allowing us to build scalable, secure decentralized apps. We also got to meet awesome teams, and people along the way :)
## What's next for Dream with Us
* Implement direct peer-to-peer (P2P) transactions on the blockchain with added security layers like checksums and multi-signature verification
* Introduce tiered subscriptions and investment caps to allow different levels of engagement, from small perks to larger equity-like arrangements for backers.
* Provide analytics for dreamers to track their funding and backer engagement and tools to update supporters on their progress.
|
## Summary
OrganSafe is a revolutionary web application that tackles the growing health & security problem of black marketing of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation for such a pivotal resource.
## Inspiration
The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem, which impacts thousands of people every year who are struggling to find a donor for a desperately needed transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place!
## What it does
OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users can start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms will automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive them.
## How we built it
This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity+Web3.js for Ethereum Blockchain.
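As a rough illustration of the priority matching described above (this is not OrganSafe's actual algorithm; the fields and scoring are placeholders), the backend could rank compatible recipients like this:

```python
# Hypothetical priority-based matching sketch for donated organs.
from dataclasses import dataclass

@dataclass
class Recipient:
    name: str
    blood_type: str
    urgency: int       # higher = more urgent (assumed 1-10 scale)
    days_waiting: int

def match_recipients(organ_blood_type: str, recipients: list[Recipient]) -> list[Recipient]:
    """Return compatible recipients ordered by a simple priority score."""
    compatible = [r for r in recipients if r.blood_type == organ_blood_type]
    # Example scoring: urgency dominates, waiting time breaks ties.
    return sorted(compatible, key=lambda r: (r.urgency, r.days_waiting), reverse=True)

recipients = [
    Recipient("A", "O+", urgency=8, days_waiting=120),
    Recipient("B", "O+", urgency=8, days_waiting=300),
    Recipient("C", "A-", urgency=9, days_waiting=10),
]
print([r.name for r in match_recipients("O+", recipients)])  # ['B', 'A']
```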
## Challenges we ran into
Some of the biggest challenges we ran into were connecting the different components of our project. We had three major components: the frontend, the backend, and the blockchain, which were developed separately and needed to be integrated together. This turned out to be the biggest hurdle in our project. Dealing with the API endpoints and Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of the backend development and setting up API endpoints. Without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to facilitate a user experience. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we worked with (such as using Hooks and State in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology!
## Accomplishments that we're proud of
One notable accomplishment is that every member of our group interfaced with new technology that we had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned
* Fullstack Web Development (with React.js frontend development and Python Flask backend development)
* Web3.0 & Security (with Solidity & Ethereum Blockchain)
## What's next for OrganSafe
After TreeHacks, OrganSafe will first look to tackle some of the potential areas that we did not get to finish during the Hackathon. Our first step would be to finish development of the full-stack web application we intended by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue to use the site even after an individual session. Furthermore, scaling both the site and the blockchain for the application would allow greater usage by a larger audience, allowing more recipients to be matched with donors.
|
partial
|
# The Ultimate Water Heater
February 2018
## Authors
This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan.
## About
Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource.
Through the accruement of numerous APIs and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater, giving a low-cost, easy-to-install device that works in many different situations. We also empower the user to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future.
Some key features we have:
* 90% energy efficiency
* An average energy consumption rate of roughly 10 kW
* Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure
* Clean and easily understood UI for typical household users
* Incorporation of the Internet of Things for convenience of use and versatility of application
* Saving, on average, 5 gallons per shower, or over **100 million gallons of water daily**, in CA alone. \*\*\*
* Cheap cost of installation and immediate returns on investment
## Inspiration
By observing the RhoAI data dump of 2015 Californian home appliance use through R scripts, it becomes clear that water heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends drew important conclusions: many water heaters become large consumers of gases and yet are frequently neglected, most likely due to the trouble in attaining successful installations and repairs.
So we set our eyes on a safe, cheap, and easily accessed water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water-heaters, and would eventually prove to be even better.
## How It Works
Our project essentially operates in several core parts running simultaneously:
* Arduino (101)
* Heating Mechanism
* Mobile Device Bluetooth User Interface
* Servers connecting to the IoT (and servicing via Alexa)
All of these processes run simultaneously and repeat continuously.
The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity.
The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors. Designing the heating device involved heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but had to be accurate for performance reasons--Wolfram Mathematica provided inhumane assistance here. ;)
The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at a high level, abstracting away the complexity of energy analysis and power-grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed into an effective and aesthetically pleasing UI. It also analyzes current and projected energy consumption via the data provided by California ISO to most optimally time the heating process at the swipe of a finger.
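To illustrate the scheduling idea (the price forecast below is made up; the real app works from live California ISO data), picking the cheapest pre-heating window boils down to a small search:

```python
# Illustrative sketch, not production code: given forecast hourly grid prices,
# pick the cheapest contiguous window long enough to pre-heat the tank.
def best_heating_window(hourly_prices, hours_needed):
    """Return (start_hour, total_cost) of the cheapest contiguous window."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(hourly_prices) - hours_needed + 1):
        cost = sum(hourly_prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Made-up $/kWh forecast for the next 12 hours.
forecast = [0.21, 0.19, 0.15, 0.12, 0.11, 0.13, 0.18, 0.25, 0.30, 0.28, 0.22, 0.20]
start, cost = best_heating_window(forecast, hours_needed=2)
print(start, round(cost, 2))  # 3 0.23
```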
The Internet of Things provides even more versatility to the convenience of the application in Smart Homes and with other smart devices. The implementation of Alexa encourages the water heater as a front-leader in an evolutionary revolution for the modern age.
## Built With:
(In no particular order of importance...)
* RhoAI
* R
* Balsamiq
* C++ (Arduino 101)
* Node.js
* Tears
* HTML
* Alexa API
* Swift, Xcode
* BLE
* Buckets and Water
* Java
* RXTX (Serial Communication Library)
* Mathematica
* MatLab (assistance)
* Red Bull, Soylent
* Tetrix (for support)
* Home Depot
* Electronics Express
* Breadboard, resistors, capacitors, jumper cables
* Arduino Digital Temperature Sensor (DS18B20)
* Electric Tape, Duct Tape
* Funnel, for testing
* Excel
* Javascript
* jQuery
* Intense Sleep Deprivation
* The wonderful support of the people around us, and TreeHacks as a whole. Thank you all!
\*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2>
Special thanks to our awesome friends Michelle and Darren for providing moral support in person!
|
## Inspiration
Over one-fourth of Canadians during their lifetimes will have to deal with water damage in their homes. This is an issue that causes many Canadians overwhelming stress from the sheer economical and residential implications.
As an effort to assist with and solve these core issues, we have designed a solution that makes future leaks avoidable. Our prototype system, composed of software and hardware, will ensure house leaks are a thing of the past!
## What is our planned solution?
To prevent leaks, we have designed a system of components that when functioning together, would allow the user to monitor the status of their plumbing systems.
Our system is comprised of:
>
> Two types of leak detection hardware
>
>
> * Acoustic leak detectors: monitors abnormal sounds of pipes.
> * Water detection probes: monitors the presence of water in unwanted areas.
>
>
>
Our hardware components will have the ability to send data to a local network, to then be stored in the cloud.
>
> Software components
>
>
> * Secure cloud to store vital information regarding pipe leakages.
> * Future planned app/website with the ability to receive such information
>
>
>
## Business Aspect of Leakio
On its own, this solution is profitable through selling the hardware to consumers. For insurance companies, however, it is a vital solution with the potential to save millions of dollars.
It is far more economical to prevent a leak than to fix it after it has already happened. Paying the average cost of $10,900 USD to fix water damage or a freezing claim is now avoidable!
In addition to saved funds, our planned system will be able to send information to insurance companies for specific data purposes such as which houses or areas have the most leaks, or individual risk assessment. This would allow insurance companies to more appropriately create better rates for the consumer, for the benefit of both consumer and insurance company.
### Software
Front End:
This includes our app design in Figma, which was crafted using knowledge on proper design and ratios. Specifically, we wanted to create an app design that looked simple but had all the complex features that would seem professional. This is something we are proud of, as we feel this component was successful.
Back End:
PHP, MySQL, Python
### Hardware
Electrical
* A custom PCB is designed from scratch using EAGLE
* Consists of USBC charging port, lithium battery charging circuit, ESP32, Water sensor connector, microphone connector
* The water sensor and microphone are extended from the PCB which is why they need a connector
3D-model
* Hub contains all the electronics and the sensors
* Easy to install design and places the microphone within the walls close to the pipes
## Challenges we ran into
Front-End:
There were many challenges we ran into, especially regarding some technical aspects of Figma. The most challenging part, though, was implementing the design.
Back-End:
This is where most challenges were faced, which includes the making of the acoustic leak detector, proper sound recognition, cloud development, and data transfer.
It was the first time any of us had used MySQL, and we created it on the Google Cloud SQL platform. We also had to use both Python and PHP to retrieve and send data, two languages we are not super familiar with.
We also had no idea how to set up a neural network with PyTorch, and finding the proper data to train on was very difficult.
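For reference, the kind of PyTorch skeleton we were trying to set up looks roughly like this; the layer sizes and the fake feature batch are placeholders standing in for real spectrogram features of recorded pipe audio:

```python
# Minimal PyTorch sketch for classifying "leak" vs "no leak" audio features.
import torch
import torch.nn as nn

class LeakClassifier(nn.Module):
    def __init__(self, n_features=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = LeakClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for features extracted from 1-second .wav chunks.
features = torch.randn(8, 128)
labels = torch.randint(0, 2, (8,))

logits = model(features)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```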
## Accomplishments that we're proud of
Learning a lot of new things within a short period of time.
## What we learned
Google Cloud:
Creating a MySQL database and setting up a Deep Learning VM.
MySQL:
Using MySQL and syntaxes, learning PHP.
Machine Learning:
How to set up Pytorch.
PCB Design:
Learning how to use EAGLE to design PCBs.
Raspberry Pi:
Autorun Python scripts and splitting .wav files.
Others:
Not to leave the recording to the last hour. It is hard to cut to 3 minutes with an explanation and demo.
## What's next for Leakio
* Properly implement audio classification using PyTorch
* Possibly create a network of devices to use in a single home
* Find more economical components
* Code for ESP32 to PHP to Web Server
* Test on an ESP32
|
## Inspiration
Our inspiration for this project is that we, like many people, enjoy taking long showers. However, this is bad for the environment and it wastes water as well. This app is a way for us and other people to monitor their water usage effectively and take steps to conserve water, especially in the face of worsening droughts.
## What it does
Solution for effective day-to-day monitoring of water usage
Most people only know how much water they would use monthly from their water bill
This project is meant to help residents effectively locate which areas of their home are using too much water, potentially helping them assess leaks.
It also helps them save money by locating areas that are conserving water well so they can continue to save water in that area.
## How we built it
We chose to approach this project by using machine learning to help a user conserve their water by having an algorithm predict what sources of water (such as a toilet or a faucet) are using the most amount of water. We came up with a concept for the app and created a wireframe to outline it as well. The app would be paired with the use of sensors, which can be installed in the pipes of the various water sources in the house. We built this project using Google Colab, Canva for the wireframe, and Excel for the data formatting.
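A rough sketch of the Colab-style analysis (the column names and readings below are made up for illustration): aggregate per-source sensor readings and rank which fixtures consume the most water, which the prediction model then builds on:

```python
# Illustrative pandas sketch: rank water sources by total consumption.
import pandas as pd

data = pd.DataFrame({
    "source":  ["toilet", "faucet", "shower", "toilet", "shower", "faucet"],
    "gallons": [1.6, 0.4, 17.2, 1.6, 21.0, 0.7],
})

usage_by_source = data.groupby("source")["gallons"].sum().sort_values(ascending=False)
print(usage_by_source)
# shower    38.2
# toilet     3.2
# faucet     1.1
```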
## Challenges we ran into
We spent a bit of time trying to look for datasets to match our goals, and we ended up having to format some of them differently.
We also had to change how our graphs looked because we wanted to display the data in the clearest way possible.
## Accomplishments that we're proud of
We formatted and used datasets pretty well. We are glad that we created a basic wireframe to model our app and we are proud we created a finished product, even if it's not in a functional form yet.
## What we learned
We learned how to deal with data sets and how to put lines into graphs and we also learned how to analyze graphs at a deeper level and to make sure they matched the goals of our project.
## What's next for Hydrosaver
In the future we hope to create the sensor for this project and find a way to pair it with an accompanying app. We also hope to gain access to or collect more comprehensive data about water consumption in homes.
|
winning
|
## Inspiration
Looking around the world, we see a large number of passionate students and enthusiasts keen to learn more about the latest technologies and disciplines. Case studies have shown that 1 out of 2 students finds themselves in a dilemma of "WHAT TO DO NEXT?" after selecting their field of study. And it is not just students: everyone needs a clear roadmap to follow before starting anything. Instead, we are left with YouTube speakers saying nothing for 10-20 minutes and long essay-type blogs and articles, most of which are not even relatable, and we follow the roadmap only as long as the tab remains open in our browser.
## What it does
To cater to all of the above problems, we have been working for the past many days to devise a solution that not only makes things work, but also presents an interface the user finds familiar. Our solution provides our audience with a platform to create roadmaps, add slots within their roadmaps, share them, and follow others' work. You can also fork other people's roadmaps and change them as per your convenience.
We are providing enterprises and companies with a Premium Roadmaps facility to set a purchase price for a roadmap; this can help them post their courses and gain revenue from them. Next, to provide an intuitive UI for the website, we use vertical timelines to show all the roadmaps: you can see all of your followed roadmaps at once, or filter some combination of them. To give users an insight into how well their roadmap is doing, we have also added line-chart infographics giving an overview of each slot's performance.
We are trying to apply Market Basket Analysis to the dataset of collected roadmaps to give our users more insight into what their followers are taking an interest in. Applying these data analysis strategies, we will rank all the roadmaps on the basis of their clicks and likes.
## How I built it
We first went through different brainstorming techniques and concluded with this idea, which helps the larger developer community. We outlined all of the software requirements for our audience and developed the user flow keeping our users' goals in mind. Then we went through the collaborative stages of developing the user interface in the high-fidelity prototyping tool FIGMA. We based the designs on the idea of a jigsaw puzzle in which we are just attaching pieces to one another and getting an abstracted image as a result. This also helped us make the design more user friendly.
Finally, we came to the development stage of the solution. For the front end, we chose **Next.js**, an open-source React web framework that handles server-side rendering. For the ML model, we chose **Python**, and we uploaded a Jupyter notebook for the model. The back end was done in Spring Boot with **PostgreSQL** as the DBMS.
## How NWHacks Helped
NWHacks gave us a platform not only to socialize with people but also to collaborate with them on projects. Individually we do not have that many valuable skills, but as a team we put together a strategy capable of building such a big solution in a mere 24 hours. Moreover, NWHacks provided us with resources and workshops that helped us not only with applying APIs but also with using Git for version control. People might ask what you can learn in 24 hours; after attending this hackathon, I can vouch for the importance of hackathons in learning.
## Challenges I ran into
Ever since we came up with this idea, we have been looking at competitors working in the same field, and we found that <https://roadmap.sh/> also provides visual roadmaps for the betterment of the developer community. Our biggest challenge was to overtake them with a more intuitive design as well as more services. So we brought the jigsaw-piece idea to the project, which gives users the familiar notion of pieces attaching to one another; this helped us surpass them in user-friendliness. Secondly, we expanded our services and added the ability to make, fork, and follow a roadmap in our solution.
Moreover, we were all new to the field of data analysis and machine learning models, so we took time to learn about it prior to the hackathon and managed to design an Apriori algorithm for our Market Basket Analysis. Finally, to use a Groundswell technology, we decided to deploy our solution on Heroku.
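For illustration, here is how the same market-basket idea looks with the off-the-shelf mlxtend implementation of Apriori on made-up "roadmaps followed per user" data (our own hand-rolled version differs in the details):

```python
# Sketch of Apriori-based market basket analysis on roadmap follows.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

follows = [
    ["python-basics", "machine-learning", "data-analysis"],
    ["python-basics", "machine-learning"],
    ["web-dev", "javascript"],
    ["python-basics", "data-analysis"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(follows), columns=te.columns_)

frequent = apriori(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
# e.g. "users who follow python-basics also tend to follow data-analysis"
print(rules[["antecedents", "consequents", "confidence"]])
```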
## Impact
**#social\_for\_good**
Our solution not only helps developers save time looking for the right roadmaps, but also helps them manage all of their roadmaps at once. Moreover, since we integrate the Apriori machine learning algorithm, people trying to understand their audience can also get help from this platform. The impact this solution creates is not limited to education; everyone, like the chef Arthur in the video, can connect with their audience more directly.
Secondly, the authors writing the best roadmaps will get sponsored by other companies to add their events or services to their roadmaps. This can help the authors earn through these roadmaps, just like the premium users.
## Accomplishments that I'm proud of
We as a team are proud to have built a solution our community has been striving for for a long time. We see thousands of people searching for a good roadmap and insight into the field they chose, a course they want to follow, or a strategy they need to adopt. We built a solution that helps not only developers, but everyone. For example, take Kevin, Martha, and Arthur in the video, and how they solved their problems by using our tool.
## What's next for JigMap
So far we have created a mockup of the complete website with no active users yet. Our first target will be to make it a fully working website with all functionalities for the users. Linking the machine learning models with the website will be our next priority, as we want to make our Apriori algorithm more accurate and stronger, not only suggesting more roadmaps to customers but also giving authors more insight into what their followers are viewing. Finally, we will work with many corporations and companies to launch their courses on our website, and hire campus ambassadors to work with us and post roadmaps for the disciplines in their universities. This idea has a long way to go, not just in the developer community but with the general public too.
|
## Inspiration
As college students, we both agreed that fun trips definitely make college memorable. However, we are sometimes hesitant about initiating plans because of the low probability of plans making it out of the group chat. Our biggest problem was being able to find unique but local activities to do with a big group, and that inspired the creation of PawPals.
## What it does
Based on a list of criteria, we prompt OpenAI's API to generate the perfect itinerary for your trip, breaking down hour by hour a schedule of suggestions to have an amazing trip. This relieves some of the underlying stresses of planning trips such as research and decision-making.
## How we built it
On the front-end, we utilized Figma to develop an UI design that we then translated into React.js to develop a web app. We connect a Python back-end through the utilization of Fast API to integrate the usage of SQLite and OpenAI API into our web application.
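A simplified sketch of the itinerary endpoint (the route, request fields, model name, and prompt wording are illustrative, not our exact code):

```python
# Hypothetical FastAPI endpoint that asks OpenAI for an hour-by-hour itinerary.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # expects OPENAI_API_KEY in the environment

class TripRequest(BaseModel):
    city: str
    hours: int
    interests: list[str]

@app.post("/itinerary")
def make_itinerary(req: TripRequest):
    prompt = (
        f"Plan an hour-by-hour {req.hours}-hour trip in {req.city} "
        f"for a group interested in {', '.join(req.interests)}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"itinerary": response.choices[0].message.content}
```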
## Challenges we ran into
Some challenges we ran into stemmed from our lack of experience in full-stack development. We did not have the strongest technical skill sets needed; thus, we took this time to explore and learn new technologies in order to accomplish the project. Furthermore, we recognized how long the OpenAI API takes to generate text, which leads us to plan, in the future, to stream the generated text using Vercel's SDK for it.
## Accomplishments that we're proud of
As a team of two, we were very proud of our final product and took pride in our work. We came up with a plan and vision that we truly dedicated ourselves to in order to accomplish it. Even though we have not had too much experience in creating a full-stack application and developing a Figma design, we decided to take the leap in exploring these domains, going out of our comfort zone which allowed us to truly learn a lot from this experience.
## What we learned
We learned the value of coming up with a plan and solving problems that arise creatively. There is always a solution to a goal, and we found that we simply need to find it. Furthermore, we both were able to develop ourselves technically as we explored new domains and technologies, pushing our knowledge to greater breadth and depth. We were also able to meet many people along the way that would provide pieces of advice and guidance on paths and technologies to look into. These people accelerated the learning process and truly added a lot to the hackathon experience. Overall, this hackathon was an amazing learning experience for us!
## What's next for PawPals
We hope to deploy the site on Vercel and develop a mobile application of the app. We believe that with streaming technology we can achieve faster response times. Furthermore, with some more work on interconnectivity with friends, budgeting, and schedule adjusting, we will be able to create an application that will definitely shift the travel planning industry!
|
## Inspiration
Our inspiration came from the importance of connecting with family and cherishing proud personal stories. We recognized that many elderly people in nursing homes feel isolated from their families, despite the wealth of memories they carry. These memories hold so much value about family history, wisdom, and identity. By creating a platform that enables them to reflect on and share these moments, we aimed to bridge generational gaps and strengthen family bonds. Through storytelling, we want to foster a tight family bond, ensuring that cherished memories are passed down and that the elderly feel heard, valued, and connected.
We wanted to emphasize the story aspect of these memories. When people want to share their memories with their family, especially virtually, they aren't able to fully relive or cherish that memory- a simple text message can't fully do justice to a fond memory. Thus, we wanted to bring life into these memories that shared within families online and especially provide elderly people who might not meet their families often to have an immersive experience with their family's memories.
## What it does
Memento allows families to document fond memories that they have and share them with the user. Families can upload memories that contain a date, description, and image. We target this product to the elderly in nursing homes, who are often alone and can benefit from having someone like family to talk to. The elderly user can then speak to the application and have a conversation about the details of any memory. The application will also display the most relevant image to the conversation to help improve the experience. This enables the elderly user to feel like they are talking to a family member or someone they know well. It allows them to stay connected with their loved ones without requiring their continuous presence.
## How we built it
We designed Memento to be simple and accessible for both elderly users and their families. For this reason, we used **Reflex** to implement an elegant UI, and implemented a **Chroma** database to store the memories and their embeddings for search. We also integrated Whisper, a speech-to-text model through **Groq’s** fast inference API to decode what the elderly person is saying. Using this input, we query our database, and feed this information through **Gemini**, an LLM developed by Google, to give a coherent response that incorporates information from the families’ inputs. Finally, we used **Deepgram’s** text-to-speech model to convert the LLM’s outputs back to an audio format that we could speak back to the elderly user.
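A condensed sketch of the retrieval step (the memory text and metadata are invented for illustration): store the family's memories in a Chroma collection, then query it with the transcribed speech to find the most relevant memory and its photo:

```python
# Illustrative Chroma usage for memory retrieval; not the full Memento pipeline.
import chromadb

client = chromadb.Client()
memories = client.create_collection(name="memories")

memories.add(
    ids=["1", "2"],
    documents=[
        "Summer 1998: the whole family camping at the lake, grandpa caught a huge trout.",
        "Maria's graduation day in 2015, everyone cried during her speech.",
    ],
    metadatas=[{"image": "lake.jpg"}, {"image": "graduation.jpg"}],
)

transcript = "Do you remember that fishing trip we took?"
result = memories.query(query_texts=[transcript], n_results=1)
print(result["documents"][0][0])   # closest memory, fed to the LLM as context
print(result["metadatas"][0][0])   # tells the UI which photo to display
```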
## Challenges we ran into
* **Integration**: It was difficult to integrate all of the sponsor’s softwares into the final application; we had to pore over documentation while becoming familiar with each API, which led to many hours of debugging.
* **Non-determinism**: Our models were non-deterministic; errors caused by specific outputs from the LLM were hard to replicate. Due to the background noise, we also could not efficiently test our speech-to-text model’s accuracy.
* **Inference speed**: Throughout this application, we make many API calls to large models, such as Whisper, Gemini, and the Aura TTS model. Because of this, we had to find clever optimizations to speed up the inference time to quickly speak back to the elderly user, especially since the WiFi was unusable most of the time.
## Accomplishments that we're proud of
* **Design and User Experience**: We are proud of our design since it encompasses the mood we were aiming for – a warm, welcoming environment, focusing on the good things that happen in life.
* **Large Language Model and Vector Search**: We are especially proud of how the LLM turned out and how well the RAG model worked. We spent lots of time prompting the different components to create the warm, empathetic, and welcoming environment the LLM provides.
* **TTS and STT**: Although we struggled a bit with this part, we are really proud about how it turned out. We feel we did a great job encompassing the ideals of the product by allowing users to reflect on past memories and connect closer with family.
## What we learned
Working with STT and TTS models: many members of our group had never worked with speech-to-text or text-to-speech models, so this was a learning experience for all of us. We learned about the impressive accuracy that the state-of-the-art models are able to achieve but also encountered some of the drawbacks of these models, since many of them don’t work as well with moderate levels of background noise.
How to make a great UI:
## What's next for Memento
Because of time constraints, there were many features and improvements we wanted to implement but could not.
* **Continuous LLM Conversation**: We wanted to be able to talk to the LLM continuously without having to press a microphone button. Due to time constraints, we were not able to implement this feature
* **User Personalization and Customization**: We aimed to personalize the website to users by adding custom themes, colors, and fonts, but we ran out of time to do so.
|
losing
|
## Inspiration
Peer-review is critical to modern science, engineering, and healthcare
endeavors. However, the system for implementing this process has lagged behind
and results in high costs for publishing and accessing material, long
turnaround times reminiscent of snail mail, and shockingly opaque editorial
practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print
server" ([arXiv](https://arxiv.org)) which was the early internet's improvement
upon snail-mailing articles to researchers around the world. This pre-print
server is maintained by a single university, and is constantly requesting
donations to keep up the servers and maintenance. While researchers widely
acknowledge the importance of the pre-print server, there is no peer-review
incorporated, and none planned due to technical reasons. Thus, researchers are
stuck with spending >$1000 per paper to be published in journals, all the while
individual article access can cost as high as $32 per paper!
([source](https://www.nature.com/subscriptions/purchasing.html)). For reference,
a single PhD thesis can contain >150 references, or essentially cost $4800 if
purchased individually.
The recent advance of blockchain and smart contract technology
([Ethereum](https://www.ethereum.org/)) coupled with decentralized
file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io))
naturally lead us to believe that archaic journals and editors could
be bypassed. We created our manuscript distribution and reviewing
platform based on the arXiv, but in a completely decentralized manner.
Users utilize, maintain, and grow the network of scholarship by simply
running a simple program and web interface.
## What it does
arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service.
An author (wallet address) will come with a bomb-ass paper they wrote.
In order to "upload" their paper to the blockchain, they will first
need to add their file/directory to the IPFS distributed file system. This will
produce a unique reference number (DOI is currently used in journals)
and hash corresponding to the current paper file/directory.
The author can then use their address on the Ethereum network to create a new contract
to submit the paper using this reference number and paperID. In this way, there will
be one paper per contract. The only other action the
author can make to that paper is submitting another draft.
Others can review and comment on papers, but an address can not comment/review
its own paper. The reviews are rated on a "work needed", "acceptable" basis
and the reviewer can also upload an IPFS hash of their comments file/directory.
Protection is also built in such that others can not submit revisions of the
original author's paper.
The blockchain will have a record of the initial paper submitted, revisions made
by the author, and comments/reviews made by peers. The beauty of all of this is
one can see the full transaction histories and reconstruct the full evolution of
the document. One can see the initial draft, all suggestions from reviewers,
how many reviewers, and how many of them think the final draft is reasonable.
## How we built it
There are 2 main back-end components, the IPFS file hosting service
and the Ethereum blockchain smart contracts. They are bridged together
with ([MetaMask](https://metamask.io/)), a tool for connecting
the distributed blockchain world, and by extension the distributed
papers, to a web browser.
We designed smart contracts in Solidity. The IPFS interface was built using a
combination of Bash, HTML, and a lot of regex!
Then we connected the IPFS distributed net with the Ethereum Blockchain using
MetaMask and Javascript.
## Challenges we ran into
On the Ethereum side, setting up the Truffle Ethereum framework and test
networks was challenging. Learning the limits of Solidity and constantly
reminding ourselves that we had to remain decentralized was hard!
The IPFS side required a lot of clever regex-ing. Ensuring public access
to researchers' manuscripts and review histories requires proper identification
and distribution on the network.
The hardest part was using MetaMask and Javascript to call our contracts
and connect the blockchain to the browser. We struggled for hours
trying to get JavaScript to deploy a contract on the blockchain. We were all
new to functional programming.
## Accomplishments that we're proud of
Closing all the curly bois and close parentheticals in javascript.
Learning a whole lot about the blockchain and IPFS. We went into this
weekend wanting to learning about how the blockchain worked, and came out
learning about Solidity, IPFS, Javascript, and a whole lot more. You can
see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf)
## What we learned
We went into this knowing that there was a way to write smart contracts,
that IPFS existed, and with minimal JavaScript knowledge.
We learned intimate knowledge of setting up Ethereum Truffle frameworks,
Ganache, and test networks along with the development side of Ethereum
Dapps like the Solidity language, and javascript tests with the Mocha framework.
We learned how to navigate the filespace of IPFS, hash and organize directories,
and how the file distribution works on a P2P swarm.
## What's next for arXain
With some more extensive testing, arXain is ready for the Ropsten test network
*at the least*. If we had a little more ETH to spare, we would consider launching
our Dapp on the Main Network. arXain PDFs are already on the IPFS swarm and can
be accessed by any IPFS node.
|
## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy-store-collect dust inspired us to develop LendIt, a product that aims to stem the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API.
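For a sense of the camera pipeline, here is a stripped-down OpenCV detection step (the full system goes further and matches the detected face against the borrower's profile, which is omitted here):

```python
# Illustrative OpenCV face-detection sketch for the smart locker camera.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # Pi camera / webcam
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Detected {len(faces)} face(s)")  # unlock logic would hang off this
```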
For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system.
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like Raspberry Pi, Flutter, the XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
|
## Inspiration
As avid readers ourselves, we love the work that authors put out, and we are deeply saddened by the relative decline of the medium. We believe that democratizing the writing process and giving power back to the writers is the way to revitalize the art form of literature, and we believe that utilizing blockchain technology can help us get closer to that ideal.
## What it does
LitHub connects authors with readers through eluv.io's NFT trading platform, allowing authors to sell their literature as exclusive NFTs and readers to have exclusive access to their purchases on our platform.
## How we built it
We utilized the eluv.io API to enable upload, download, and NFT trading functionality for our backend. We leveraged CockroachDB to store user information, used HTML/CSS to create our user-facing frontend, and deployed our application on Microsoft Azure.
## Challenges we ran into
One of the main challenges we ran into was understanding the various APIs that we were working with over a short period of time. As this was our first time working with NFTs/blockchain, eluv.io was a particularly new experience to us, and it took some time, but we were able to overcome many of the challenges we faced thanks to the help from mentors from eluv.io. Another challenge we ran into was actually connecting the pieces of our project together as we used many different pieces of technology, but careful coordination and well-planned functional abstraction made the ease of integration a pleasant surprise.
## Accomplishments that we're proud of
We're proud of coming up with an innovative solution that can help level the playing field for writers and for creating a platform that accomplishes this using many of the platforms that event sponsors provided. We are also proud of gaining familiarity with a variety of different platforms in a short period of time and showing resilience in the face of such a large task.
## What we learned
We learned quite a few things while working on this project. Firstly, we learned a lot about the blockchain space, how to utilize this technology during development, and what problems it can solve. Before this event, nobody in our group had much exposure to this field, so it was a welcome experience. In addition, some of us who were less familiar with full-stack development got exposure to Node and Express, and we all got to reapply concepts we learned when working with other databases to CockroachDB's user-friendly interface.
## What's next for LitHub
The main next step for LitHub would be to scale our application to handle a larger user base. From there we hope to share LitHub amongst authors and readers around the world so that they too can partake in the universe of NFTs to safely share their passion.
|
winning
|
## Inspiration
Since the pandemic, millions of people worldwide have turned to online alternatives to replace public fitness facilities and other physical activities. At-home exercises have become widely acknowledged, but the problem is that there is no way of telling whether people are doing the exercises accurately and whether they notice potentially physically damaging bad habits they may have developed. Even now, those habits may continuously affect and damage their bodies if left unnoticed. That is why we created **Yudo**.
## What it does
Yudo is an exercise web app that uses **TensorFlow AI**, a custom-developed exercise detection algorithm, and **pose detection** to help users improve their form while doing various exercises.
Once you open the web app, select your desired workout and Yudo will provide a quick exercise demo video. The closer your form matches the demo, the higher your accuracy score will be. After completing an exercise, Yudo will provide feedback generated via **ChatGPT** to help users identify and correct the discrepancies in their form.
## How we built it
We first developed the connection between **TensorFlow** and the live video stream via **BlazePose** and **JSON**. We sent the video's data to TensorFlow, which returned a JSON object of the different nodes and their coordinates; we used these to draw the nodes onto a 2D canvas that updates every frame and projected this on top of the video element. The continuous flow of JSON data from TensorFlow helped us create a series of data sets of what different plank forms look like. We took the relative positions of the relevant nodes in these data sets and created mathematical formulas that matched them.
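The form checks ultimately reduce to geometry on those node coordinates. The snippet below shows the idea in Python purely for illustration (our app does the equivalent math in JavaScript, and the keypoints and threshold here are placeholders): compute the angle at a joint from three keypoints and compare it against what a straight plank should look like.

```python
# Illustrative joint-angle check for plank form.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by points a-b-c, each (x, y)."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

# Example: shoulder, hip, ankle keypoints from one frame.
shoulder, hip, ankle = (210, 120), (330, 150), (470, 190)
hip_angle = joint_angle(shoulder, hip, ankle)

# A straight plank keeps the shoulder-hip-ankle line close to 180 degrees.
print(round(hip_angle, 1), "OK" if hip_angle > 165 else "hips sagging or piking")
```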
After a discussion with Sean, an MLH member, we decided to integrate OpenAI into our project by having it provide feedback based on how good your plank form is. We did so by utilizing the **ExpressJS** back end to handle requests for the AI-response endpoint. In the process, we also used **nodemon**, a tool that automatically restarts the server on code changes, to help with our development. We also used **Axios** to send data back and forth between the front end and back end.
The front end was designed using **Figma** and **Procreate** to create a framework that we could base our **React** components on. Since it was our first time using React and Tensorflow, it took a lot of trial and error to get CSS and HTML elements to work with our React components.
## Challenges we ran into
* Learning and implementing TensorFlow AI and React for the first time during the hackathon
* Creating a mathematical algorithm that accurately measures the form of a user while performing a specific exercise
* Making visual elements appear and move smoothly on a live video feed
## Accomplishments that we're proud of
* This is our 2nd hackathon (except Darryl)
* Efficient and even work distribution between all team members
* Creation of our own data set to accurately model a specific exercise
* A visually aesthetic, mathematically accurate and working application!
## What we learned
* How to use TensorFlow AI and React
* Practical applications of mathematics in computer science algorithms
## What's next for Yudo
* Implementation of more exercises
* Faster and more accurate live video feed and accuracy score calculations
* Provide live feedback during the duration of the exercise
* Integrate a database for users to save their accuracy scores and track their progress
|
## Inspiration
We built an AI-powered physical trainer/therapist that provides real-time feedback and companionship as you exercise.
With the rise of digitization, people are spending more time indoors, leading to increasing trends of obesity and inactivity. We wanted to make it easier for people to get into health and fitness, ultimately improving lives by combating the downsides of these trends. Our team built an AI-powered personal trainer that provides real-time feedback on exercise form using computer vision to analyze body movements. By leveraging state-of-the-art technologies, we aim to bring accessible and personalized fitness coaching to those who might feel isolated or have busy schedules, encouraging a more active lifestyle where it can otherwise be intimidating.
## What it does
Our AI personal trainer is a web application compatible with laptops equipped with webcams, designed to lower the barriers to fitness. When a user performs an exercise, the AI analyzes their movements in real-time using a pre-trained deep learning model. It provides immediate feedback in both textual and visual formats, correcting form and offering tips for improvement. The system tracks progress over time, offering personalized workout recommendations and gradually increasing difficulty based on performance. With voice guidance included, users receive tailored fitness coaching from anywhere, empowering them to stay consistent in their journey and helping to combat inactivity and lower the barriers of entry to the great world of fitness.
## How we built it
To create a solution that makes fitness more approachable, we focused on three main components:
Computer Vision Model: We utilized MediaPipe and its Pose Landmarks to detect and analyze users' body movements during exercises. MediaPipe's lightweight framework allowed us to efficiently assess posture and angles in real-time, which is crucial for providing immediate form correction and ensuring effective workouts.
Audio Interface: We initially planned to integrate OpenAI’s real-time API for seamless text-to-speech and speech-to-text capabilities, enhancing user interaction. However, due to time constraints with the newly released documentation, we implemented a hybrid solution using the Vosk API for speech recognition. While this approach introduced slightly higher latency, it enabled us to provide real-time auditory feedback, making the experience more engaging and accessible.
User Interface: The front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, manages communication between the AI model, audio interface, and user data. This setup allows the machine learning models to run efficiently, providing smooth real-time feedback without the need for powerful hardware.
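A condensed sketch of the MediaPipe pose step on the backend (the real app streams frames from the browser rather than reading a local webcam, but the library usage is the same):

```python
# Illustrative MediaPipe pose-tracking sketch.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    ok, frame = cap.read()
    if ok:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # e.g. grab the left elbow landmark to check arm position
            elbow = results.pose_landmarks.landmark[mp_pose.PoseLandmark.LEFT_ELBOW]
            print(f"left elbow at ({elbow.x:.2f}, {elbow.y:.2f})")
cap.release()
```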
## Challenges we ran into
One of the major challenges was integrating the real-time audio interface. We initially planned to use OpenAI’s real-time API, but due to the recent release of the documentation, we didn’t have enough time to fully implement it. This led us to use the Vosk API in conjunction with our system, which introduced increased codebase complexity in handling real-time feedback.
## Accomplishments that we're proud of
We're proud to have developed a functional AI personal trainer that combines computer vision and audio feedback to lower the barriers to fitness. Despite technical hurdles, we created a platform that can help people improve their health by making professional fitness guidance more accessible. Our application runs smoothly on various devices, making it easier for people to incorporate exercise into their daily lives and address the challenges of obesity and inactivity.
## What we learned
Through this project, we learned that sometimes you need to take a "back door" approach when the original plan doesn’t go as expected. Our experience with OpenAI’s real-time API taught us that even with exciting new technologies, there can be limitations or time constraints that require alternative solutions. In this case, we had to pivot to using the Vosk API alongside our real-time system, which, while not ideal, allowed us to continue forward. This experience reinforced the importance of flexibility and problem-solving when working on complex, innovative projects.
## What's next for AI Personal Trainer
Looking ahead, we plan to push the limits of the OpenAI real-time API to enhance performance and reduce latency, further improving the user experience. We aim to expand our exercise library and refine our feedback mechanisms to cater to users of all fitness levels. Developing a mobile app is also on our roadmap, increasing accessibility and convenience. Ultimately, we hope to collaborate with fitness professionals to validate and enhance our AI personal trainer, making it a reliable tool that encourages more people to lead healthier, active lives.
|
## Inspiration
As important as maintaining one's health is, we wanted to create something to help those interested in weightlifting jump straight into it without fear of injury: an app that detects and warns users of poor form during important compound lifts.
## What it does
Our app analyzes your lifting form using a computer vision pose estimation model, calculates key points within a provided video where the user exhibits poor form, and gives suggestions based on these key points.
## How we built it
* We used **Ultralytics' YOLOv8** pose estimation model to landmark and track a person's joints (see the sketch after this list)
* **Django** alongside the **Django REST framework** were used on the server side to build a **RESTful API**
* **React, TypeScript, and Tailwind CSS** were used to design the client and fetch data from the server
* We used Cloudflare's AI Worker API to access their Llama 3 LLM model to provide Chadbot
* Video files annotated with pose estimation landmarks were uploaded to Cloudflare's **R2 buckets**, which would then be served to the client to display to the user
* Adobe Express was used to generate key images used throughout the site.
* Git was used for version control and collaboration
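A minimal sketch of the YOLOv8 pose step referenced above (the video path is illustrative, and the form-checking rules that run on the keypoints afterwards are omitted):

```python
# Illustrative Ultralytics YOLOv8 pose-estimation sketch.
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")           # pretrained pose model
results = model("squat_attempt.mp4")      # one Results object per frame

for frame_idx, result in enumerate(results):
    if result.keypoints is None or len(result.keypoints.xy) == 0:
        continue
    keypoints = result.keypoints.xy[0]    # (num_joints, 2) tensor for person 0
    # Downstream, angles between hip/knee/ankle keypoints flag depth and back rounding.
    print(frame_idx, keypoints.shape)
```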
## Challenges we ran into
We faced challenges in accurately detecting a person's landmarks (joints) and ensuring real-time feedback for users, as well as serving the annotated video result back to the client.
## Accomplishments that we're proud of
We learned and implemented technology of an unfamiliar field (computer vision and machine learning) in a project that builds upon our existing knowledge of full-stack web development.
## What we learned
We learned the importance of refining machine learning models, especially when it comes to pose estimation, where a user's landmarks can vary drastically based on various factors such as camera angle and distance from target.
## What's next for How’sMyForm?
We plan to enhance our community support and integrate personalized workout plans to further assist users in their fitness journeys, such as implementing new algorithms for different lifts (e.g. RDL, bicep curls) as well as determining the camera angle automatically.
|
winning
|
## Getting started: how to install our Chrome extension
1. Download the .crx file from our website <http://54.183.209.90/>
2. Open Chrome and type in "chrome://extensions" in the address bar
3. Drag the .crx file into that page! You are done!
Since this is still in development mode, you probably cannot install the extension by directly opening the CRX file. But the method above will work.
## Inspiration
Students nowadays use web browsers heavily, and we open tens of new tabs every day. We think it would be a great idea to make full use of such fragmented time to learn something new, like learning new words in another language, or for an exam. That’s where we got the idea of Tabby Word.
## What it does
Tabby Word is a Chrome extension that gracefully generates vocabulary flashcards on your new tabs. It supports multiple language vocabularies for learners of a new language, and exam-takers. It has an elegant design, and is customizable in color and style, so it can be a perfect fit for your own taste of desktop.
## How we built it
Firstly, we designed the user interface using Sketch. Then we used Flask in Python to build the back end, and we used HTML, CSS, and JavaScript to build the front-end UI and interactions. We studied the official documentation on building Chrome extensions and eventually packaged the software as a Chrome extension.
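A simplified sketch of the Flask back end serving a card to the new-tab page (the word list and route are placeholders for our real data source):

```python
# Illustrative Flask endpoint serving one vocabulary flashcard per request.
import random
from flask import Flask, jsonify

app = Flask(__name__)

WORDS = [
    {"word": "ephemeral", "language": "English (GRE)", "meaning": "lasting a very short time"},
    {"word": "aprender",  "language": "Spanish",       "meaning": "to learn"},
]

@app.route("/api/card")
def card():
    # The extension's new-tab page fetches one card each time a tab opens.
    return jsonify(random.choice(WORDS))

if __name__ == "__main__":
    app.run(port=5000)
```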
## Challenges we ran into
* Tuning CSS for better UI design and user experience
* The idea and methodologies of building Chrome extensions
## Accomplishments that we're proud of
* We found it truly useful!
* We built and debugged the backend in only three hours!
* We also built a landing page for our app.
## What we learned
* Teamwork truly matters in building such applications.
* There is a gap between front-end design and implementation; designers need to consider the technical difficulties in implementation.
## What's next for Tabby Word
* Integrate more vocabulary resources, pronunciation, example sentences, etc.
* Incorporate the Ebbinghaus forgetting curve, so the app can smartly remind people to review a word exactly when they are about to forget it.
* Improve UI design and offer more customizability in its UI.
|
## Inspiration
We were inspired by how complicated making flashcards can be. Creating multiple flashcards while switching tabs is annoying, and making a single flashcard without the creation tools open is worse. We wanted to reduce that friction as much as possible to enhance learning. We also created a web application to reduce friction further through sharing and exploring flashcards made by users around the world.
## What it does
Our Chrome extension lets the user enter terms and descriptions and easily save them as a CSV file for import, while our web page lets users explore flashcards made by others worldwide.
## How we built it
We built the Chrome extension using JavaScript, CSS, and HTML, while the web application was built with Django, Django REST Framework, and React.
## Challenges we ran into
Chrome extensions prevent inline changes to HTML, which made it hard to pass input into the text boxes. There was also a lot of confusion around how Django and React work together.
## Accomplishments that we're proud of
We're proud of our application's promotion of sharing and of its practicality.
## What we learned
We learned a lot about how chrome extensions work and gained a much better understanding of Django and React.
## What's next for In a Flash
We want to incorporate pictures and polish up the web application. We also built image-to-text technology that we could integrate with those pictures.
|
## Inspiration
Over the summer, one of us was reading about climate change and realised that most of the news articles he came across were very negative, affecting his mental health to the point where it was hard to think of the world as a happy place. Then one day he watched a YouTube video about the hope that exists in that space and realised the impact this "good news" had on his mental health. Our idea is inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given the tendency of people to read only negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using Cohere), and once it passes a certain threshold (the scores are stored in CockroachDB), we show them a positive news article in the same topic area they were reading.
We do this with text analysis, using a Chrome extension front end and a Flask + CockroachDB backend that uses Cohere for natural language processing.
Since a lot of people also consume news via video, we built a part of our Chrome extension that transcribes audio to text and included it at the start of our pipeline as well! At the end, if the "negativity threshold" is passed, the Chrome extension tells the user that it's time for some good news and suggests a relevant article.
## How we built it
**Frontend**
We used a Chrome extension for the front end, which included handling the user experience and making sure that our application actually gets the user's attention while being useful. We used React, HTML and CSS to handle this. There were also a lot of API calls because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend.
**Backend**
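As a rough illustration of the backend flow described above (classify the page text, update the user's running negativity score, and decide whether to suggest a positive article), here is a minimal Flask sketch. The endpoint name, table layout, threshold, and the `classify_sentiment` placeholder are all hypothetical; the real backend used Cohere for classification and CockroachDB for storage.

```python
from flask import Flask, request, jsonify
import psycopg2  # CockroachDB speaks the PostgreSQL wire protocol

app = Flask(__name__)
NEGATIVITY_THRESHOLD = 5  # hypothetical daily threshold

def classify_sentiment(text: str) -> str:
    """Placeholder for the Cohere classification call; returns 'negative' or 'positive'."""
    return "negative" if "crisis" in text.lower() else "positive"

@app.route("/ingest", methods=["POST"])
def ingest():
    data = request.get_json()
    user_id, text = data["user_id"], data["text"]

    conn = psycopg2.connect("postgresql://root@localhost:26257/goodnews")
    with conn, conn.cursor() as cur:
        if classify_sentiment(text) == "negative":
            # Bump the user's negativity score and read the new value back.
            cur.execute(
                "UPDATE scores SET negative_count = negative_count + 1 "
                "WHERE user_id = %s RETURNING negative_count",
                (user_id,),
            )
        else:
            cur.execute("SELECT negative_count FROM scores WHERE user_id = %s", (user_id,))
        count = cur.fetchone()[0]

    # The extension shows a positive article once the threshold is crossed.
    return jsonify({"suggest_good_news": count >= NEGATIVITY_THRESHOLD})
```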
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints that websites have. We thought building the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a CockroachDB node and client and used it to store all of our classification data
4) Added support for multiple users of the extension, leveraging CockroachDB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use CockroachDB to create a database of news articles and topics that also supports multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed, along with a link to a positive article, every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing CockroachDB tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine whether we should push a notification to the user (see the sketch after this list).
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that.
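As a rough sketch of the 'dirty bit' idea above: the table, column, and connection details below are hypothetical, and CockroachDB's PostgreSQL wire compatibility means a standard driver such as psycopg2 can be used.

```python
import psycopg2

# Hypothetical sketch: table, column, and connection string are illustrative only.
conn = psycopg2.connect("postgresql://root@localhost:26257/goodnews")

with conn, conn.cursor() as cur:
    # One-time migration: track whether a notification has already been sent.
    cur.execute("ALTER TABLE scores ADD COLUMN IF NOT EXISTS notified BOOL DEFAULT false")

def should_notify(user_id: str, threshold: int = 5) -> bool:
    """Return True at most once per user until the flag is reset (e.g. by a daily job)."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT negative_count, notified FROM scores WHERE user_id = %s", (user_id,)
        )
        count, notified = cur.fetchone()
        if count >= threshold and not notified:
            cur.execute("UPDATE scores SET notified = true WHERE user_id = %s", (user_id,))
            return True
    return False
```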
|
losing
|
## Inspiration
We wanted to create a multiplayer game that would allow anyone to join and participate freely. We couldn't decide on what platform to build for, so we got the idea to create a game that is so platform-independent we could call it platform-transcendent. Since the game is played through email and SMS, it can be played on any internet-enabled device, regardless of operating system, age, or capability. You could even participate through a public computer in a library, removing the need to own a device altogether!
## What it does
The game allows user-created scavenger hunts to be uploaded to the server. Then other users can join by emailing the relevant email address or texting commands to our phone number. The user will then be sent instructions on how to play and updates as the game goes on.
## How I built it
We have a Microsoft Azure server backend that integrates the Twilio and SendGrid APIs. All of our code is written in Python. When you send us a text or email, Twilio and SendGrid notify our server, which processes the data, updates the server-side persistent records, and replies to the user with new information.
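To make the inbound-text flow concrete, here is a minimal sketch of a webhook like the one described above; the route name and game logic are placeholders, while the Twilio `MessagingResponse` usage reflects the standard library API.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def next_clue_for(player: str, message: str) -> str:
    """Hypothetical game logic: look up the player's progress and return the next clue."""
    return "Head to the library and text CLUE when you find the red book."

@app.route("/sms", methods=["POST"])
def inbound_sms():
    # Twilio posts the sender's number and message body to this webhook.
    player = request.form["From"]
    body = request.form.get("Body", "")

    reply = MessagingResponse()
    reply.message(next_clue_for(player, body))
    return str(reply)
```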
## Challenges I ran into
While sending emails is very straightforward with SendGrid, and Twilio works well for both inbound and outbound texts, setting up inbound email turned out to be very difficult due to the need to update MX records, which take a long time to propagate. Debugging all of the game logic was also a challenge.
## Accomplishments that I'm proud of
It works! We have a sample game set-up and you could potentially win $5 Amazon Gift Cards!
## What I learned
Working with servers is a lot of work! Debugging code on a computer that you don't have direct access to can be quite a hassle.
## What's next for MailTrail
We want to improve and emphasize the ability for users to create their own scavenger hunts.
|
## Inspiration
Our main inspiration behind this product was to reduce the amount of time it takes for an international wire transfer to move from one bank to another. Since we're all international students, we've experienced the frustration of slow and costly international wire transfers for tuition payments, often risking late penalties due to 1-5 day processing times. We did some research and identified that a lot of intermediary banks are involved in this process, and we thought about how we could get them out of the picture. BlockWire was born from our vision to use blockchain technology to create direct connections between foreign banks, eliminating intermediaries. Our goal is to significantly reduce both time and cost for international transfers, making cross-border financial transactions faster, cheaper, and more accessible for students and beyond. This real-world problem inspired us to combine blockchain with other cutting-edge technologies to revolutionize international banking operations.
## What it does
BlockWire reduces the time required for international wire transfers from 1-5 business days to a few minutes. It uses blockchain technology alongside an AI model for fraud-detection checks, which eliminates the intermediary banks currently involved in the process and makes transfers incredibly fast.
## How we built it
The main technology we used was blockchain, which decentralizes the data and makes it secure to the point where it's almost impossible to access unless you have the required key. This reduces money laundering and fraud by a huge percentage. We used an AI model for fraud-detection checks by banks, made faster with Cerebras.ai. Apart from that, we used React for the frontend, Python and Flask for the backend, and MongoDB as our database. One of the sponsors' products, PropelAuth, was also integrated for user authentication.
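As a rough illustration of the chaining idea (not the team's actual ledger code), here is a minimal sketch of linking transfer records by hash so that tampering with any record invalidates everything after it; the field names are hypothetical.

```python
import hashlib
import json
import time

def make_block(transfer: dict, prev_hash: str) -> dict:
    """Link a transfer record to the previous block by hashing both together."""
    block = {"transfer": transfer, "prev_hash": prev_hash, "timestamp": time.time()}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Illustrative chain of two transfers; amounts and parties are made up.
genesis = make_block({"from": "Bank A", "to": "Bank B", "amount": 0}, prev_hash="0" * 64)
payment = make_block({"from": "Student", "to": "University", "amount": 25000},
                     prev_hash=genesis["hash"])
print(payment["hash"])
```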
## Challenges we ran into
All of us were working on separate parts of the project, and integrating them was the toughest task; a lot of issues arose while doing that. Blockchain technology was also new to us and we had no prior experience with it, so a lot of our time went into learning about it and brainstorming what we wanted to build. There were some struggles with the Capital One API as well, but we were able to tackle them with relative ease compared to the others. Even though the struggles were there, we found our way through them and ended up learning a great deal.
## Accomplishments that we're proud of
Implementing the blockchain was our biggest achievement since that domain was new for all of us. Using Cerebras.ai's quick inference capabilities to make the fraud-detection checks stronger was another big one. Finally, integrating everything together was the hardest part, but we fought through it together and were able to come out on top.
## What we learned
We were able to learn about so many new technologies in which we had no prior experience such as blockchain and AI models. We got an understanding as to why blockchain is such a strong tool which is currently being used in the tech industry and why it's so powerful. We learnt about various other products like PropelAuth and Tune.AI while learning a lot about the financial sector and how technology can be used to help companies grow. All of us specialized in certain fields but we ended up learning way more and expanding our knowledge.
## What's next for BlockWire
BlockWire's future lies in expanding its blockchain-based payment solutions to address various global financial challenges. We plan to enter the remittance market, offering migrant workers a faster, more cost-effective way to send money home.
Education will remain a key focus, as we aim to partner with more universities globally to simplify fee payments for international students. To support this growth, we'll prioritize working closely with financial regulators to ensure compliance and potentially shape policies around blockchain-based international transfers.
We also plan on selling this product to apartment complexes in the country so that the time taken for rent payment and processing can also be reduced.
Lastly, we see potential in integrating our technology with e-commerce platforms, facilitating instant international payments for buyers and sellers in the growing global online marketplace. Through these strategic expansions, BlockWire aims to revolutionize international financial transactions across multiple sectors, making them more accessible, efficient, and cost-effective.
|
## Inspiration
In a world bustling with food options, choosing the perfect meal can be a daunting task, especially when you have specific dietary preferences and allergies. Our hackathon project, "I'Menu," aims to revolutionize the dining experience by putting the power of informed choice in your hands.
Imagine walking into a restaurant, scanning the menu, and effortlessly finding dishes tailored to your taste and dietary needs. With I'Menu, this becomes a reality. Our mobile app allows you to snap a picture of the menu, and in seconds, it analyzes the offerings, taking your preferences and allergies into account. The result? A curated list of menu recommendations just for you.
No more struggling to decode complex menus or worrying about hidden allergens. I'Menu simplifies the dining process, making it an enjoyable and stress-free experience for everyone. From gluten-free to vegan, from spicy to sweet, I'Menu empowers you to explore culinary delights with confidence and ease.
Join us on this journey as we harness the power of technology to enhance the way we dine. I'Menu - where a simple snapshot transforms your dining adventure into an extraordinary culinary journey.
## What it does
I'Menu is a mobile app that simplifies the dining experience by allowing users to take a picture of a menu and, based on their preferences and allergies, provides a curated list of menu recommendations. It uses optical character recognition (OCR) technology to extract text from menu images. But we don't stop here; we go a step further.
Upon opening the app, users can use a user-friendly camera interface for menu scanning and an immersive user profile setup. Users also have the flexibility to modify their preferences and dietary information at any time. We understand that dining preferences are deeply personal, and that's why I'Menu tailors its recommendations to you.
So in our user profile section, we invite users to provide a bit more about themselves, including details like age, height, weight, and even their religious dietary restrictions. These personal details serve a crucial role in refining your dining recommendations. By understanding your dietary needs, we can create a list of menu items that are not only delicious but also align with your dietary and religious requirements.
Once we've extracted the menu text and obtained the user data, our powerful recommendation engine gets to work. It applies sophisticated algorithms that factor in your preferences and allergies to generate a personalized selection of menu suggestions. Whether you're looking for a gluten-free option, a dish that aligns with your faith, or simply a delightful culinary adventure, I'Menu has you covered.
## How we built it:
1. User Interface Design:
* Designed a user-friendly mobile app interface with screens for menu scanning and preferences/allergies input.
* Created a camera interface for users to take pictures of menus.
2. User Personal Information:
* Developed a user profile system within the app.
* Allowed users to input their dietary preferences and allergies, storing this information for future recommendations.
3. Image Recognition:
* Utilized OCR technology, specifically Tesseract via its Python bindings, to extract text from menu images (see the OCR sketch after this list).
4. Menu Data Processing:
* Processed and cleaned the extracted text to structure it into a usable format.
* Identified and categorized menu items, using OpenAI to provide the list of ingredients.
5. Database Integration:
* Stored and managed menu data, user profiles, and recommendation data in a database.
6. Recommendation Engine:
* Implemented a recommendation engine that factors in user preferences and allergies.
* Developed algorithms that can filter menu items containing allergens.
* Depending on complexity, integrated machine learning models that learn user preferences over time.
7. Integration Testing:
* Tested the integration of the different components to ensure that menu recognition, user preferences, and recommendations work together.
8. Deployment:
* Deployed the application as a mobile app.
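As referenced in step 3, here is a minimal sketch of the OCR step using pytesseract; the helper name and file path are illustrative.

```python
from PIL import Image
import pytesseract

def extract_menu_text(image_path: str) -> list[str]:
    """Run Tesseract OCR on a menu photo and return cleaned, non-empty lines."""
    raw = pytesseract.image_to_string(Image.open(image_path))
    return [line.strip() for line in raw.splitlines() if line.strip()]

# Each cleaned line can then be passed to the ingredient and recommendation steps.
print(extract_menu_text("menu.jpg"))
```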
## Challenges we ran into:
1. OCR Accuracy: Overcoming OCR's limitations, such as accurately extracting text from images with varying quality and fonts.
2. Data Quality: Dealing with unstructured menu data and irregularities in menu layouts.
3. Algorithm Complexity: Balancing the simplicity of filtering allergens with the complexity of machine learning for personalized recommendations.
4. Technical Difficulty: Establishing seamless data transfer from the backend to the frontend, given the use of separate applications and functionality for each.
## Accomplishments that we're proud of:
1. Successfully implemented OCR to extract text from menu images.
2. Imported and structured menu data effectively, making it usable.
3. Developed a user profile system for storing preferences and allergies.
4. Created a functional recommendation engine, adapting to user preferences and allergies.
5. Designed a user-friendly mobile app interface for a better user experience.
## What we learned:
1. Advanced our understanding of OCR technology and its applications.
2. Gained insights into data processing and structuring for menu data.
3. Improved our skills in app development, user interface design, and creating a recommendation engine.
4. Learned about the challenges and complexities of handling user data and privacy.
## What's next:
1. Implement machine learning to enhance personalized recommendations.
2. Incorporate user feedback and ratings to improve the recommendation engine.
3. Expand the app's coverage to a wider range of restaurants and cuisines.
4. Collaborate with restaurant owners to provide real-time menu updates.
5. Explore options for monetization and partnerships within the restaurant industry.
6. Enhance the security and privacy measures for user data, particularly to the user personal information.
I'Menu represents a significant step toward enhancing the dining experience by simplifying menu choices, accommodating dietary restrictions, and providing users with tailored recommendations.
|
partial
|
# muse4muse
**Control a Sphero ball with your mind.**
Muse will measure your brain waves.
Depending on the magnitude of each wave, the Sphero will change color!
* Alpha -> Green
* Beta -> Blue
* Delta -> Red
* Theta -> Yellow
* Gamma -> White
When the player keeps calm, increasing the Alpha wave, the Sphero ball will move forward.
When the player blinks his/her eyes, the ball will rotate clockwise.
The goal of the player is to control his/her mind and guide the Sphero ball through the maze.
Come find Jenny&Youn and try it out!
---
This is an iOS app built with Objective-C, Sphero SDK, and Muse SDK.
Challenges we had:
* This was our first time using Objective-C as well as the two SDKs.
* Originally we made this game super hard and had to adjust the difficulty level.
* Because we didn't get any sleep, it was hard to control our own minds to test the game! But we did it! :D
Interesting fact:
* Muse can provide more information than the five types of brainwaves; however, we decided not to use it because we felt it was irrelevant to our project.
|
## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game: our "Killer Tetris" game went through a few iterations, and getting the blocks to move and generate took some trouble. We also had to cut back on how many details we added to the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldly gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to how Unity is used to render 3D environments, how assets and textures are edited in Blender, and what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as your switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibilities are endless
|
# Catch! (Around the World)
## Our Inspiration
Catch has to be one of our most favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the entire world, we thought it'd be nice to play catch with those relatives that we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic, so who says we can't play a little game while social distancing?
## What it does
Our application uses AR and Unity to allow you to play catch with another person from somewhere else in the globe! You can tap a button which allows you to throw a ball (or a random object) off into space, and then the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chatting application so they can have some commentary going on while they are playing catch.
## How we built it
For the AR functionality of the application, we used **Unity** with **ARFoundations** and **ARKit/ARCore**. To record the user sending the ball/object to another user, we used a **Firebase Real-time Database** back-end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate different 3D objects that users can choose to throw. Furthermore for the chat application, we developed it using **Python Flask**, **HTML** and **Socket.io** in order to create bi-directional communication between the web-user and server.
## Challenges we ran into
Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal is far too complex and it was too difficult to complete in the given time-frame. As such, our biggest challenge had to do with figuring out a project that was doable within the time of this hackathon.
This also ties into another challenge we ran into was with initially creating the application and the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve.
There was also some difficulty learning how to use the EchoAR api with Unity since it had a specific method of generating the AR objects. However we were able to use the tool without investigating too far into the code.
## Accomplishments
* Working Unity application with AR
* Use of EchoAR and integrating with our application
* Learning how to use Firebase
* Creating a working chat application between multiple users
|
winning
|
## Inspiration
Our journey began with a clear vision: to transform the landscape of collaborative learning. Recognizing the limitations of existing educational platforms, we were inspired to create uStudy, a space where technology meets education to provide personalized support through AI and foster a community of learners. The drive came from witnessing students' struggles to find effective study groups and resources tailored to their specific courses, sparking the idea to integrate specialized chatbots and discussion rooms into one seamless platform. The presenter's speech about wanting to revolutionize education at uOttawa with a vision for 2030 was also a big inspiration, as we would like to give back to our university and make sure that future students receive a better education.
## What it does
uStudy revolutionizes the way students collaborate and learn. At its core, the platform features:
* Specialized Chatbots: AI-powered, course-specific chatbots provide instant assistance, answering queries and aiding study sessions.
* Discussion Rooms: Dynamic spaces that professors can customize for each course, enhancing targeted learning and peer interaction.
* Course Dashboard: A centralized hub displaying recent grades and upcoming assignments with looming deadlines, designed to keep students on track with their academic responsibilities.
## How we built it
uStudy was brought to life through a combination of modern web technologies and AI. We leveraged Bootstrap for a responsive and intuitive frontend, ensuring accessibility across devices. The backend, powered by Flask with Python, handles real-time data processing and integrates our AI chatbots, trained on course-specific material using RAG technology. The synergy of these technologies provided the foundation for a platform that is both powerful and user-friendly.
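To illustrate the retrieval step behind a course-specific chatbot, here is a minimal sketch; the embedding function, course chunks, and similarity scoring are placeholders and do not reflect the production RAG setup.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding; the real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

# Hypothetical course notes that the chatbot retrieves from.
COURSE_CHUNKS = [
    "Assignment 2 covers recursion and is due on March 3.",
    "The midterm is worth 30% of the final grade.",
]
CHUNK_VECTORS = [embed(c) for c in COURSE_CHUNKS]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k most similar course chunks by cosine similarity."""
    q = embed(question)
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for v in CHUNK_VECTORS]
    top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
    return [COURSE_CHUNKS[i] for i in top]

# The retrieved chunks are prepended to the student's question before calling the LLM.
print(retrieve("When is assignment 2 due?"))
```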
## Challenges we ran into
The development of uStudy was not without its hurdles. Training AI chatbots to accurately understand and respond to a wide array of academic topics required extensive data curation. Implementing a scalable system for the dynamic creation and management of discussion rooms tested our engineering mettle. Additionally, developing a user-friendly dashboard that accurately represents complex academic data in real-time presented a significant design challenge.
## Accomplishments that we're proud of
We take pride in several key accomplishments:
* Creating a Unified Learning Environment: Successfully integrating chatbots, discussion rooms, and a dashboard into a cohesive platform.
* AI Implementation: Developing sophisticated AI chatbots that can accurately assist with course-specific inquiries.
* User Engagement: Building a platform that not only meets the technical specifications but is also engaging and easy to use for students.
## What we learned
This project was a profound learning experience. We gained deeper insights into web development, AI, and the intricacies of creating a platform that serves diverse educational needs. The importance of user feedback in shaping a user-centric design and the challenges of AI in educational contexts were key lessons that will guide our future endeavors.
## What's next for uStudy
The future of uStudy is bright and full of potential. We plan to:
* Professor participation: Create a professor portal where they can modify the content of their courses and train the AI models with more content.
* Expand Course Content: Enrich our database to cover more disciplines and courses.
* Enhance AI Capabilities: Improve the accuracy and responsiveness of our chatbots.
* Grow the Community: Foster a larger, more active user base to enhance collaborative learning.
* Incorporate Feedback: Continuously refine the platform based on user input, ensuring uStudy remains at the forefront of educational innovation.
|
## Inspiration
The inspiration for building this project stemmed from the desire to empower students and make learning coding more accessible and engaging. It combines AI technology with education to provide tailored support, making it easier for students to grasp coding concepts. The goal was to address common challenges students face when learning to code, such as doubts and the need for personalized resources. Overall, the project's inspiration is driven by a passion for enhancing the educational experience and fostering a supportive learning environment.
## What it does
The chatbot project is designed to cater to a range of use cases, with a clear hierarchy of priorities. At the highest priority level, the chatbot serves as a real-time coding companion, offering students immediate and accurate responses and explanations to address their coding questions and doubts promptly. This ensures that students can swiftly resolve any coding-related issues they encounter. Moving to the medium priority use case, the chatbot provides personalized learning recommendations. By evaluating a student's individual skills and preferences, the chatbot tailors its suggestions for learning resources, such as tutorials and practice problems. This personalized approach aims to enhance the overall learning experience by delivering materials that align with each student's unique needs. At the lowest priority level, the chatbot functions as a bridge, facilitating connections between students and coding mentors. When students require more in-depth assistance or guidance, the chatbot can help connect them with human mentors who can provide additional support beyond what the chatbot itself offers. This multi-tiered approach reflects the project's commitment to delivering comprehensive support to students learning to code, spanning from immediate help to personalized recommendations and, when necessary, human mentorship.
## How we built it
The development process of our AI chatbot involved a creative integration of various Language Models (**LLMs**) using an innovative technology called **LangChain**. We harnessed the capabilities of LLMs like **Bard**, **ChatGPT**, and **PaLM**, crafting a robust pipeline that seamlessly combines all of them. This integration forms the core of our powerful AI bot, enabling it to efficiently handle a wide range of coding-related questions and doubts commonly faced by students. By unifying these LLMs, we've created a chatbot that excels in providing accurate and timely responses, enhancing the learning experience for students.
Moreover, our project features a **centralized database** that plays a pivotal role in connecting students with coding mentors. This database serves as a valuable resource, ensuring that students can access the expertise and guidance of coding mentors when they require additional assistance. It establishes a seamless mechanism for real-time interaction between students and mentors, fostering a supportive learning environment. This element of our project reflects our commitment to not only offer AI-driven solutions but also to facilitate meaningful human connections that further enrich the educational journey.
In essence, our development journey has been marked by innovation, creativity, and a deep commitment to addressing the unique needs of students learning to code. By integrating advanced LLMs and building a robust infrastructure for mentorship, we've created a holistic AI chatbot that empowers students and enhances their coding learning experience.
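As a rough, library-agnostic sketch of the kind of multi-model pipeline described here (not the actual LangChain code), one simple pattern is to try each model client in priority order and fall back when one fails; the client stubs below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelClient:
    name: str
    ask: Callable[[str], str]  # wraps the real API call for one LLM

def answer_with_fallback(question: str, clients: list[ModelClient]) -> str:
    """Try each LLM in priority order; return the first successful answer."""
    for client in clients:
        try:
            return f"[{client.name}] {client.ask(question)}"
        except Exception:
            continue  # fall back to the next model in the pipeline
    return "Sorry, no model is available right now."

def chatgpt_stub(q: str) -> str:
    raise RuntimeError("rate limited")  # simulate a failing model

def palm_stub(q: str) -> str:
    return "A list comprehension builds a list in a single expression."

clients = [ModelClient("chatgpt", chatgpt_stub), ModelClient("palm", palm_stub)]
print(answer_with_fallback("What is a list comprehension?", clients))
```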
## Challenges we ran into
Addressing the various challenges encountered during the development of our AI chatbot project involved a combination of innovative solutions and persistent efforts. To conquer integration complexities, we invested substantial time and resources in research and development, meticulously fine-tuning different Language Models (LLMs) such as Bard, ChatGPT, and Palm to work harmoniously within a unified pipeline. Data quality and training challenges were met through an ongoing commitment to curate high-quality coding datasets and an iterative training process that continually improved the chatbot's accuracy based on real-time user interactions and feedback.
For real-time interactivity, we optimized our infrastructure, leveraging cloud resources and employing responsive design techniques to ensure low-latency communication and enhance the overall user experience. Mentor matching algorithms were refined continuously, considering factors such as student proficiency and mentor expertise, making the pairing process more precise. Ethical considerations were addressed by implementing strict ethical guidelines and bias audits, promoting fairness and transparency in chatbot responses.
User experience was enhanced through user-centric design principles, including usability testing, user interface refinements, and incorporation of user feedback to create an intuitive and engaging interface. Ensuring scalability involved the deployment of elastic cloud infrastructure, supported by regular load testing and optimization to accommodate a growing user base.
Security was a paramount concern, and we safeguarded sensitive data through robust encryption, user authentication protocols, and ongoing cybersecurity best practices, conducting regular security audits to protect user information. Our collective dedication, collaborative spirit, and commitment to excellence allowed us to successfully navigate and overcome these challenges, resulting in a resilient and effective AI chatbot that empowers students in their coding education while upholding the highest standards of quality, security, and ethical responsibility.
## Accomplishments that we're proud of
Throughout the development and implementation of our AI chatbot project, our team has achieved several accomplishments that we take immense pride in:
**Robust Integration of LLMs:** We successfully integrated various Language Models (LLMs) like Bard, ChatGPT, and Palm into a unified pipeline, creating a versatile and powerful chatbot that combines their capabilities to provide comprehensive coding assistance. This accomplishment showcases our technical expertise and innovation in the field of natural language processing.
**Real-time Support**: We achieved the goal of providing real-time coding assistance to students, ensuring they can quickly resolve their coding questions and doubts. This accomplishment significantly enhances the learning experience, as students can rely on timely support from the chatbot.
**Personalized Learning Recommendations**: Our chatbot excels in offering personalized learning resources to students based on their skills and preferences. This accomplishment enhances the effectiveness of the learning process by tailoring educational materials to individual needs.
**Mentor-Matching Database**: We established a centralized database for coding mentors, facilitating connections between students and mentors when more in-depth assistance is required. This accomplishment emphasizes our commitment to fostering meaningful human connections within the digital learning environment.
**Ethical and Bias Mitigation**: We implemented rigorous ethical guidelines and bias audits to ensure that the chatbot's responses are fair and unbiased. This accomplishment demonstrates our dedication to responsible AI development and user fairness.
**User-Centric Design**: We created an intuitive and user-friendly interface that simplifies the interaction between students and the chatbot. This user-centric design accomplishment enhances the overall experience for students, making the learning process more engaging and efficient.
**Scalability**: Our chatbot's architecture is designed to scale efficiently, allowing it to accommodate a growing user base without compromising performance. This scalability accomplishment ensures that our technology remains accessible to a broad audience.
**Security Measures**: We implemented robust security protocols to protect user data, ensuring that sensitive information is safeguarded. Regular security audits and updates represent our commitment to user data privacy and cybersecurity.
These accomplishments collectively reflect our team's dedication to advancing education through technology, providing students with valuable support, personalized learning experiences, and access to coding mentors. We take pride in the positive impact our AI chatbot has on the educational journey of students and our commitment to responsible and ethical AI development.
## What we learned
The journey of developing our AI chatbot project has been an enriching experience, filled with valuable lessons that have furthered our understanding of technology, education, and teamwork. Here are some of the key lessons we've learned:
**Complex Integration Requires Careful Planning**: Integrating diverse Language Models (LLMs) is a complex task that demands meticulous planning and a deep understanding of each model's capabilities. We learned the importance of a well-thought-out integration strategy.
**Data Quality Is Paramount**: The quality of training data significantly influences the chatbot's performance. We've learned that meticulous data curation and continuous improvement are essential to building an accurate AI model.
**Real-time Interaction Enhances Learning**: The ability to provide real-time coding assistance has a profound impact on the learning experience. We learned that prompt support can greatly boost students' confidence and comprehension.
**Personalization Empowers Learners**: Tailoring learning resources to individual students' needs is a powerful way to enhance education. We've discovered that personalization leads to more effective learning outcomes.
**Mentorship Matters**: Our mentor-matching database has highlighted the importance of human interaction in education. We learned that connecting students with mentors for deeper assistance is invaluable.
**Ethical AI Development Is Non-Negotiable**: Addressing ethical concerns and bias in AI systems is imperative. We've gained insights into the importance of transparent, fair, and unbiased AI interactions.
**User Experience Drives Engagement**: A user-centric design is vital for engaging students effectively. We've learned that a well-designed interface improves the overall educational experience.
**Scalability Is Essential for Growth**: Building scalable infrastructure is crucial to accommodate a growing user base. We've learned that the ability to adapt and scale is key to long-term success.
**Security Is a Constant Priority**: Protecting user data is a fundamental responsibility. We've learned that ongoing vigilance and adherence to best practices in cybersecurity are essential.
**Teamwork Is Invaluable**: Collaborative and cross-disciplinary teamwork is at the heart of a successful project. We've experienced the benefits of diverse skills and perspectives working together.
These lessons have not only shaped our approach to the AI chatbot project but have also broadened our knowledge and understanding of technology's role in education and the ethical responsibilities that come with it. As we continue to develop and refine our chatbot, these lessons serve as guideposts for our future endeavors in enhancing learning and supporting students through innovative technology.
## What's next for ~ENIGMA
The journey of our AI chatbot project is an ongoing one, and we have ambitious plans for its future:
**Continuous Learning and Improvement**: We are committed to a continuous cycle of learning and improvement. This includes refining the chatbot's responses, expanding its knowledge base, and enhancing its problem-solving abilities.
**Advanced AI Capabilities**: We aim to incorporate state-of-the-art AI techniques to make the chatbot even more powerful and responsive. This includes exploring advanced machine learning models and technologies.
**Expanded Subject Coverage**: While our chatbot currently specializes in coding, we envision expanding its capabilities to cover a wider range of subjects and academic disciplines, providing comprehensive educational support.
**Enhanced Personalization**: We will invest in further personalization, tailoring learning resources and mentor matches even more closely to individual student needs, preferences, and learning styles.
**Multi-Lingual Support**: We plan to expand the chatbot's language capabilities, enabling it to provide support to students in multiple languages, making it accessible to a more global audience.
**Mobile Applications**: Developing mobile applications will enhance the chatbot's accessibility, allowing students to engage with it on their smartphones and tablets.
**Integration with Learning Management Systems**: We aim to integrate our chatbot with popular learning management systems used in educational institutions, making it an integral part of formal education.
**Feedback Mechanisms**: We will implement more sophisticated feedback mechanisms, allowing users to provide input that helps improve the chatbot's performance and user experience.
**Research and Publication**: Our team is dedicated to advancing the field of AI in education. We plan to conduct research and contribute to academic publications in the realm of AI-driven educational support.
**Community Engagement**: We are eager to engage with the educational community to gather insights, collaborate, and ensure that our chatbot remains responsive to the evolving needs of students and educators.
In essence, the future of our project is marked by a commitment to innovation, expansion, and a relentless pursuit of excellence in the realm of AI-driven education. Our goal is to provide increasingly effective and personalized support to students, empower educators, and contribute to the broader conversation surrounding AI in education.
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
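To sketch the order pipeline described above, here is a minimal Flask endpoint; the transcription and extraction helpers are placeholders standing in for the actual AI calls, and the route name is hypothetical.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def transcribe(audio_bytes: bytes) -> str:
    """Placeholder for the speech-to-text model call."""
    return "can I get a large vanilla shake and two burgers"

def extract_items(transcript: str) -> list[dict]:
    """Placeholder for the LLM call that turns a transcript into structured items."""
    return [
        {"item": "vanilla shake", "size": "large", "quantity": 1},
        {"item": "burger", "quantity": 2},
    ]

@app.route("/order", methods=["POST"])
def order():
    # The frontend posts a recorded clip once it detects the user stopped talking.
    transcript = transcribe(request.files["audio"].read())
    items = extract_items(transcript)
    return jsonify({"transcript": transcript, "items": items})
```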
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
losing
|
**[DEMO VIDEO LINK — On Loom!](https://www.loom.com/share/2b15581b7fe34ec7974c26f24ad3829f?sid=87ea8fee-5efa-46c1-984f-bb6bab8d664a)** ✨
## Inspiration
We were inspired by how Wikipedia links to other Wikipedia pages so that you can find the information quickly and painlessly if you don't know what a word means. We felt this technology would save people a lot of time parsing jargon-heavy documents, and help you *understand* your readings for school!
## What it does
Source Scanner takes raw text, and intelligently selects words/phrases that can be reasonably assumed to need definition or extra information, and makes it so that we can hover over those words and find helpful definitions, as well as links off-site to places such as Wikipedia for further reading!✨
We know that when you're reading, it takes a while for your brain to really process and understand words, and on occasion, we "gloss" over words we can usually infer through context. While this helps our comprehension skills, we want to make information more accessible for students to *actually* learn new vocabulary and concepts that they may not have learned on their own!
## How we built it 👩🏻🔬
On the backend, when a user makes a request for our web app to parse input and return more information, our app sends the user-inputted information to OpenAI's API with an engineered prompt to parse the user input for words and concepts that may need to be defined and puts those, along with definitions, into an easily parse-able data structure. From there, each of those concepts is fed into [Metaphor's API](https://metaphor.systems/) to generate relevant links, and the best link by confidence score is displayed as a hyperlink to the user upon hovering over the previously selected word, along with a definition. We use MongoDB to store user information, Google Cloud for user authentication, and Vercel for website deployment.
* ReactTS
* TailwindCSS
* NextJS
* tRPC
* MongoDB
* Prisma
* Google Cloud
* Metaphor API
* OpenAI API
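To illustrate the parsing step, here is a minimal sketch of prompting an LLM to return jargon terms with definitions as JSON; the prompt wording and model choice are assumptions, and the Metaphor link lookup is stubbed with a placeholder.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "From the text below, pick terms a student may not know. "
    'Reply with JSON: {"terms": [{"term": ..., "definition": ...}]}\n\n'
)

def find_terms(user_text: str) -> list[dict]:
    """Ask the model for jargon terms and their short definitions."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT + user_text}],
    )
    # A production version would validate the model output before parsing.
    return json.loads(response.choices[0].message.content)["terms"]

def best_link(term: str) -> str:
    """Placeholder for the Metaphor search call that returns the top-scoring link."""
    return f"https://en.wikipedia.org/wiki/{term.replace(' ', '_')}"

for entry in find_terms("Mitochondria drive oxidative phosphorylation in eukaryotes."):
    print(entry["term"], "-", entry["definition"], "-", best_link(entry["term"]))
```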
## Challenges we ran into
One of the main challenges our group ran into was integrating so many moving parts and getting them to work together seamlessly. Getting OAuth with Google Cloud to work with tRPC, Prisma, and MongoDB Atlas was quite the challenge given the different ways they interact with each other. Another hurdle was prompt engineering to request meaningful data from ChatGPT and Metaphor, because a lot of specificity and accuracy need to come together to get something that benefits the user. Finally, making specific UI components, such as tooltips, actually help rather than hinder the user experience was a definite learning curve.
## Accomplishments that we're proud of
We are very proud of the fact that we not only successfully achieved a design with a large number of moving parts, but we did so while building out several well-designed pages that are easily understandable for site users.
## What we learned 🫶🏼
We learned a lot about working with the OpenAI and Metaphor APIs, as well as the power of a good prompt and a better AI when it came time to parse through the input. We also appreciated the opportunity to refamiliarize ourselves with React and design elements of websites.
## What's next for SourceScanner ⚡️⚡️
The next step for SourceScanner is adding more of what we call "lenses," or areas of the internet to search for answers. We only have one (the "Wikipedia" lens), but we plan to expand to other prompts and websites for more niche information, such as C or C++ documentation.
|
## Inspiration
We wanted to do our part to spread awareness and inspire the general public to make adjustments that will improve everyone's air quality. We also wanted to demonstrate that these adjustments are not as challenging as they seem; our simulator shows that frequent small top-ups go a long way.
## What it does
Our website includes information about EVs and a simulation game where you have to drive past EV charging stations for quick top-ups, otherwise the vehicle will slow down to a crawl. EV stations come up fairly frequently, whether it be a regular wall socket or a supercharger station.
## How we built it
Our website was built on repl.it, where one of us worked on the game while the other used HTML/CSS to create the website. After a domain was chosen from domain.com, we started to learn how to create a website using HTML. For some parts, code was taken from free HTML templates and later modified in an HTML editor. Afterwards, Google Cloud was used to host the website, which required us to learn how to use servers.
## Challenges we ran into
For starters, almost everything was new for all of us, from learning HTML to learning how to host off of a server. As new coders, we had to spend many hours learning how to code before we could do anything. Once that was done, we had to spend many hours testing code to see if it produced the desired result. After all that was over, we had to learn how to use Google Cloud, our first experience with servers.
## Accomplishments that we're proud of
Actually having a working website, and having the website be hosted.
## What we learned
HTML, CSS, JS, Server hosting.
## What's next for EVolving Tech
We want to add destinations to give our simulation more complexity and context. This will allow the users to navigate between points of interest in their home city to get a feel of how range measures up to level of charge.
|
## Inspiration
During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before.
## What It Does
Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions.
## How We Built It
We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML.
Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API.
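As a simplified sketch of the document-similarity idea (averaged word vectors plus cosine similarity), the snippet below is illustrative only; the real system used learned Sentence2Vec weights and a custom word2vec matrix rather than the random vectors shown here.

```python
import numpy as np

# Hypothetical word-vector lookup standing in for the trained word2vec matrix.
VECTORS = {w: np.random.default_rng(i).standard_normal(50)
           for i, w in enumerate(["android", "app", "recipe", "search", "ml"])}

def doc_vector(text: str) -> np.ndarray:
    """Average the vectors of known words to get a crude document embedding."""
    words = [VECTORS[w] for w in text.lower().split() if w in VECTORS]
    return np.mean(words, axis=0) if words else np.zeros(50)

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two documents, reported as a percentage."""
    va, vb = doc_vector(a), doc_vector(b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(100 * va @ vb / denom) if denom else 0.0

print(f"{similarity('android recipe search app', 'ml recipe app'):.1f}% similar")
```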
We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions.
After finishing the Android app and backend development, we decided to add a web app to make the service more accessible, made using React.
## Challenges We Ran Into
From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as Github READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw a degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models.
We also ran into quite a few issues getting our backend up and running, as it was our first time using Microsoft Azure. Specifically, Azure Functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned
We learned a lot about Microsoft Azure’s Cloud Service, mobile development and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time.
On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work)
## What’s Next for Hack2
The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore are LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
|
partial
|
## Inspiration
Wondering what Vine, Tinder and YikYak would be like when combined together.
## What it does
Users can take and upload videos, or explore videos uploaded nearby in a unique UI.
## How I built it
Android app with Node.js back end
## Challenges I ran into
Video streaming.
## Accomplishments that I'm proud of
Getting video streaming to work
## What I learned
How to video stream
## What's next for QuickVid
IDK
|
## Inspiration
We had multiple inspirations for creating Discotheque. Multiple members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to do.
## What it does
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.
## How we built it
We used React, with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend, Firebase for user data and authentication, Twilio's Live API for the streaming, and Twilio's Serverless functions for hosting and backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including this in the final application.
## Challenges we ran into
This was the first ever hackathon for half of our team, so there was a very rapid learning curve for most of the team, but I believe we were all able to learn new skills and utilize our abilities to the fullest in order to develop a successful MVP! We also struggled immensely with the Twilio Live API since it's newer and we had no experience with it before this hackathon, but we are proud of how we were able to overcome our struggles to deliver an audio application!
## What we learned
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.
## What's next for Discotheque
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would try to also replace the microphone input with computer audio input to have a cleaner audio mix. We would also try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music) similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure the good availability of good quality music.
|
## Inspiration
In 2012, infants and newborns made up 73% of hospital stays in the U.S. and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data captures factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
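A minimal sketch of the two pieces described above: the secure user hash and the probability output. The feature fields and the sigmoid squashing are illustrative assumptions rather than our exact model:

```python
# Simplified sketch of the evaluation flow (field names are assumptions).
import hashlib
import numpy as np

def patient_hash(name: str, dob: str, card_last4: str) -> str:
    """Derive the secure lookup token from a combination of the user's data."""
    return hashlib.sha256(f"{name}|{dob}|{card_last4}".encode()).hexdigest()

def needs_attention_probability(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Linear model score squashed to [0, 1]; the feature vector might hold
    temperature, hours since last meal, and fluid intake."""
    score = float(features @ weights + bias)
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid so the output reads as a probability
```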
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using sha256, we securely passed each user a unique and near-impossible-to-reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user-friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
|
partial
|
## Inspiration
Beautiful stationery and binders filled with clips, stickies, and colourful highlighting are things we miss from the past. Passing notes and memos and recognizing who they're from just from the style and handwriting, holding the sheet in your hand, and getting a little personalized note on your desk are becoming things of the past as the black and white of emails and messaging systems takes over. Let's bring back the personality, color, and connection opportunities of memo pads in the office while taking full advantage of modern technology to make our lives easier. Best of both worlds!
## What it does
Memomi is a web application for offices that simplifies organization in a busy environment while fostering small moments of connection and helping fill in the gaps along the way. Using powerful NLP technology, Memomi automatically links related memos together, suggests topical new memos to fill in missing info, and allows you to send memos to other people in your office space.
## How we built it
We built Memomi using Figma for UI design and prototyping, React web apps for frontend development, Flask APIs for the backend logic, and Google Firebase for the database. Cohere's NLP API forms the backbone of our backend logic and is what powers Memomi's intelligent suggestions for tags, groupings, new memos, and links.
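A minimal sketch of how embedding-based linking can work; the exact Cohere SDK call and the similarity threshold here are assumptions:

```python
# Link memos whose embeddings are close in cosine space.
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def link_related(memos: list[str], threshold: float = 0.8) -> list[tuple[int, int]]:
    """Return index pairs of memos that are semantically similar."""
    embs = np.array(co.embed(texts=memos).embeddings)
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)  # unit vectors
    sims = embs @ embs.T                                       # pairwise cosine sims
    return [(i, j) for i in range(len(memos)) for j in range(i + 1, len(memos))
            if sims[i, j] >= threshold]
```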
## Challenges we ran into
With such a dynamic backend and more complex data, we struggled to identify how best to organize and digitize the system. We also struggled a lot with the frontend because we needed to both edit and display data annotated at exact positions derived from our information. Connecting our existing backend features to the frontend was our main barrier to showing off our accomplishments.
## Accomplishments that we're proud of
We're very proud of the UI design and what we were able to implement in the frontend. We're also incredibly proud of how strong our backend is! We're able to generate and categorize meaningful tags, groupings, and links between documents and annotate text to display it.
## What we learned
We learned about different NLP topics, how to make less rigid databases, and learned a lot more about advanced react state management.
## What's next for Memomi
We would love to implement sharing memos in office spaces as well as authorization and more text editing features like markdown support.
|
## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside in favor of a specific one, which we gradually aligned with. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of launching a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure or to a more professional degree. We hit several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they had never attempted before, understanding compromise, and putting the team ahead of personal views made this hackathon one of the most memorable for everyone. Emotional intelligence played just as important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
|
## Inspiration
The inspiration for an app like Memo stems from a desire to address common challenges people face in budgeting and expense tracking. The idea originates from observing the inconvenience of manual data entry or the difficulty in keeping an accurate record of daily expenses. Additionally, the increasing reliance on digital receipts and the need for a more intuitive and efficient solution to manage finances are key motivators.
## What it does
Memo takes snapshots of receipts using the webcam, processing the data and returning insightful feedback on spending habits. Details include time and location of purchases, price of purchases, and name of stores. Memo then displays the information in a visually appealing user interface for users to easily analyze their spending.
## How we built it
We built it with Flask, Python, SQL, Tailwind, CSS, React, tesseract, and opencv.
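A minimal sketch of the receipt-reading step, assuming pytesseract for the OCR; the preprocessing and the price regex are simplified for illustration:

```python
# Read a receipt snapshot and pull out candidate prices.
import re
import cv2
import pytesseract

def read_receipt(path: str) -> dict:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # A light threshold helps tesseract on faded thermal-paper receipts.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    prices = [float(p) for p in re.findall(r"\$?(\d+\.\d{2})", text)]
    return {"raw_text": text, "total_guess": max(prices) if prices else None}
```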
## Challenges we ran into
We had difficulty setting up the database to store the information from the snapshots of receipts.
## Accomplishments that we're proud of
Even though we had little experience with backend development, we created a functioning app that integrated databases and backend technologies such as Flask.
## What we learned
We learned Flask, Python, SQL, Tailwind, CSS, React, tesseract, opencv, and we learned to COLLABORATE.
## What's next for Memo
We plan on improving the data visualization for Memo by leveraging Google Maps API. For a better and more intuitive user experience, Memo will display the users' purchases through a 3D map of the store locations, providing insightful data on the time and dates of each purchase.
|
winning
|
## Inspiration
The devastation caused by recent hurricanes in Florida highlighted the shortcomings in post-disaster relief efforts. Thousands of people were left without homes and access to essential resources. As we watched these events unfold, we felt compelled to dig deeper into the problem and understand how we could make a difference. With communities suffering due to slow and inefficient resource allocation, we knew there had to be a better way to help. This realization inspired us to create Just Hurry!, a platform dedicated to transforming disaster relief efforts by providing faster and more efficient support to those in need.
## What it does
Just Hurry! is a comprehensive platform designed to bridge the gap between disaster relief organizations and the vulnerable communities affected by hurricanes. Using real-time data and predictive ML modeling, Just Hurry! identifies the regions most in need of immediate assistance, based on factors like socioeconomic vulnerability, historical hurricane data, air pressure, wind speed, and population density. Organizations can register their available resources, such as food, water, and medical supplies, while the platform ensures these resources are allocated efficiently based on real-time requests from users in affected areas. In addition to resource allocation, Just Hurry! allows individuals to request emergency help or sign up as volunteers to provide aid where it’s most needed.
## How we built it
We built Just Hurry! by leveraging advanced data analytics and machine learning to create a custom model that calculates real-time risk factors for different communities. Using Palantir’s Foundry, we analyzed critical variables such as air pressure, wind speed, and population density to predict the impact of hurricanes and determine which areas would be most at risk. We then developed a platform with a seamless user interface, making it simple for organizations to register their resources and for users to request help or volunteer. Throughout the development process, we ensured that Just Hurry! prioritizes user experience, data accuracy, and the efficient distribution of resources.
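For illustration, here is a toy version of a composite risk score over the variables mentioned above; the weights and normalization constants are placeholders, not the trained Foundry model:

```python
# Illustrative composite risk score; higher is worse.
def risk_score(wind_speed_kts: float, air_pressure_mb: float,
               population_density: float, vulnerability_index: float) -> float:
    """Each input is normalized to roughly [0, 1] before weighting."""
    wind = min(wind_speed_kts / 150.0, 1.0)  # ~150 kts is roughly the strongest storms
    pressure = min(max((1013 - air_pressure_mb) / 100.0, 0.0), 1.0)  # lower pressure, stronger storm
    density = min(population_density / 10000.0, 1.0)  # people per square mile
    return 0.35 * wind + 0.25 * pressure + 0.2 * density + 0.2 * vulnerability_index
```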
## Challenges we ran into
One of the biggest challenges we faced was accurately predicting the areas that would be most affected by hurricanes. With so many variables to consider, including weather patterns, socioeconomic factors, and historical data, developing a model that could consistently provide accurate results required extensive research and testing. Additionally, building a platform capable of handling real-time data and ensuring that resources are efficiently allocated presented technical challenges. Balancing the need for a user-friendly interface while managing the backend complexities of resource allocation and disaster response was another major hurdle we had to overcome.
## Accomplishments that we're proud of
We’re proud to have built a tool that can genuinely make a difference in the way disaster relief efforts are conducted. Just Hurry! is more than just a platform; it’s a solution that can save lives and help communities recover faster. We successfully developed a real-time model that predicts hurricane impact zones and created a platform that optimizes the allocation of resources. Furthermore, we’re proud of our ability to streamline communication between organizations and those in need, ensuring that help reaches the right people at the right time.
## What we learned
Throughout this project, we learned a great deal about the challenges of disaster response and the importance of data in making informed decisions. We discovered that while technology can’t prevent natural disasters, it can play a significant role in mitigating their effects by improving the speed and efficiency of relief efforts. We also learned about the complexities of resource management and the importance of collaboration between relief organizations and communities. This project taught us the value of persistence and innovation when tackling large-scale problems like disaster relief.
## What's next for Just Hurry!
Our vision for Just Hurry! extends beyond hurricane relief. We want to expand the platform to be adaptable for various natural disasters and emergencies, whether it’s earthquakes, floods, or wildfires. Our next steps include refining our predictive model with more data and incorporating partnerships with additional relief organizations to broaden the platform’s reach. We also plan to integrate features that allow for more efficient volunteer coordination and community-driven support systems. Just Hurry! is more than a temporary solution—it’s a long-term vision for a more resilient, prepared society capable of responding swiftly to crises and rebuilding stronger than before.
|
## Inspiration
We were inspired to think about natural disaster relief, recovery, and prevention after talking to IBM Watson representatives. After mulling some ideas over, we came to the realization that visualizing and contextualizing past natural disasters may be useful. Once a natural disaster hits, it is very difficult for outside parties to help due to damaged infrastructure (no Wifi, cellular signal, etc). Thus, it is in everyone's best interest to predict and be prepared for the onset of natural disasters. Achieving this requires understanding regional trends and characteristics, so we set out with the goal of making a comprehensive visualization dashboard in mind.
## What we made
Our web app allows the user to plot side-by-side choropleth heatmaps of the USA across several variables (frequency of floods, earthquakes, forest fires, hurricanes, blizzards, and poverty rate), from 2000 - 2018. It also shows the most commonly traveled to airports from each state (using Amadeus), which may give interesting insights into how human behavior is affected by natural disasters. In general, the app empowers people to make their own informed judgments given years of data, and with more data and visualizations, it will be even more empowering.
## How we built it
We downloaded and processed data from the Federal Emergency Management Agency and US Census. We plot the aggregate number of events as a choropleth map on the state level. The web app uses Dash, a high-level Python framework that wraps Flask and enables easy use of Plotly - a popular graphical library.
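A minimal Dash sketch of one choropleth panel (the CSV filename and column names are assumptions):

```python
# One state-level choropleth of flood counts, served by Dash.
import dash
from dash import dcc, html
import pandas as pd
import plotly.express as px

df = pd.read_csv("fema_events_by_state.csv")  # hypothetical columns: state, year, floods, ...

fig = px.choropleth(df[df.year == 2018], locations="state",
                    locationmode="USA-states", color="floods",
                    scope="usa", title="Floods by state, 2018")

app = dash.Dash(__name__)
app.layout = html.Div([dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run_server(debug=True)
```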
## Challenges we ran into
Finding yearly data on the occurrence of natural disaster events by state was difficult. Cleaning the data, reformatting it, and setting up Dash also took a while. We found ourselves unable to derive additional insights, such as relationships between natural disaster frequency and underlying infrastructural deficiencies and/or geopolitical factors. There didn't seem to be concrete data enabling these types of extrapolations.
## Accomplishments that we're proud of
We built this in less than 10 hours, after scrapping our initial idea. The graphs look decently aesthetic.
## What's next for NaturalDisasterVis
The app could benefit from other visualizations and insights. For example, some water sources have sensors to monitor water level in real time. If coupled with rain forecasts, we could generate flood predictions. We might be able to predict forest fires by looking at real time humidity and air pollution metrics. We should also add static information (such as what to buy for each type of natural disaster, and who to call).
|
## Inspiration
Neuro-Matter is an integrated social platform designed to combat not one, but 3 major issues facing our world today: Inequality, Neurological Disorders, and lack of information/news.
We started Neuro-Matter with the aim of helping people facing inequality at different levels of society. While inequality is often assumed to lead only to physical violence, its impacts at the neurological/mental level are left neglected.
Upon seeing the disastrous effects, we realized this is the need of the hour and have come up with Neuro-Matter to effectively combat these issues, in addition to the most pressing issue our world faces today: mental health!
## What it does
1. "Promotes Equality" and provides people the opportunity to get out of mental trauma.
2. Provides a hate-free social environment.
3. Helps people manage neurological disorders.
4. Provide individual guidance to support people with the help of our volunteers.
5. Provides reliable news/information.
6. Has a smart AI chatbot to assist you 24/7.
## How we built it
Overall, we used HTML, CSS, React.js, Google Cloud, Dialogflow, Google Maps, and Twilio's APIs. We used Google Firebase's Realtime Database to store, organize, and secure our user data. This data is used for login and signup on the service. The service's backend is built with Node.js, which serves the webpages and enables many useful functions. We have multiple pages as well, including the home page, profile page, signup/login pages, and a news/information/thought-sharing page.
## Challenges we ran into
We had a couple of issues with databasing, as the password authentication would only work some of the time. Moreover, since we used Visual Studio multiplayer for the first time, it was difficult, as we faced many VSCode issues (not code related). Since we were working in the same time zones, it was not so difficult for all of us to work together, but it was hard to get everything done on time and keep to a rigid working module.
## Accomplishments that we're proud of
Overall, we are proud to create a working social platform like this and are hopeful to take it to the next steps in the future as well. Specifically, each of our members is proud of their amazing contributions.
We believe in the module we have developed and are determined to take this forward even beyond the hackathon to help people in real life.
## What we learned
We learned a lot, to say the least!! Overall, we learned a lot about databasing and were able to strengthen our React.js, Machine Learning, HTML, and CSS skills as well. We successfully incorporated Twilio's APIs and were able to pivot and send messages. We have developed a smart bot that is capable of both text and voice-based interaction. Overall, this was an extremely new experience for all of us and we greatly enjoyed learning new things. This was a great project to learn more about platform development.
## What's next for Neuro-Matter
This was an exciting new experience for all of us and we're all super passionate about this platform and can't wait to hopefully unveil it to the general public to help people everywhere by solving the issue of Inequality.
|
losing
|
## Inspiration
The three of us all love music and podcasts. Coming from very diverse backgrounds, we all enjoy listening to content from a variety of places all around the globe. We wanted to design a platform where users can easily find new content from anywhere to enable cultural interconnectivity.
## What it does
TopCharts allows you to place a pin anywhere in the world using our interactive map, and shows you the top songs and podcasts in that region. You then can follow the link directly to Spotify and listen!
## How we built it
We used the MapBox API to display an interactive map, and also reverse GeoLocate the area in which the pin is dropped. We used the Spotify API to query data based on the geolocation. The app itself is built in React and is hosted through Firebase!
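A sketch of the reverse-geolocation step against Mapbox's Geocoding API, shown in Python for brevity even though the app itself is in React; the token and region type are placeholders:

```python
# Turn a dropped pin into a region name via Mapbox's Geocoding v5 endpoint.
import requests

def region_from_pin(lon: float, lat: float, token: str) -> str:
    url = f"https://api.mapbox.com/geocoding/v5/mapbox.places/{lon},{lat}.json"
    resp = requests.get(url, params={"access_token": token, "types": "country"})
    resp.raise_for_status()
    features = resp.json()["features"]
    return features[0]["text"] if features else "unknown"
```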
## Challenges we ran into
Getting the MapBox API customized to our needs!
## Accomplishments that we're proud of
Making a fully functional website with clean UI/UX within ~30 hours of ideation. We also got to listen to a lot of cool podcasts and songs from around the world while testing!
## What we learned
How robust the MapBox API is. It is so customizable, which we love! We also learned some great UI/UX tips from Grace Ma (Meta)!
## What's next for TopCharts
Getting approval from Spotify for an API quota extension so anyone across the world can use TopCharts!
Team #18 - Ben (benminor#5721), Graham (cracker#4700), Cam (jeddy#1714)
|
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
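A stripped-down sketch of the order endpoint on the Flask side; extract_items() is a placeholder for the AI parsing step, and the route and field names are assumptions:

```python
# Receive a transcript, parse it into structured items, and grow the order.
from flask import Flask, request, jsonify

app = Flask(__name__)
orders: list[dict] = []  # in-memory store for the demo

def extract_items(transcript: str) -> list[dict]:
    """Placeholder for the LLM call that turns speech into structured items."""
    raise NotImplementedError

@app.route("/order", methods=["POST"])
def order():
    transcript = request.json["transcript"]  # produced by the transcription step
    items = extract_items(transcript)        # e.g. [{"item": "burger", "size": "large"}]
    orders.extend(items)
    return jsonify({"added": items, "order_so_far": orders})
```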
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
|
## Inspiration
Imagine the perfect night out with friends, where every moment is filled with laughter, shared stories, and unforgettable memories. Yet, when it comes to the music, the experience often falls short. Unlike the limited group dynamics of a Spotify Jam, our shared queue with voting transforms the listening experience, making it inclusive and interactive.
It's about making everyone feel heard and creating a musical journey that reflects the tastes and moods of the entire group. By enhancing the group music experience, we're not just playing songs—we're creating moments and memories that resonate long after the music stops.
## What it does
Blast is a PWA (Progressive Web Application) built to reimagine the group music consumption experience.
1. You can start a blast session or join a room.
2. You have the ability to find songs from Spotify, then add a suggestion to be added to the queue
3. Every song on the realtime queue for the room has the ability to be "upvoted" or "downvoted"
4. If over half of the group dislikes your song, you'll be put "on blast"
Utilizing cloud-native technology, Blast is able to support an open forum to play, pause, skip, uplift, and shame music from Spotify, all in realtime.
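A minimal sketch of the vote-counting step, written with the Firestore Python client for readability (the production backend talks to Firestore from Next.js); the collection layout and field names are assumptions:

```python
# Atomic vote updates on a song in a room's realtime queue.
from google.cloud import firestore

db = firestore.Client()

def cast_vote(room_id: str, song_id: str, up: bool) -> None:
    song_ref = db.collection("rooms").document(room_id).collection("queue").document(song_id)
    field = "upvotes" if up else "downvotes"
    # Atomic increment avoids read-modify-write races between listeners.
    song_ref.update({field: firestore.Increment(1)})

def on_blast(song: dict, room_size: int) -> bool:
    """True when over half of the group dislikes the song."""
    return song.get("downvotes", 0) > room_size / 2
```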
## How we built it
* **Frontend**: Next.js, TypeScript, Tailwind CSS, Shadcn
* **Backend**: Google Cloud Firestore, Spotify API
## Challenges we ran into
* **Spotify API**: Truly a terrible Developer Experience. The documentation is very outdated and it was very difficult to integrate their low level explanations with our modern tech stack
## Accomplishments that we're proud of
We're really, really proud of integrating a lot of different technologies in a fully functioning, cohesive manner! This project also allowed all of us to step outside our comfort zones by taking on more responsibility to design, architect, and integrate different technologies together!
## What we learned
Our team was able to dive deep into many full stack practices including:
1. Spotify API debugging hell
2. Handling async operations in a type safe manner
3. NoSQL data modelling
4. OAuth with Spotify involving callbacks
5. Realtime Cloud Native functionalities and architecture
## What's next for Blast
|
winning
|
## Inspiration
We are a team of engineering science students with backgrounds in mathematics, physics and computer science. A common passion for the implementation of mathematical methods in innovative computing contexts and the application of these technologies to physical phenomena motivated us to create this project [Parallel Fourier Computing].
## What it does
Our project is a Discrete Fourier Transform [DFT] algorithm implemented in JavaScript for sinusoid spectral decomposition, with explicit support for parallel computing task distribution. This algorithm is called by a web page front-end that allows a user to program the frequency/periodicity of a sum of three sinusoids, to see this function on a graphical figure, and to calculate and display the resultant DFT for this sinusoid. The program successfully identifies the constituent fundamental frequencies of a sum of three sinusoids by use of this DFT.
## How We built it
This project was built in parallel, with some team members working on DCL integration, web page front ends and algorithm writing. The DFT algorithm used was initially prototyped in Python before being ported over to JavaScript for integration with the DCL network. We tested the function of our algorithm from a wide range of frequencies and sampling rates within the human spectrum of hearing. All team members contributed to component integration towards the end of the project, ensuring compliance with the DCL method of task distribution.
## Challenges We ran into
Though our team has an educational background in Fourier analysis, we were unfamiliar with the workflows and utilities of parallel computing systems. We were principally concerned with (1) how we can fundamentally divide the job of computing a Discrete Fourier Transform into a set of sequentially uncoupled tasks for parallel processing, and (2) how we implement such an algorithm design in the JavaScript foundation that DCL relies on. Initially, our team struggled to define clearly independent computing tasks that we could offload to parallel processing units to speed up our algorithm. We overcame this challenge when we realized that we could produce analytic functions for any partial sum term in our series and pass these exact functions off for processing in parallel. One challenge we faced when adapting our code to the task distribution method of the DCL system was writing a work function that was entirely self-contained without a dependence on external libraries or extraneously long procedural logic. To avoid library dependency, we wrote our own procedural logic to handle the complex number arithmetic that's needed for a Discrete Fourier Transform.
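Our prototype was in Python before the JavaScript port, so a Python sketch of one self-contained DFT term captures the idea; complex arithmetic is done by hand as (real, imaginary) pairs, just like in the final work function:

```python
# One DFT coefficient, using only the standard library.
import math

def dft_bin(samples, k):
    """Compute X[k] = sum_n x[n] * e^{-2*pi*i*k*n/N}.
    Each n-term is independent, which is what lets the sum be split across
    parallel workers."""
    n_total = len(samples)
    re, im = 0.0, 0.0
    for n, x in enumerate(samples):
        angle = -2.0 * math.pi * k * n / n_total
        re += x * math.cos(angle)  # real part of x[n] * e^{i*angle}
        im += x * math.sin(angle)  # imaginary part
    return re, im

def magnitude(re, im):
    """Spectral magnitude of one bin."""
    return math.sqrt(re * re + im * im)
```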
## Accomplishments that We're proud of
Our team successfully wrote a Discrete Fourier Transform algorithm designed for parallel computing uses. We encoded custom complex number arithmetic operations into a self-contained JavaScript function. We have integrated our algorithm with the DCL task scheduler and built a web page front end with interactive controls to program sinusoid functions and to graph these functions and their Discrete Fourier Transforms. Our algorithm can successfully decompose a sum of sinusoids into its constituent frequency components.
## What We learned
Our team learned about some of the constraints that task distribution in a parallel computing network can have on the procedural logic used in task definitions. Not having access to external JavaScript libraries, for example, required custom encoding of complex number arithmetic operations needed to compute DFT terms. Our team also learned more about how DFTs can be used to decompose musical chords into its fundamental pitches.
## What's next for Parallel Fourier Computing
Next steps for our project in the back-end are to optimize the algorithm to decrease the computation time. On the front-end we would like to increase the utility of the application by allowing the user to play a note and have the algorithm determine the pitches used in making the note.
#### Domain.com submission
Our domain name is <http://parallelfouriercomputing.tech/>
#### Team Information
Team 3: Jordan Curnew, Benjamin Beggs, Philip Basaric
|
## Inspiration
Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members was a former tennis enthusiast who has always strived to refine their skills. They soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills.
## What it does and how we built it
TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements, and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data using React.js, offering a comprehensive overview and further insights into the player's performance.
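As a simplified stand-in for the per-frame ball localization (the real pipeline uses pretrained neural networks, not this classic color-threshold approach; the HSV band is a rough guess for an optic-yellow ball):

```python
# Locate the ball in one frame by color mask and largest contour.
import cv2
import numpy as np

def find_ball(frame: np.ndarray) -> tuple[int, int] | None:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough HSV band for an optic-yellow tennis ball (tuned per video).
    mask = cv2.inRange(hsv, (25, 80, 80), (45, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), _radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return int(x), int(y)
```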
## Challenges we ran into
Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. The depth of the ball became a challenge since we were limited to one camera angle, but we were able to tackle it using machine learning techniques.
## Accomplishments that we're proud of
We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.
## What we learned
Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.
## What's next for TRACY
Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community.
|
## Inspiration
A love of music production, the obscene cost of synthesizers, and the Drive soundtrack
## What it does
In its simplest form, it is a synthesizer. It creates a basic wave using wave functions and runs it through a series of custom filters to produce a wide range of sounds. Finally the sounds are bound to a physical "keyboard" made using an Arduino.
## How we built it
The input driver and function generator are written in Python, using the numpy and pyaudio libraries to calculate wave functions and stream the result to the audio output.
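A bare-bones version of the function generator, before any filters or keyboard input are layered on:

```python
# A sine oscillator streamed straight to the audio device.
import numpy as np
import pyaudio

RATE = 44100

def sine(freq: float, seconds: float, volume: float = 0.5) -> np.ndarray:
    """Generate `seconds` of a sine wave at `freq` Hz as float32 samples."""
    t = np.arange(int(RATE * seconds)) / RATE
    return (volume * np.sin(2 * np.pi * freq * t)).astype(np.float32)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
stream.write(sine(440.0, 1.0).tobytes())  # one second of A4
stream.stop_stream(); stream.close(); p.terminate()
```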
## Challenges we ran into
- pyaudio doesn't play nice with multiprocessing
- multithreading wasn't as good an option because it doesn't properly parallelize due to Python's GIL
- parsing serial input from a constant stream led to a few issues
## What we learned
We learned a lot about realtime signal processing, the performance limitations of Python, and the ins and outs of creating a controller device from the hardware level to driver software.
## What's next for Patch Cable
-We'd like to rewrite the signal processing in a faster language. Python couldn't keep up with realtime transformation as well was we would have liked.
-We'd like to add a command line and visual interface for making the function chains to make it easier to make sounds as you go.
|
winning
|
## Inspiration
The internet is filled with user-generated content, and it has become increasingly difficult to manage and moderate all of the text that people are producing on a platform. Large companies like Facebook, Instagram, and Reddit leverage their massive scale and abundance of resources to aid in their moderation efforts. Unfortunately for small to medium-sized businesses, it is difficult to monitor all the user-generated content being posted on their websites. Every company wants engagement from their customers or audience, but they do not want bad or offensive content to ruin their image or the experience for other visitors. However, hiring someone to moderate or build an in-house program is too difficult to manage for these smaller businesses. Content moderation is a heavily nuanced and complex problem. It’s unreasonable for every company to implement its own solution. A robust plug-and-play solution is necessary that adapts to the needs of each specific application.
## What it does
That is where Quarantine comes in.
Quarantine acts as an intermediary between an app's client and server, scanning the bodies of incoming requests and "quarantining" those that are flagged. Flagging is performed automatically, using both pretrained content moderation models (from Azure and Moderation API) and an in-house machine learning model that adapts to meet the needs of the application's particular content. Once a piece of content is flagged, it appears in a web dashboard, where a moderator can either allow or block it. The moderator's labels are continuously used to fine-tune the in-house model. Together, the in-house and pretrained models form a robust meta-model.
## How we built it
Initially, we built an aggregate program that takes in a string and runs it through the Azure moderation and Moderation API services. After combining the results, we compare them against our machine learning model's output to make sure no other potentially harmful posts make it through our identification process. Then, that data is stored in our database. We built a clean, easy-to-use dashboard for the moderator using React and Material UI. It pulls the flagged items from the database and displays them on the dashboard. Once the moderator makes a decision, it is sent back to the database and the case is resolved. We incorporated this entire pipeline into a REST API, where our customers can pass their input through our programs and then access the flagged items on our website.
Users of our service don't have to change their code; they simply append our URL to their own API endpoints. Requests that aren't flagged are instantly forwarded along.
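A sketch of that intermediary in Flask; flag_score() and save_to_quarantine() are placeholders for the meta-model and dashboard storage, and the upstream URL is illustrative:

```python
# Forward the request upstream unless the meta-model flags the body.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
UPSTREAM = "https://customer-app.example.com"  # the customer's real endpoint

def flag_score(text: str) -> float:
    """Placeholder for the Azure + Moderation API + in-house meta-model."""
    raise NotImplementedError

def save_to_quarantine(body: dict, endpoint: str) -> None:
    """Placeholder: persist the flagged request for the review dashboard."""
    ...

@app.route("/<path:endpoint>", methods=["POST"])
def proxy(endpoint: str):
    body = request.get_json(force=True)
    if flag_score(str(body)) > 0.5:
        save_to_quarantine(body, endpoint)  # appears on the moderator dashboard
        return jsonify({"status": "quarantined"}), 202
    upstream = requests.post(f"{UPSTREAM}/{endpoint}", json=body)
    return upstream.content, upstream.status_code
```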
## Challenges we ran into
Developing the in-house machine learning model and getting it to run on the cloud proved to be a challenge, since the parameters and size of the in-house model are in constant flux.
## Accomplishments that we're proud of
We were able to make a super easy to use service. A company can add Quarantine with less than one line of code.
We're also proud of our adaptive content model, which constantly updates based on the latest content blocked by moderators.
## What we learned
We learned how to successfully integrate an API with a machine learning model, database, and front-end. We had learned each of these skills individually before, but we had to figure out how to combine them all.
## What's next for Quarantine
We have plans to take Quarantine even further by adding customization to how items are flagged and handled. Since spam is commonly routed through certain locations, we could analyze the regions harmful user-generated content is coming from. We are also keen on monitoring the activity streams of individual users as well as tracking requests in relation to each other (to detect mass spamming). Furthermore, we are curious about adding the surrounding context of the content, since it may be helpful in the moderator's decisions. We're also hoping to leverage the data we accumulate from content moderators to help monitor content across apps using shared labeled data behind the scenes. This would make Quarantine more valuable to companies as it monitors more content.
|
## Inspiration
Have you ever met someone, but forgot their name right afterwards?
Our inspiration for INFU comes from our own struggles to remember every detail of every conversation. We all deal with moments of embarrassment or disconnection when failing to remember someone’s name or details of past conversations.
We know these challenges are not unique to us, but actually common across various social and professional settings. INFU was born to bridge the gap between our human limitations and the potential for enhanced interpersonal connections—ensuring no details or interactions are lost to memory again.
## What it does
By attaching a camera and microphone to a user, we can record conversations with different people, transcribing the audio and categorizing it using facial recognition. From there, we upload these details to a database, have them summarized by an AI, and display them on our website and custom wrist wearable.
## How we built it
There are three main parts to the project. The first part is the hardware which includes all the wearable components. The second part includes face recognition and speech-to-text processing that receives camera and microphone input from the user's iPhone. The third part is storing, modifying, and retrieving data of people's faces, names, and conversations from our database.
The hardware comprises an ESP-32, an OLED screen, and two wires that act as touch buttons. These touch buttons act as record and stop recording buttons which turn on and off the face recognition and microphone. Data is sent wirelessly via Bluetooth to the laptop which processes the face recognition and speech data. Once a person's name and your conversation with them are extracted from the current data or prior data from the database, the laptop sends that data to the wearable and displays it using the OLED screen.
The laptop acts as the control center. It runs a backend Python script that takes in data from the wearable via Bluetooth and the iPhone via WiFi. The Python Face Recognition library then detects the speaker's face and takes a picture. Speech data is subsequently extracted from the microphone using the Google Cloud Speech-to-Text API and then parsed through the OpenAI API, allowing us to obtain the person's name and the discussion the user had with that person. This data gets sent to the wearable and the cloud database along with a picture of the person's face labeled with their name. Therefore, if the user meets the person again, their name and last conversation summary can be retrieved from the database and displayed on the wearable for the user to see.
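A condensed sketch of the recognition step, assuming the face_recognition package; the tolerance and data layout are assumptions:

```python
# Match a captured frame against known face encodings from the database.
import face_recognition

def identify(frame_path: str, known_faces: dict[str, list]) -> str | None:
    """known_faces maps a person's name to their stored face encoding."""
    image = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face in frame
    names = list(known_faces)
    matches = face_recognition.compare_faces(
        [known_faces[n] for n in names], encodings[0], tolerance=0.6)
    return next((n for n, hit in zip(names, matches) if hit), None)
```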
## Accomplishments that we're proud of
* Creating an end product with a complex tech stack despite various setbacks
* Having a working demo
* Organizing and working efficiently as a team to complete this project over the weekend
* Combining and integrating hardware, software, and AI into a project
## What's next for Infu
* Further optimizing our hardware
* Develop our own ML model to enhance speech-to-text accuracy to account for different accents, speech mannerisms, languages
* Integrate more advanced NLP techniques to refine conversational transcripts
* Improve user experience by employing personalization and privacy features
|
## Inspiration
This past year, we've seen the effects of uncontrolled algorithmic amplification on society. From widespread [riot-inciting misinformation on Facebook](https://www.theverge.com/2020/3/17/21183341/facebook-misinformation-report-nathalie-marechal) to the explosive growth of TikTok - a platform that serves content [entirely on a black-box algorithm](https://www.wired.com/story/tiktok-finally-explains-for-you-algorithm-works/), we've reached a point where [social media algorithms rule how we see the world](https://www.wsj.com/articles/social-media-algorithms-rule-how-we-see-the-world-good-luck-trying-to-stop-them-11610884800) - and it seems like we've lost our individual ability to control these incredibly intricate systems.
From a consumer's perspective, it's difficult to tell what your social media feed prioritizes – sometimes, it shows you content related to products you might have searched the internet for; other times, you might see [eerily accurate friend recommendations](https://www.theverge.com/2017/9/7/16269074/facebook-tinder-messenger-suggestions). If you've watched [The Social Dilemma](https://www.thesocialdilemma.com), you might think that your Facebook feed is managed directly by Mark Zuckerberg & his three dials: engagement, growth, and revenue.
The bottom line: we need significant innovation around the algorithms that power our digital lives.
## Feeds: an Open-Sourced App Store for Algorithmic Choice
On Feeds, you're in control over what information is prioritized. You're no longer bound to a hyper-personalized engine designed to maximize your engagement: instead, you have the ability to set your own utility function & design your own feed.
## How we built it
We built Feeds on a React Native frontend & serverless Google Cloud Functions backend! Our app pulls data live from Twitter using [Twint](https://pypi.org/project/twint/) (an open-source Twitter OSINT tool). To prototype our algorithms, we employed a variety of techniques to prioritize different emotions & content –
* "Positivity" - optimized for positive & optimistic content (powered by [OpenAI](http://openai.com))
* "Virality" - optimized for viral content (powered by Twint)
* "Controversy" - optimized for controversial content (powered by [Textblob/NLTK](https://textblob.readthedocs.io/en/dev/))
* "Verified" - optimized for high-quality & verified content
* "Learning" - optimized for educational content
Additionally, to add to the ability to break out of your own echo chamber, we added a feature that puts you into the social media feed of influencers – so if you want to see exactly what Elon Musk or Vice President Kamala Harris sees on Twitter, you can switch to those Feeds with just a tap!
## Challenges we ran into
Twitter's hardly a developer-friendly platform - scraping Tweets to use for our prototype was probably one of our most challenging tasks! We also ran into many algorithmic design choices (e.g. how to detect "controversy") - and drew inspiration from a variety of resource papers & open-source projects.
## Accomplishments that we're proud of
We built a functioning full-stack product over the course of ~10 hours - and we truly believe this emphasis on algorithmic choice is one critical component to the future of social media!
## What we learned
We learned a lot about natural language processing & the different challenges when it comes to designing algorithms using cutting-edge tools like GPT-3!
## What's next for Feeds
We'd love to turn this into an open-sourced platform that plugs into different content sources -- and allows anyone (any developer) to create a custom Feed & share it with the world!
|
winning
|
## Inspiration
Designing an app that all of our friends can enjoy! Music is best enjoyed with people but only one person has control. Our application aims to create awesome memories and get the people going!
## What it does
Spartyfy is a crowd engaging party app that allows everyone to suggest songs. Using votes, the people who consistently pick awesome tunes get more control of the speaker.
## How we built it
React, Node/Express, Azure, Spotify API
## Challenges we ran into
One of the challenges we had was making our UI look good on both mobile and desktop, an essential but difficult-to-implement feature for our application. Another difficult task was effectively using the Spotify API without a dedicated Node client, especially for authorization.
## Accomplishments that we're proud of
Our fourth team member Jason was participating in his first software hack. He was able to put in some awesome commits and learn something new; hats off to him. Because of great contributions by everyone, we were able to get the project done on time.
## What we learned
When racing against the clock, sometimes you need to trade the most elegant solution for a quick-and-dirty implementation! The most important thing for a hackathon team is to put their heads together and brainstorm.
## What's next for Spartyfy
The next step is to integrate authentication and individual accounts for everyone who uses the application. Our goal is for people to carry Spartyfy in their pocket wherever they go so they can have a great time, and discover new music.
|
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we are trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :)
|
# Links
Youtube: <https://youtu.be/VVfNrY3ot7Y>
Vimeo: <https://vimeo.com/506690155>
# Soundtrack
Emotions and music meet to give a unique listening experience where the songs change to match your mood in real time.
## Inspiration
The last few months haven't been easy for any of us. We're isolated and getting stuck in the same routines. We wanted to build something that would add some excitement and fun back to life, and help people's mental health along the way.
Music is something that universally brings people together and lifts us up, but it's imperfect. We listen to our same favourite songs and it can be hard to find something that fits your mood. You can spend minutes just trying to find a song to listen to.
What if we could simplify the process?
## What it does
Soundtrack changes the music to match people's mood in real time. It introduces them to new songs, automates the song selection process, brings some excitement to people's lives, all in a fun and interactive way.
Music has a powerful effect on our mood. We choose new songs to help steer the user towards being calm or happy, subtly helping their mental health in a relaxed and fun way that people will want to use.
We capture video from the user's webcam, feed it into a model that can predict emotions, generate an appropriate target tag, and use that target tag with Spotify's API to find and play music that fits.
If someone is happy, we play upbeat, "dance-y" music. If they're sad, we play soft instrumental music. If they're angry, we play heavy songs. If they're neutral, we don't change anything.
## How we did it
We used Python with the OpenCV and Keras libraries, as well as Spotify's API; a condensed sketch follows the steps below.
1. Authenticate with Spotify and connect to the user's account.
2. Read webcam.
3. Analyze the webcam footage with OpenCV and a Keras model to recognize the current emotion.
4. If the emotion lasts long enough, send Spotify's search API an appropriate query and add it to the user's queue.
5. Play the next song (with fade out/in).
6. Repeat 2-5.
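A condensed sketch of steps 2-5; the model file, emotion labels, and search queries are illustrative, with spotipy handling the Spotify calls:

```python
# Read one frame, predict the emotion, and queue a mood-matched track.
import cv2
import numpy as np
import spotipy
from spotipy.oauth2 import SpotifyOAuth
from tensorflow.keras.models import load_model

EMOTION_TO_QUERY = {"happy": "dance pop", "sad": "soft instrumental", "angry": "heavy metal"}

model = load_model("emotion_model.h5")  # hypothetical pretrained Keras model
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))
cam = cv2.VideoCapture(0)

ok, frame = cam.read()
if ok:
    face = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (48, 48))
    probs = model.predict(face.reshape(1, 48, 48, 1) / 255.0)
    emotion = ["angry", "happy", "sad", "neutral"][int(np.argmax(probs))]  # label order is an assumption
    if emotion != "neutral":
        track = sp.search(q=EMOTION_TO_QUERY[emotion], type="track", limit=1)["tracks"]["items"][0]
        sp.add_to_queue(track["uri"])  # step 4: queue a mood-matched song
        sp.next_track()                # step 5: play it
```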
For the web app component, we used Flask and tried to use Google Cloud Platform with mixed success. The app can be run locally but we're still working out some bugs with hosting it online.
## Challenges we ran into
We tried to host it in a web app and got it running locally with Flask, but had some problems connecting it with Google Cloud Platform.
Making calls to the Spotify API pauses the video. Reducing the calls to the API helped (faster fade in and out between songs).
We tried to recognize a hand gesture to skip a song, but ran into some trouble combining that with other parts of our project, and finding decent models.
## Accomplishments that we're proud of
* Making a fun app with new tools!
* Connecting different pieces in a unique way.
* We got to try out computer vision in a practical way.
## What we learned
How to use the OpenCV and Keras libraries, and how to use Spotify's API.
## What's next for Soundtrack
* Connecting it fully as a web app so that more people can use it
* Allowing for a wider range of emotions
* User customization
* Gesture support
|
partial
|
## Inspiration
We both love karaoke, but there are lots of obstacles:
* going to a physical karaoke is expensive and inconvenient
* youtube karaoke videos not always matches your vocal (key) range, and there is also no playback
* existing karaoke apps have limited songs, not flexible based on your music taste
## What it does
Vioke is a karaoke web-app that supports pitch-changing, on/off vocal switching, and real-time playback, simulating the real karaoke experience. Unlike traditional karaoke machines, Vioke is accessible anytime, anywhere, from your own devices.
## How we built it
**Frontend**
The frontend is built with React, and it handles settings including on/off playback, on/off vocal, and pitch changing.
**Backend**
The backend is built in Python. It leverages a source-separation ML library to extract instrumental tracks.
It also uses a pitch-shifting library to adjust the key of a song.
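The write-up doesn't name the specific libraries, so this minimal sketch assumes Spleeter for vocal separation and librosa for pitch shifting; the file paths and helper name are illustrative only.

```python
from pathlib import Path

import librosa
import soundfile as sf
from spleeter.separator import Separator

def make_karaoke_track(song_path: str, out_dir: str, n_steps: int) -> None:
    # Split the song into vocals + accompaniment; Spleeter writes
    # <out_dir>/<song name>/accompaniment.wav and vocals.wav.
    Separator("spleeter:2stems").separate_to_file(song_path, out_dir)

    stem_dir = Path(out_dir) / Path(song_path).stem
    # Shift the instrumental by n_steps semitones to match the singer's range.
    y, sr = librosa.load(stem_dir / "accompaniment.wav", sr=None)
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(stem_dir / "accompaniment_shifted.wav", shifted, sr)
```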
## Challenges we ran into
* Playback latency
* Backend library compatibility conflicts
* Integration between frontend and backend
* Lack of GPU / computational power for audio processing
## Accomplishments that we're proud of
* We were able to learn and implement audio processing, an area we did not have experience with before.
* We built a product that can be used in the future.
* Scrolling lyrics is epic
* It works!!
## What's next for Vioke
* Caching processed audio to eventually create a data source that we can leverage to reduce processing time.
* Train models for source separation in other languages (we found that the pre-built library mostly just supports English vocals).
* If time and resources allow, we can scale it to a platform where people can share their karaoke playlists and post their covers.
|
## Inspiration
Today Instagram has become a huge platform for social activism and encouraging people to contribute to different causes. I've donated to several of these causes but have always been met with a clunky UI that takes several minutes to fully fill out. With the donation inertia already so high, it makes sense to simplify the process and that's exactly what Activst does.
## What it does
It lets social media users create a personalized profile of the causes they support and set donation goals. Each cause has a description of the rationale behind the movement and details of where the donation money will be spent. The user can then specify how much they want to donate and finish the process in one click.
## How we built it
ReactJS, Firebase Hosting, Google Pay, Checkbook API, Google Cloud Functions (Python)
## Challenges we ran into
It's very difficult to facilitate payments directly to donation providers and create a one-click process to do so, as many of the donation providers require specific information from the donor. Using Checkbook's API simplified this process, since we could simply send a check to the organization's email. We also ran into CORS issues.
## What's next for Activst
Add in full payment integration and find a better way to complete the donation process without needing any user engagement. Launch, beta test, iterate, repeat. The goal is to have instagram users have an activst url in their instagram bio.
|
## Inspiration
We wanted to solve a problem that was real for us, something that we could get value out of. We decided upon Vocally as it solves an issue faced by a lot of people during job interviews, presentations, and other occasions which include speaking in a clear and concise manner. The problem was that it takes a long time to record yourself and re-listen to it just to spot any sentence fillers like "um" or "like". We would like to make it easier to display statistics of one's speech.
## What it does
The user clicks the record button and starts speaking. The application first converts speech to text using React's built-in speech recognition. After analyzing the results with various text processing techniques (e.g. sentiment analysis), it displays feedback.
## How we built it
* First, we needed to see how keywords could be extracted from an audio recording in the back-end. We settled with React's speech-to-text feature.
* Next, we created API endpoints in Flask (a python web framework) for the React app to make requests from.
* Fuzzy string matching, grammatical, and sentiment analysis were used to process and return the stats to the user using data visualization.
* The last task was deployment to the pythonanywhere.com domain for demo testing purposes.
## Challenges I ran into
Using Flask as an API was easy, but we initially tried to host it on GCP, which proved difficult because our firewall rules were not configured properly. We moved on to pythonanywhere.com for hosting. For the front-end, we first looked at the Flutter framework so the application could be mobile-accessible, but the framework was introduced in 2018 and there were a lot of configuration issues that needed to be resolved.
## Accomplishments that we are proud of
Getting the sound recorder to work on the front-end took longer than expected, but the end result was very satisfying. We're proud that we actually achieved creating an end-to-end solution.
## What I learned
Exploring different framework options like Flutter, in the beginning, was a journey for us. The API that was created needed to delve deeper into the python programming language. We learned about various syntactical and natural language processing techniques.
## What's next for Vocally
We may re-explore the concept of natural language processing, perhaps build our own algorithm from scratch and do more over a longer time period.
|
winning
|
## Inspiration
We were inspired by the theme of exploration to better explore our communities and the events and new people that we can reach out to.
## What it does
Our web app uses a map where users can drop markers with information about events, sports games, parties, bar nights etc. The goal here is to inform users of nearby events and allow them to connect with others by posting their own events as well.
## How we built it
We built our project in JavaScript. We used the Leaflet library for the map and Express for the backend.
## Challenges we ran into
Leaflet was a library that we had never seen before and it took a long time to get used to using it. Furthermore, integrating it within our project was no easy task.
## Accomplishments that we're proud of
We're proud of creating an interesting project that we're actually passionate about and have plans on continuing work on it in the future. We believe we did a great job creating a complex and unique web app.
## What we learned
We learned a lot, especially about integrating the different parts of the project to create the final product. It was not an easy process, but we gained a lot of transferable knowledge along the way.
## What's next for GoHere
We have plans to add a couple more features to GoHere that we didn't have time to add within the hackathon. We want a user verification system for making and removing posts, as well as a chat feature.
|
### What was our inspiration?
This idea came to us last week when the garbage truck missed our whole street. We were frustrated, however, nobody called the City because we did not want to go through the trouble of getting in contact with the city via phone. So, for this hackathon, we decided to create a web application that could display the issues people have to the city council.
### What it does?
This web application allows the user to view a map with a layer of pins displaying all of the issues people in their communities have! The more issues there are in a certain area, the larger the clusters grow, and the most severe areas get red clusters. If the user wants to add their own issue, they can simply press the New Post button and go to a menu where they can fill out their name, describe the issue, and upload a quick picture. The map updates instantly.
### How we built it?
We used Node.js to make the whole web application, and all the data was stored in a Firebase database. We made two webpages, one for the Map and one for a New Post. The New Post page uploads the user's name, a description of the problem, the location (latitude and longitude), and the image to the database. The map then reads the data off the database and shows all the pins.
### Challenges we ran into?
The major challenge we ran into was reading from and writing to the database. It was very picky with its syntax and the way the data was read/written. However, after hours and hours of research we learned the skills needed to set the database up properly. Another challenge was deciding which platform to build on: we first chose React Native without any prior experience and very quickly ran into many obstacles, so we switched to Node.js, a platform we had a little knowledge about. Because of our expertise with Java, we were able to quickly grasp the skills needed to create this web application.
### Accomplishments that we're proud of
We are very proud that we got the database to work with our code. It took us over 6 hours to properly set it up and when it worked we were overwhelmed with excitement! We are also pretty proud of the web application itself because it was the first web application we have ever made and it actually turned out really nice!
### What we learned
We learned a lot about web application development during this venture. This was a clear case of learning by doing, and we are pretty proud of that. I (Samandeep) had only heard of Node.js before, and last night I blindly started making a Node.js web application. We learned crucial things about how a web application interfaces with a database and how they work together to make a seamlessly fluid application!
### What's next for Mapped Ideas
Our goal for this application is to turn it into a mobile application available on both the App Store and Google Play Store! We want to make it super easy for users, and what's easier than an app? We also want to implement a register and login page so users can track their issues and see whether the city is doing anything about them!
|
## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution
## What it does
It helps developers find projects to work on, and helps project leaders find group members.
By using data from GitHub commits, it can determine what kinds of projects a person is suited for.
## How we built it
We decided on building an app for the web, then chose a graphql, react, redux tech stack.
## Challenges we ran into
The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed. Authentication was hard to implement, since we had to try a number of approaches to get it to work. The last challenge was determining how to relate users to the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show.
## What we learned
We learned that every API brings its own unique challenges.
## What's next for Hackr\_matchr
Scaling up is next: supporting more kinds of projects, more robust matching algorithms, and higher user capacity.
|
losing
|
## Presentation + Award
See the presentation and awards ceremony here: <https://www.youtube.com/watch?v=jd8-WVqPKKo&t=351s&ab_channel=JoshuaQin>
## Inspiration
Back when we first came to the Yale campus, we were stunned by the architecture and the public works of art. One monument in particular stood out to us - the *Lipstick (Ascending) on Caterpillar Tracks* in the Morse College courtyard, for its oddity and its prominence. We learned from fellow students about the background and history behind the sculpture, as well as more personal experiences on how students used and interacted with the sculpture over time.
One of the great joys of traveling to new places is to learn about the community from locals, information which is often not recorded anywhere else. From monuments to parks to buildings, there are always interesting fixtures in a community with stories behind them that would otherwise go untold. We wanted to create a platform for people to easily discover and share those stories with one another.
## What it does
Our app allows anybody to point their phone camera at an interesting object, snap a picture of it, and learn more about the story behind it. Users also have the ability to browse interesting fixtures in the area around them, add new fixtures and stories by themselves, or modify and add to existing stories with their own information and experiences.
In addition to user-generated content, we also wrote scripts that scraped Wikipedia for geographic location, names, and descriptions of interesting monuments from around the New Haven community. The data we scraped was used both for testing purposes and to serve as initial data for the app, to encourage early adoption.
## How we built it
We used a combination of GPS location data and Google Cloud's image comparison tools to take any image snapped of a fixture and identify in our database what the object is. Our app is able to identify any fixture by first considering all the known fixtures within a fixed radius around the user, and then considering the similarity between known images of those fixtures and the image sent in by the user. Once we have identified the object, we provide a description of the object to the user. Our app also provides endpoints for members of the community to contribute their knowledge by modifying descriptions.
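A minimal sketch of that first, geographic filtering step might look like the following; the fixture record layout and the 1 km radius are assumptions, and the actual radius and data store (Redis) differ.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_fixtures(user_lat, user_lon, fixtures, radius_km=1.0):
    """Keep only fixtures within radius_km of the user before running image comparison."""
    return [f for f in fixtures
            if haversine_km(user_lat, user_lon, f["lat"], f["lon"]) <= radius_km]
```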
Our client application is a PWA written in React, which allows us to quickly deploy a lightweight and mobile-friendly app on as many devices as possible. Our server is written in Flask and Python, and we use Redis for our data store.
We used GitHub for source control and collaboration and organized our project by breaking it into three layers and providing each their separate repository in a GitHub organization. We used GitHub projects and issues to keep track of our to-dos and assign roles to different members of the team.
## Challenges we ran into
The first challenge that we ran into is that Google Cloud's image comparison tools were designed to recognize products rather than arbitrary images, which still worked well for our purposes but required us to implement workarounds. Because products couldn't be tagged by geographic data and could only be tagged under product categories, we were unable to optimize our image recognition to a specific geographic area, which could pose challenges to scaling. One workaround that we discussed was to implement several regions with overlapping fixtures, so that the image comparisons could be limited to any given user's immediate surroundings.
This was also the first time that many of us had used Flask before, and we had a difficult time choosing an appropriate architecture and structure. As a result, the integration between the frontend, middleware, and AI engine has not been completely finished, although each component is fully functional on its own. In addition, our team faced various technical difficulties throughout the duration of the hackathon.
## Accomplishments that we're proud of
We're proud of completing a fully functional PWA frontend, for effectively scraping 220+ locations from Wikipedia to populate our initial set of data, and for successfully implementing the Google Cloud's image comparison tools to meet our requirements, despite its limitations.
## What we learned
Many of the tools that we worked on in this hackathon were new to the members working on them. We learned a lot about Google Cloud's image recognition tools, progressive web applications, and Flask with Python-based web development.
## What's next for LOCA
We believe that our project is both unique and useful. Our next steps are to finish the integration between our three layers, add authentication and user roles, and implement a Wikipedia-style edit history record in order to keep track of changes over time. We would also want to add features to the app that would reward members of the community for their contributions, to encourage active participants.
|
## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more so than ever, we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.
## What it does
Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps so that you can follow along one line at a time.
## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server.
On the backend, we used the Python libraries scikit-image and Matplotlib to create visualizations of the lines that make up the image. We broke the process into frames and adjusted the features of the image to progressively create a more detailed image.
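The write-up doesn't spell out how the frames are produced, so the sketch below assumes one simple approach: run Canny edge detection from scikit-image with a decreasing `sigma`, so each successive frame reveals finer lines. The frame count and sigma range are placeholders.

```python
import matplotlib.pyplot as plt
from skimage import color, feature, io

def sketch_steps(image_path: str, n_steps: int = 15) -> None:
    """Save n_steps frames, each revealing progressively finer lines of the image."""
    gray = color.rgb2gray(io.imread(image_path))
    # Large sigma keeps only the boldest outlines; small sigma adds fine detail.
    sigmas = [4.0 - 3.5 * i / (n_steps - 1) for i in range(n_steps)]
    for step, sigma in enumerate(sigmas, start=1):
        edges = feature.canny(gray, sigma=sigma)
        plt.imsave(f"step_{step:02d}.png", edges.astype(float), cmap="gray_r")
```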
## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon found our way around fixing any importing errors and were able to utilize it effectively.
## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we haven't used earlier and grinding the project through until the end!😎
## What we learned
We learned a lot about personal working styles, the integration of different components on the front and back end side, as well as some new possible projects we would want to try out in the future!
## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience!
|
## Inspiration
Many people want to find ways to recycle more, make donations, find charities to support, visit local health clinics or nonprofits like Planned Parenthood, or support environmental causes, but don't know how or where to look. This often means searching for a place to donate clothes in your city, or a place that accepts certain recyclable materials like metals. This app solves that problem by putting the locations of all such organizations in one place.
## What it does
The app includes a map where organizations (incentivized because they want to reach more people) and individuals can place a pin on established places (i.e. a junkyard or a building housing a health clinic), upload or take photos of the place, and add comments about it or other places. There are different maps based on interest, like a Nonprofit map, Donations map, Volunteer map, and Health map, plus a Profile view where pins from all maps can be seen.
Social media is very important to this app. It leverages social media by allowing users to log in with Facebook and post a comment about a location to their wall. To foster further discussion around social good, there is a section of the app where users can chat about these issues. It was inspired by the app Waze, where users can comment on traffic in real time; here, users can comment on different issues in real time.
## Challenges I ran into
## How I built it
Android app written in Java. Used Parse for the backend and Facebook APIs for login and for sharing a post to the user's Facebook wall. Used the Google Maps API to pin locations to maps.
## Accomplishments that I'm proud of
All of the special features on the map, such as filtering by date, shaking the device for a different version of the map (i.e. hybrid), and creating a chat section so that users can communicate. Also sharing to Facebook, as social media sharing is an important part of the app.
## What I learned
Setting up Parse database, creating functions to both take a photo with the app AND upload from existing photo library on phone.
## What's next for Contribute
Monitoring what is commented/posted.
github link includes code to a completely different project, the most recent commit is my project for HackPrinceton
|
winning
|
## Inspiration
We're 4 college freshmen that were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). As all of us are currently learning online through various video lecture platforms, we found out that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format.
## What it does
"Summaread" automatically captures lecture content using an advanced AI NLP pipeline to automatically generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google cloud API to generate a list of Key Topics and links to Wikipedia to encourage further exploration of lecture content.
## How we built it
Our project is comprised of many interconnected components, which we detail below:
**Lecture Detection**
Our product is able to automatically detect when lecture slides change to improve the performance of the NLP model in summarizing results. This tool uses the Google Cloud Platform API to detect changes in lecture content and records timestamps accordingly.
**Text Summarization**
We use the Hugging Face summarization pipeline to automatically summarize groups of text that fall within a certain word-count range. This is repeated across every group of text previously generated in the Lecture Detection step.
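A minimal sketch of that step with the Hugging Face pipeline might look like the following; the word-count window and output lengths are placeholders, not the tuned values used in the app.

```python
from transformers import pipeline

# One summarizer instance, reused across all slide-level chunks.
summarizer = pipeline("summarization")

def summarize_chunks(chunks, min_words=40, max_words=400):
    """Summarize each transcript chunk that falls inside the word-count window."""
    outlines = []
    for chunk in chunks:
        n_words = len(chunk.split())
        if not (min_words <= n_words <= max_words):
            outlines.append(chunk)  # too short/long to summarize usefully
            continue
        result = summarizer(chunk, max_length=60, min_length=15, do_sample=False)
        outlines.append(result[0]["summary_text"])
    return outlines
```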
**Post-Processing and Formatting**
Once the summarized content is generated, the text is processed into a set of coherent bullet points and split by sentences using Natural Language Processing techniques. The text is also formatted for easy reading by including “sub-bullet” points that give a further explanation into the main bullet point.
**Key Concept Suggestions**
To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide wikipedia links accordingly. Some examples of Key Concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases.
**Front-End**
The front end of our website was set up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model.
## Challenges we ran into
1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points.
2. Our NLP model is quite large, which made it difficult to host on cloud platforms
## Accomplishments that we're proud of
1) Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
2) Working on an unsolved machine learning problem (lecture simplification)
3) Real-time text analysis to determine new elements
## What we learned
1) First time for multiple members using Flask and doing web development
2) First time using Google Cloud Platform API
3) Running deep learning models makes my laptop run very hot
## What's next for Summaread
1) Improve our summarization model through improving data pre-processing techniques and decreasing run time
2) Adding more functionality to generated outlines for better user experience
3) Allowing for users to set parameters regarding how much the lecture is condensed by
|
## What it does
"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points.
## How We built it
Our project is comprised of many interconnected components, which we detail below:
#### Formatting Engine
To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.
#### Speech-to-text
We use Google's speech-to-text (through the Web Speech API) to process audio from the microphone of the laptop. Mobile phones currently do not support the continuous-audio part of the spec, so we process audio on the presenter's laptop instead. Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over WebSockets to be processed.
#### Topic Analysis
Fundamentally we needed a way to determine whether a given sentence included a request for an image or not. So we gathered a repository of sample sentences from BBC news articles for "no" examples, and manually curated a list of "yes" examples. We then used Facebook's deep learning text classification library, fastText, to train a custom NN that could perform text classification.
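A stripped-down version of that classification step with the fastText Python bindings could look like this; the training-file name, labels, and confidence threshold are illustrative assumptions.

```python
import fasttext

# Training file in fastText's supervised format, e.g.:
#   __label__image and here you can see a picture of a golden retriever
#   __label__no_image the economy grew by two percent last quarter
model = fasttext.train_supervised(input="sentences.train")

labels, probs = model.predict("here is a photo of the eiffel tower")
if labels[0] == "__label__image" and probs[0] > 0.7:
    print("treat this sentence as an image request")
```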
#### Image Scraping
Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen.
#### Graph Generation
Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time.
#### Sentence Segmentation
When we receive text back from the speech-to-text API, it doesn't naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis) some trouble because the text is grammatically incorrect. We used a sequence-to-sequence transformer architecture, *seq2seq*, and transfer-learned a new head capable of classifying the borders between sentences. This let us add punctuation back into the text before the rest of the processing pipeline.
#### Text Title-ification
Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
#### Text Summarization
When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous.
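The exact pruning rules aren't published, but the general idea of keeping a root verb and its core dependents can be sketched with spaCy as follows; the dependency-label set and helper name are assumptions, and spaCy stands in here for the custom analysis described above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Dependency labels treated as "core" to the point being made (an assumption;
# a real pipeline would tune this list).
CORE_DEPS = {"nsubj", "nsubjpass", "dobj", "aux", "neg", "prep", "pobj", "attr"}

def bullet_point(sentence: str) -> str:
    """Prune a spoken sentence down to its root verb and core arguments."""
    doc = nlp(sentence)
    root = next(tok for tok in doc if tok.dep_ == "ROOT")
    keep = {root}
    for child in root.children:
        if child.dep_ in CORE_DEPS:
            keep.update(child.subtree)  # keep the whole core branch
    return " ".join(tok.text for tok in sorted(keep, key=lambda t: t.i))
```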
#### Mobile Clicker
Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets.
#### Internal Socket Communication
In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides.
## Challenges We ran into
* Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis.
* The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop.
## Accomplishments that we're proud of
* Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
* Working on an unsolved machine learning problem (sentence simplification)
* Connecting a mobile device to the laptop browser’s mic using WebSockets
* Real-time text analysis to determine new elements
## What's next for ImpromPPTX
* Predict what the user intends to say next
* Scraping Primary sources to automatically add citations and definitions.
* Improving text summarization with word reordering and synonym analysis.
|
## Inspiration
We were inspired by a problem we encountered often - a lack of good coding practice problems for niche topics that are often not mentioned in documentation and books.
## What it does
Sit Down and Study is a platform designed to help you improve your programming skills through LeetCode-style questions. You can write and run your code directly on the platform, using an integrated code execution engine, and receive instant feedback. The platform also provides on-demand hints and reading materials to help you learn as you go.
## How we built it
We built it using React, ExpressJS, MongoDB, TypeScript, OpenAI API, Monaco Editor, and Judge0. All of this is interconnected through DNS management via Cloudflare and reverse proxies on Caddy server.
## Challenges we ran into
* Getting Judge0 to work for multiple languages
* A state management issue where our useEffect hook kept executing twice each time it rendered
* Prompt engineering to account for different languages and topic difficulty levels
## Accomplishments that we're proud of
* We have a fully functional product!
* We support Java, Python, and JavaScript (Node.js), and have a working online code editor and code judging system using Judge0
* We were able to create a modern interface reminiscent of LeetCode's code editor with our additions
## What we learned
* Working with worker threads, Monaco Editor, and Judge0
* Working with OpenAI's API and prompt crafting
* CORS is a nightmare
## What's next for Sit Down And Study
* Adding question history support using MongoDB for better context and more diverse question generation
* Refine the prompts for better starter code generation, as it is still hit or miss sometimes
* Add features to guide a user in advancing their learning, by using data from what topics they have covered and suggesting next topics
|
winning
|
## What it does
Lil' Learners is a fun new learning tool for students from kindergarten through early elementary school. Teachers can create classes for their students, take notes on each student's learning, strengths, and weaknesses, and, along with parents, track student progress. Students are assigned classes based on what their teachers want them to practice and are presented with a variety (in the future) of interactive and fun games that take the teacher's notes and generate questions presented through those games. Students gain points for answering questions correctly while playing, and get an incentive to keep playing, and in turn studying, by owning virtual islands they can customize by buying cosmetic items with the points earned from studying.
## How we built it
Using OAuth and a MongoDB database, Lil' Learners is a Flask-based web application whose structural backbone is the accounts-and-courses class hierarchy. We created classes for all the types of accounts and courses, and wrote functions that check for duplicate accounts by both username and email, automatically save accounts to the database, and attach courses to teachers and students (and even children to their parents) upon instantiation. On the front end, Lil' Learners uses Flask, HTML, and CSS to create a visually appealing and interactive GUI and web interface. A sketch of the duplicate-account check is shown below.
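This is a minimal PyMongo sketch of that duplicate check; the connection string, database, collection, and field names are placeholders rather than the project's actual schema.

```python
from pymongo import MongoClient

# Connection string and collection names are placeholders.
accounts = MongoClient("mongodb://localhost:27017")["lil_learners"]["accounts"]

def create_account(username: str, email: str, role: str) -> bool:
    """Insert a new account unless the username or email is already taken."""
    if accounts.find_one({"$or": [{"username": username}, {"email": email}]}):
        return False  # duplicate found, reject the sign-up
    accounts.insert_one({"username": username, "email": email, "role": role})
    return True
```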
## Challenges we ran into
Some challenges were making Auth0 work with the login system we developed, along with one of the biggest setbacks: the Three.js model we wanted to create to show off each student's island in an interactive, cool-looking way. Despite working at it for several hours, the APIs and documentation for displaying 3D models in a Flask and HTML environment seemed to be a lost cause.
## Accomplishments that we're proud of
We are super proud of Lil' Learners because, despite the variety of software and new and old skills that needed to be learned and merged together for it to work, we managed to create something we could show off that works to convey the proof of concept for our idea.
## What we learned:
We learned a lot about the interactions between different pieces of software and how to integrate them. Through the process of making Lil' Learners we had the opportunity to practice data management, back-end development, and general software development skills with MongoDB, OAuth, and GoDaddy, and to learn how they work and interact with other elements in a web application.
## What's next for Lil' Learners
We are hoping to expand Lil' Learners further: finishing the Three.js models, fully integrating OAuth with our account system, launching the web app on our GoDaddy domain, creating a larger variety of games, and providing better visualizations of student statistics along with better use of the points and adaptive learning systems.
|
## Inspiration
From our experience renting properties from private landlords, we think the rental experience is broken. Payments and communication are fragmented for both landlords and tenants. As tenants, we have to pay landlords through various payment channels, and that process is even more frustrating if you have roommates. On the other hand, landlords have trouble reconciling payments coming from these several sources.
We wanted to build a rental companion that initially tackles this problem of payments, but extends to saving time and headaches in other aspects of the rental experience. As we are improving convenience for landlords and tenants, we focused solely on a mobile application.
## What it does
* Allows tenants to make payments quickly in less than three clicks
* Chatbot interface that has information about the property's lease and state-specific rental regulation
* Landlords monitor the cash flow of their properties transparently and granularly
## How we built it
* Full stack React Native app
* Convex backend and storage
* Stripe credit card integration
* Python backend for Modal & GPT3 integration
## Challenges we ran into
* Choosing a payment method that is reliable and fast to implement
* Parsing lease agreements and training GPT3 models
* Deploying and running modal.com for the first time
* Ensuring transaction integrity and idempotency on Convex
## Accomplishments that we're proud of
* Shipped chat bot although we didn't plan to
* Pleased about the UI design
## What we learned
* Mobile apps are tough for hackathons
* Payment integrations have become very accessible
## What's next for Domi Rental Companion
* See if we provide value for target customers
|
## Inspiration
Conventional language learning apps like Duolingo don’t offer the ability to have freeform and dynamic conversations. Additionally, finding a language partner can be difficult and costly.
Lingua Franca tackles this head-on by offering intermediate to advanced language learners an immersive, interactive experience.
Although other apps exist that try to do the same thing, their interaction topics are hard-coded, meaning that you find yourself in the same dialogue over and over again. By leveraging LLMs, we're able to ensure that no two experiences are the same!
## What it does
You stumble into a foreign land and must communicate with the townsfolk in order to get by. As you talk with them, you must reply by recording yourself speaking in their language. Aided by LLMs, their responses dynamically change depending on what you say. Additionally, at some points in the conversation, they will give you checkpoints that you must accomplish, which encourages you to talk to other villagers.
After each of your responses, you can also see alternative phrases you could’ve said in response to the villager. Seeing these alternative responses can aid in learning vocabulary, grammar, and can help the user branch outside of their usual go-to phrases in the language they are learning.
Not only can you guide the conversation to whatever topic you’d like to practice, but to keep the user engaged, we’ve also added backstory to the characters in the village. Each time you talk with them, you can learn something more about their relationship with others in the village!
## How we built it
Development was done in Unity3D.
We used Wit.ai to capture and transcribe the user’s recorded responses.
Those transcribed responses were then fed into an LLM from Together.ai, along with extra information to give context and guide the LLM to prompt the user to complete checkpoints. The response from the LLM becomes the villager’s response to the player.
We created the world using assets from Unity Asset store, and the character models are from Mixamo.
## What we learned
Developing in VR was new to all team members, so developing for the Oculus Quest and using Unity3D was a great learning experience.
LLMs aren’t perfect, and working to mitigate poor, harmful, or unproductive responses is difficult. However, we took this challenge seriously while working on this app and carefully tuned our prompts to give the model the context it needed to avoid these situations.
## What's next for Lingua Franca
The next steps for this app include:
* Adding more languages
* Adding audio feedback from the villagers as an addition to text responses
* Adding new locations, characters, and worlds for more variation in the experience
|
partial
|
## Inspiration
Gone are the days of practicing public speaking in a mirror. You shouldn’t need an auditorium full of hundreds of people to be able to visualize giving a keynote speech. This app allows people to put themselves in public speaking situations that are difficult to emulate in every day life. We also wanted to give anyone who wants to improve their speech, including those with speech impediments, a safe space to practice and attain feedback.
## What it does
The Queen’s Speech allows users to use Google Cardboard with a virtual reality environment to record and analyze their audience interaction while giving a virtual speech. Using 3D head tracking, we are able to give real time feedback on where the speaker is looking during the speech so that users can improve their interaction with the audience. We also allow the users to play their speech back in order to listen to pace, intonation, and content. We are working on providing immediate feedback on the number of "um"s and "like"s to improve eloquence and clarity of speech.
## How we built it
Incorporating Adobe After Effects and the Unity game engine, we used C# scripting to combine the best of 360 degree imagery and speech feedback.
## Challenges we ran into
Connecting to Microsoft Project Oxford proved more difficult than expected on our Mac laptops compared to a typical PC. We also couldn't integrate real 360-degree footage due to a lack of Unity support.
## Accomplishments that we're proud of
Being able to provide a 3D like video experience through image sequencing, as well as highlighting user focus points, and expanding user engagement. Hosting on Google Cardboard makes it accessible to more users.
## What's next for The Queen's Speech
Currently working on word analysis to track "Ums" and "Likes" and incorporating Project Oxford, as well as more diverse 3D videos.
|
## Inspiration
The inspiration for this app comes from the recent natural disasters and terror events that have been occurring around the United States and the globe. From our personal experience, when traveling to foreign places, there is always a sense of fear as it is difficult to get information on what is going on and where. We also realized that it is difficult to keep loved ones posted constantly and consistently on your safety status during these trips as well.
## What it does
Safescape intelligently analyzes real-time news articles and classifies them as "non-safe" or "safe" events, notifying users in the affected locations if an article is deemed "non-safe." The app provides emergency contact information and a map of escape routes for the location you are in. A report button sends a text to local emergency personnel and notifies users in the vicinity that an emergency is happening. In the event of an emergency, the app also lets users contact loved ones quickly and easily.
## How we built it
Our backend is a Flask server. We used Google Cloud Platform to intelligently analyze news articles, Microsoft's search API to pull those articles, Wrld Maps to display a map for the users, the SparkPost API to send notifications to affected users, and UnifyID to authenticate users.
## Challenges we ran into
We ran into several challenges, including getting the UnifyID SDK to work as well as working with the APIs in general.
## Accomplishments that we're proud of
We're proud of sticking through with our app even when nothing was working and figuring out how to get all the API's to run properly. Also proud of our designs and the functionalities we were able to get working in our project.
## What we learned
We learned a ton this hackathon. From integrating APIs to sending notifications, almost everything required looking into something we hadn't worked with before.
## What's next for Safescape
There are a ton of functionalities that are half implemented or implemented not as well as we would like. We would love to add real escape routes based on the venue that the user is at and other stretch features such as alerting the police and potentially identifying whether there is danger based on the movement of the individuals around you.
|
## Inspiration
The post-COVID era has increased the number of in-person events and need for public speaking. However, more individuals are anxious to publicly articulate their ideas, whether this be through a presentation for a class, a technical workshop, or preparing for their next interview. It is often difficult for audience members to catch the true intent of the presenter, hence key factors including tone of voice, verbal excitement and engagement, and physical body language can make or break the presentation.
A few weeks ago during our first project meeting, we were responsible for leading the meeting and were overwhelmed with anxiety. Despite knowing the content of the presentation and having done projects for a while, we understood the impact that a single below-par presentation could have. To the audience, you may look unprepared and unprofessional, despite knowing the material and simply being nervous. Regardless of their intentions, this can create a bad taste in the audience's mouths.
As a result, we wanted to create a judgment-free platform to help presenters understand how an audience might perceive their presentation. By creating Speech Master, we provide an opportunity for presenters to practice without facing a real audience while receiving real-time feedback.
## Purpose
Speech Master aims to provide a platform for practice presentations with real-time feedback that captures details about your body language and verbal expression. In addition, presenters can invite real audience members to practice sessions, where the audience members can give real-time feedback that the presenter can use to improve.
While presenting, presentations are recorded and saved for later reference, so presenters can go back and review feedback from the ML models as well as live audiences. They are given a user-friendly dashboard to cleanly organize their presentations and review them for upcoming events.
After each practice presentation, the data aggregated during the recording is processed to generate a final report. The final report includes the most common emotions expressed verbally as well as times when the presenter's physical body language could be improved. Timestamps are also saved to show the presenter when the alerts arose, and video playback shows what might have caused them in the first place.
## Tech Stack
We built the web application using [Next.js v14](https://nextjs.org), a React-based framework that seamlessly integrates backend and frontend development. We deployed the application on [Vercel](https://vercel.com), the parent company behind Next.js. We designed the website in [Figma](https://www.figma.com/) and styled it with [TailwindCSS](https://tailwindcss.com), which streamlines styling by letting developers put styles directly in the markup without extra files. We maintained code formatting and linting with [Prettier](https://prettier.io/) and [ESLint](https://eslint.org/), which run on every commit via pre-commit hooks configured with [Husky](https://typicode.github.io/husky/).
[Hume AI](https://hume.ai) provides the [Speech Prosody](https://hume.ai/products/speech-prosody-model/) model with a streaming API enabled through native WebSockets allowing us to provide emotional analysis in near real-time to a presenter. The analysis would aid the presenter in depicting the various emotions with regard to tune, rhythm, and timbre.
Google and [TensorFlow](https://www.tensorflow.org) provide the [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:%7E:text=MoveNet%20is%20an%20ultra%20fast,17%20keypoints%20of%20a%20body.) model, a large improvement over the prior [PoseNet](https://medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5) model, which allows for real-time pose detection. MoveNet is an ultra-fast and accurate model capable of detecting 17 body points at 30+ FPS on modern devices.
To handle authentication, we used [Next Auth](https://next-auth.js.org) to sign in with Google hooked up to a [Prisma Adapter](https://authjs.dev/reference/adapter/prisma) to interface with [CockroachDB](https://www.cockroachlabs.com), allowing us to maintain user sessions across the web app. [Cloudinary](https://cloudinary.com), an image and video management system, was used to store and retrieve videos. [Socket.io](https://socket.io) was used to interface with Websockets to enable the messaging feature to allow audience members to provide feedback to the presenter while simultaneously streaming video and audio. We utilized various services within Git and Github to host our source code, run continuous integration via [Github Actions](https://github.com/shahdivyank/speechmaster/actions), make [pull requests](https://github.com/shahdivyank/speechmaster/pulls), and keep track of [issues](https://github.com/shahdivyank/speechmaster/issues) and [projects](https://github.com/users/shahdivyank/projects/1).
## Challenges
It was our first time working with Hume AI and a streaming API. We had experience with traditional REST APIs which are used for the Hume AI batch API calls, but the streaming API was more advantageous to provide real-time analysis. Instead of an HTTP client such as Axios, it required creating our own WebSockets client and calling the API endpoint from there. It was also a hurdle to capture and save the correct audio format to be able to call the API while also syncing audio with the webcam input.
We also worked with TensorFlow for the first time, an end-to-end machine learning platform. As a result, we faced many hurdles when trying to set up TensorFlow and get it running in a React environment. Most of the documentation uses Python SDKs or vanilla HTML/CSS/JS, which were not options for us. Converting the vanilla JS to React proved more difficult due to the complexities of execution order and React's useEffect and useState hooks. Eventually a working solution was found; however, it can still be improved to better its performance and reduce bugs.
We originally wanted to use the Youtube API for video management where users would be able to post and retrieve videos from their personal accounts. Next Auth and YouTube did not originally agree in terms of available scopes and permissions, but once resolved, more issues arose. We were unable to find documentation regarding a Node.js SDK and eventually even reached our quota. As a result, we decided to drop YouTube as it did not provide a feasible solution and found Cloudinary.
## Accomplishments
We are proud of being able to incorporate Machine Learning into our applications for a meaningful purpose. We did not want to reinvent the wheel by creating our own models but rather use the existing and incredibly powerful models to create new solutions. Although we did not hit all the milestones that were hoping to achieve, we are still proud of the application that we were able to make in such a short amount of time and be able to deploy the project as well.
Most notably, we are proud of our Hume AI and Tensorflow integrations that took our application to the next level. Those 2 features took the most time, but they were also the most rewarding as in the end, we got to see real-time updates of our emotional and physical states. We are proud of being able to run the application and get feedback in real-time, which gives small cues to the presenter on what to improve without risking distracting the presenter completely.
## What we learned
Each of the developers learned something valuable as each of us worked with a new technology that we did not know previously. Notably, Prisma and its integration with CockroachDB and its ability to make sessions and general usage simple and user-friendly. Interfacing with CockroachDB barely had problems and was a powerful tool to work with.
We also expanded our knowledge with WebSockets, both native and Socket.io. Our prior experience was more rudimentary, but building upon that knowledge showed us new powers that WebSockets have both when used internally with the application and with external APIs and how they can introduce real-time analysis.
## Future of Speech Master
The first step for Speech Master will be to shrink the codebase. Currently, there is tons of potential for components to be created and reused. Structuring the code to be more strict and robust will ensure that when adding new features the codebase will be readable, deployable, and functional. The next priority will be responsiveness, due to the lack of time many components appear strangely on different devices throwing off the UI and potentially making the application unusable.
Once the current codebase is restructured, then we would be able to focus on optimization primarily on the machine learning models and audio/visual. Currently, there are multiple instances of audio and visual that are being used to show webcam footage, stream footage to other viewers, and sent to HumeAI for analysis. By reducing the number of streams, we should expect to see significant performance improvements with which we can upgrade our audio/visual streaming to use something more appropriate and robust.
In terms of new features, Speech Master would benefit greatly from additional forms of audio analysis such as speed and volume. Different presentations and environments require different talking speeds and volumes of speech required. Given some initial parameters, Speech Master should hopefully be able to reflect on those measures. In addition, having transcriptions that can be analyzed for vocabulary and speech, ensuring that appropriate language is used for a given target audience would drastically improve the way a presenter could prepare for a presentation.
|
partial
|
## Inspiration
There were numerous factors that influenced us to develop this concept. The main reason is to assist people in obtaining relevant information from audio. Students, for example, are one group of people who we believe will benefit greatly from this. Students no longer have to be concerned about missing notes during zoom lectures with the help of this bot. Students can simply transcribe it using the bot as long as it is recorded and they have the audio!
## What it does
Running on the backend, the Express server connects to the Discord and AssemblyAI APIs. Upon entering the home route, the user can navigate to the Discord bot through an invitation link. They then type in an audio URL for the bot to validate, and if the bot validates the URL, the user can transcribe it. After asking the bot to transcribe the accepted audio URL, users can go back to the website, click the results link, and wait a few seconds for the audio to be transcribed completely. If the audio URL was invalid, an error template describes what went wrong. If the transcription completed successfully, users can see the results, including the text, the number of words, sentiment analysis, and the primary topic of discussion.
## How we built it
We built it using Express on Replit. We connected the Express routes with templating and the Discord and AssemblyAI APIs. We used EJS to serve our HTML files and Bootstrap for styling. The libraries we used were Express, Axios, EJS, and discord.js.
## Challenges we ran into
* Emojis not rendering
* Working with the Discord.js framework and getting the bot up and running
* Working with AssemblyAI error handling
## Accomplishments that we're proud of
The program works!!
* We found creative/innovative solutions to the problems we faced
## What we learned
* How to work with the discord.js framework
* How to work with the AssemblyAI API
## What's next for Transcribe Bot
* Working to transcribe other users' files
|
## Inspiration
We tried to figure out what kept us connected during the pandemic, other than the never-ending Zoom meetings or the occasional time spent in class together. Fundamentally, it all came down to our ability to just speak, and once we started thinking about it we couldn't stop.
## What it does
We created a web app that displays a sentence for the user to read, and using AssemblyAI's real-time word detection API we stream what the user is reading while providing feedback on their correctness. Using a comprehensive, profanity-free dictionary, we randomize which words are shown to the user so that each sentence is challenging in a different way.
## How we built it
In our design process, we started with the idea. After coming up with it, we researched the best way to implement the features we wanted, and once we realized we had access to AssemblyAI, we knew it was a match made in heaven. We then designed the basic functionality and created flowcharts to identify possible points of difficulty. After our design process, we developed the project using HTML, CSS, Node.js, jQuery, and AssemblyAI.
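To make the feedback loop concrete, here is a transport-agnostic sketch (in Python rather than our Node.js stack) of how a partial transcript streamed back from the speech API can be compared word by word against the target sentence; the function name and normalization rules are ours, not part of AssemblyAI.

```python
import string


def word_feedback(target_sentence: str, partial_transcript: str) -> list[tuple[str, bool]]:
    """Mark each target word as correct once the streamed transcript reaches it."""
    normalize = lambda w: w.strip(string.punctuation).lower()
    target = [normalize(w) for w in target_sentence.split()]
    heard = [normalize(w) for w in partial_transcript.split()]
    # Compare position by position; words not yet spoken default to False
    return [
        (word, i < len(heard) and heard[i] == word)
        for i, word in enumerate(target)
    ]


# Example: the reader has spoken the first three words, one of them incorrectly
print(word_feedback("the quick brown fox", "the quack brown"))
# [('the', True), ('quick', False), ('brown', True), ('fox', False)]
```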
## Challenges we ran into
We initially hoped to use Python as our main language; however, learning Django while also finding ways to provide accurate feedback proved too difficult within the time frame, which led us to build the project in JavaScript instead. Furthermore, learning Node.js and AssemblyAI was also significantly difficult given the time frame.
## Accomplishments that we're proud of
Having run into countless problems with Django and Python at the beginning, we decided to switch to a JavaScript base. With only half the time remaining, we were forced to be creative and work diligently to finish before the deadline. Ultimately, the end product was better than we could have hoped for and incorporated many concepts that were completely new to us. It is this ability to problem-solve and learn quickly that we are both very proud of.
## What we learned
Along the way to finishing our project, some of (far from all) the things we learnt about were: web device interfaces for recording audio, networking and websockets to help communicate with external APIs, audio streams with machine learning, running javascript as a backend, using NodeJS modules, hosting client and server side platforms, and in general, user experience optimization as a whole.
## What's next for TSPeach
One feature we hoped to include but couldn't was richer user feedback based on pronunciation. We initially wanted to analyze and compare each user's pronunciation against a text-to-speech engine, but it was too hard to do in the time frame, so this is another feature we would love to add. Optimizing our interface with AssemblyAI would be our next major goal: currently, the asynchronous approach to handling responses from AssemblyAI uses a single async thread, and having multiple threads collaborate would be the ultimate goal.
|
## Inspiration
Large Language Models (LLMs) are limited by a token cap, making it difficult for them to process large contexts, such as entire codebases. We wanted to overcome this limitation and provide a solution that enables LLMs to handle extensive projects more efficiently.
## What it does
LLM Pro Max intelligently breaks a codebase into manageable chunks and feeds only the relevant information to the LLM, ensuring token efficiency and improved response accuracy. It also provides an interactive dependency graph that visualizes the relationships between different parts of the codebase, making it easier to understand complex dependencies.
## How we built it
Our landing page and chatbot interface were developed using React. We used Python and Pyvis to create an interactive visualization graph, while FastAPI powered the backend for dependency graph content. We've added third-party authentication using the GitHub Social Identity Provider on Auth0. We set up our project's backend using Convex and also added a Convex database to store the chats. We implemented Chroma for vector embeddings of GitHub codebases, leveraging advanced Retrieval-Augmented Generation (RAG) techniques, including query expansion and re-ranking. This enhanced the Cohere-powered chatbot’s ability to respond with high accuracy by focusing on relevant sections of the codebase.
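A condensed sketch of the retrieve-and-rerank step is below, assuming Chroma's default embedding function and Cohere's rerank endpoint; the collection name, chunking by file path, rerank model name, and API key are illustrative placeholders rather than our exact configuration.

```python
import chromadb
import cohere

co = cohere.Client("COHERE_API_KEY")            # placeholder key
client = chromadb.Client()
collection = client.get_or_create_collection("codebase")


def index_files(files: dict[str, str]) -> None:
    # Store each file (or chunk) with its path as the id; Chroma embeds the text
    collection.add(ids=list(files.keys()), documents=list(files.values()))


def retrieve(question: str, k: int = 20, top_n: int = 5) -> list[str]:
    # First pass: vector search over the embedded codebase
    hits = collection.query(query_texts=[question], n_results=k)
    docs = hits["documents"][0]
    # Second pass: Cohere reranks the candidates against the question
    reranked = co.rerank(query=question, documents=docs, top_n=top_n,
                         model="rerank-english-v3.0")  # example model name
    return [docs[r.index] for r in reranked.results]
```

Only the handful of reranked chunks are placed in the chatbot's context, which is how the token budget stays small even for large repositories.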
## Challenges we ran into
We faced a learning curve with vector embedding codebases and applying new RAG techniques. Integrating all the components—especially since different team members worked on separate parts—posed a challenge when connecting everything at the end.
## Accomplishments that we're proud of
We successfully created a fully functional repo agent capable of retrieving and presenting highly relevant and accurate information from GitHub repositories. This feat was made possible through RAG techniques, surpassing the limits of current chatbots restricted by character context.
## What we learned
We deepened our understanding of vector embedding, enhanced our skills with RAG techniques, and gained valuable experience in team collaboration and merging diverse components into a cohesive product.
## What's next for LLM Pro Max
We aim to improve the user interface and refine the chatbot’s interactions, making the experience even smoother and more visually appealing. (Please Fund Us)
|
losing
|
## The ultimate visualization method with your Roomba
With the ever-increasing proliferation of data comes the need to understand and visualize it in new and intuitive ways. Wolfram's Mathematica allows mathematicians and physicists to calculate and plot their data with functions like Plot and Plot3D. Now with PlotRoomba we are able to see our data on a macroscopic and easily accessible scale.
Using the Wolfram Development Interface, we were able to create a `PlotRoomba` function which takes in a function for plotting (`PolarPlotRoomba` is available for plotting polar functions). Upon execution, Mathematica will plot the function internally as well as post the graph data to our endpoint. Our internet-connected Roomba will read the function data and subsequently follow the path of the curve.
|
## Inspiration
Some things can only be understood through experience, and Virtual Reality is the perfect medium for providing new experiences. VR allows for complete control over vision, hearing, and perception in a virtual world, allowing our team to effectively alter the senses of immersed users. We wanted to manipulate vision and hearing in order to allow players to view life from the perspective of those with various disorders such as colorblindness, prosopagnosia, deafness, and other conditions that are difficult to accurately simulate in the real world. Our goal is to educate and expose users to the various names, effects, and natures of conditions that are difficult to fully comprehend without first-hand experience. Doing so can allow individuals to empathize with and learn from various different disorders.
## What it does
Sensory is an HTC Vive Virtual Reality experience that allows users to experiment with different disorders from Visual, Cognitive, or Auditory disorder categories. Upon selecting a specific impairment, the user is subjected to what someone with that disorder may experience, and can view more information on the disorder. Some examples include achromatopsia, a rare form of complete colorblindness, and prosopagnosia, the inability to recognize faces. Users can combine these effects, view their surroundings from new perspectives, and educate themselves on how various disorders work.
## How we built it
We built Sensory using the Unity Game Engine, the C# Programming Language, and the HTC Vive. We imported a few models from the Unity Asset Store (all free!).
## Challenges we ran into
We chose this project because we hadn't experimented much with visual and audio effects in Unity and in VR before. Our team has done tons of VR, but never really dealt with any camera effects or postprocessing. As a result, there are many paths we attempted that ultimately led to failure (and lots of wasted time). For example, we wanted to make it so that users could only hear out of one ear - but after enough searching, we discovered it's very difficult to do this in Unity, and would've been much easier in a custom engine. As a result, we explored many aspects of Unity we'd never previously encountered in an attempt to change lots of effects.
## What's next for Sensory
There are still many more disorders we want to implement, and many categories we could potentially add. We envision this becoming a central hub for users, doctors, professionals, or patients to experience different disorders. Right now, it's primarily a tool for experimentation, but in the future it could be used for empathy, awareness, education, and health.
|
## Inspiration
There are many data analysis tools available (Tableau, Excel, etc.) but these products take time to learn. We imagine a natural and rich conversational experience where students can simply converse with a bot to find meaning in data and observe real-time visualizations in augmented reality.
## What it does
Our hack visualizes data through voice commands. A user can say things such as "plot the data", "find the line of best fit", or "find the max", and these actions will be completed on the 3D plot. The 3D plot is visualized through augmented reality.
## How I built it
We built the application using Unity and DialogFlow. Unity was used for producing the plot in augmented reality. Using the AR platform Vuforia, we read in data from a CSV file, and generated spherical game objects ("dots") to represent the data points. We then generated a cylindrical game object for the best fit line.
We used DialogFlow to build natural and rich conversational experiences for the student, our user base, to interact with the plot. Dialogflow incorporates Google's machine learning expertise to take phrases such as "find best fit" and expand to understand variations on this phrase. We connected this to Unity using websockets, and plot the points or compute the best fit based on what phrase is detected.
## Challenges I ran into
We ran into issues with plotting a line of best fit in Unity and with connecting DialogFlow to the Unity gaming platform. When plotting the line of best fit in Unity, most of the methods available in C# were only for 2D plotting. Thus, we ended up using line of best fit equations in order to get the correct equation.
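For context, the closed-form least-squares equations we fell back on amount to the following, sketched here in Python with NumPy for the 2D case (the in-engine version expresses the same algebra in C#):

```python
import numpy as np


def best_fit_line(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float]:
    """Return (slope, intercept) of the least-squares line y = m*x + b."""
    x_mean, y_mean = xs.mean(), ys.mean()
    m = ((xs - x_mean) * (ys - y_mean)).sum() / ((xs - x_mean) ** 2).sum()
    b = y_mean - m * x_mean
    return m, b


xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.1, 2.9, 5.2, 7.1])
print(best_fit_line(xs, ys))  # roughly (2.03, 1.03)
```

With the slope and intercept in hand, the cylindrical game object is simply stretched and oriented between two points sampled from that line.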
When connecting DialogFlow to Unity gaming platform, we spent the entire afternoon determining how to integrate the two platforms. To fix this issue, we modified a deprecated library that corrected the communication issues.
## Accomplishments that I'm proud of
We're proud of full integration between Google Cloud DialogFlow and the Unity gaming platform and visualizing data points in Unity.
## What I learned
We learned more about DialogFlow and how to connect it to Unity. We also gained further experience with Unity and the Vuforia platform.
## What's next for Connect DOT-AR (Data On-demand Teller in Augmented Reality)
Next steps for DOT-AR would be to
1.) Allow users to enter different data sets,
2.) Expand upon available commands for the data (extrapolate data, best fit lines other than linear, etc.)
3.) Add additional phrases to DialogFlow to make speech understanding more robust
|
winning
|
## Inspiration
In 2017, one of our team members bunkered down through Hurricane Harvey, which devastated the city of Houston. Many low-lying areas were flooded, stranding families inside their homes and cutting off supply lines and infrastructure. However, the resulting community response was astonishing - thousands of boat-owning residents took it upon themselves to rescue their less fortunate neighbors by ferrying people out of dangerous waters and bringing supplies, compensating for the city police and fire departments that were severely backlogged with many other tasks.
To facilitate their efforts, the volunteer rescuers used social media and Google Maps, building impromptu pages to communicate with each other and identify hazardous areas for rescue. Our idea is to combine all of the disparate command centers that saved Houston during Hurricane Harvey into a single app so rescuers and even victims are prepared ahead of time. The location, status, and situational needs of every user will all be viewable on a live map, broadcasting vital information to everyone in the disaster zone and strengthening the bonds of local communities during difficult times.
## What it does
Beacon is primarily a location-tracking app, broadcasting the user’s location as a dot on a map to other users whenever the app is enabled. Users can choose between 5 different settings for their status - emergency (requesting aid ASAP), help (requesting non-urgent aid), neutral (the default setting), safe (no aid necessary), and rescuer (offering aid). Every user’s dot will be assigned a certain color depending on their status, and additional comments describing specific types of aid or situations can be viewed by rescuers as a pop-up on the map. By tracking location, rescuers can easily identify swathes of rescue locations and find victims, and victims can determine where the nearest rescuer to their position is.
## How we built it
Beacon is an iOS app built in Xcode 10.3 using the Swift 4 programming language, with MapKit tools to support the location-tracking features.
## Challenges we ran into
When we initially brainstormed the idea for a disaster response app, location tracking was the most daunting issue since we had no idea how to pull location data from devices and implement it. We used MapKit to track our own location, but it was too difficult to set up our app to track other client devices as well, so for the purposes of the demonstration, we decided to simulate that data instead.
Early Saturday morning, we ran into a massive merge fiasco when 3 of our team members tried to improve the functionality of the user’s status colors all at the same time. After painfully moving back to a previous version and redownloading all of our code, we learned to coordinate our git commands.
## Accomplishments that we're proud of
Coming together as a team and producing our planned idea is one of our most significant accomplishments this weekend - we are a team of 4 college freshmen and first-time hackers, so we’re completely new to everything. In just 36 hours, we figured out how to pool our thoughts together, generate a unique idea, determine our constraints, and relentlessly debug to hack the final product together.
## What we learned
We discovered how to take full advantage of version control with Git, as we broke our app many times during development and became very well acquainted with reverting back to our last working version.
Additionally, we learned how to designate tasks between back- and front-end development and link them together to construct the app. For example, we split button creation tasks among team members by delegating one person to construct the button UI, while another linked buttons to Beacon’s functionalities.
## What's next for Beacon
Adding communication features would significantly improve the utility of Beacon if rescuers and victims could quickly share information with each other within the disaster zone. A potential next step could be to partner Beacon’s functionality with the Zello app, a communications app that simulates walkie-talkies and can run on low signal, which was also used by rescuers during Hurricane Harvey.
|
## Inspiration
-- Two of our team members from Florida have experienced the impact of Hurricane Irma firsthand during the past week even before it made landfall: barren shelves in the grocery store, empty fuel pumps, and miles upon miles of traffic due to people evacuating. Even amidst the chaos and fear, there are stories of people performing altruistic acts to help one another. One Facebook post recounted the story of a woman going to the store in search of a generator for her father who relies on a ventilator. There were no generators left at any store in town, so a generous person who overheard her situation offered her their generator. If an app were able to connect people who were able to offer assistance with people in need, many more beautiful stories like this could exist.
## What it does
-- Our app brings together communities to promote cooperation in both the preparation for and the aftermath of hurricanes and other natural disasters. Users are able to offer or request shelter, assistance, supplies, or rides. Others may view these offers or requests and respond to the original poster. Users may find important information, such as evacuation and flood warnings and the contact information of local authorities. Additionally, local authorities can utilize this app to plan their route more effectively and provide the fastest and most efficient care for those who need it the most.
## How we built it
-- We built the app in Expo using React Native and React Navigation. We chose Firebase for our database because of its reliability and sheer ability to scale for an influx of users - prevalent in circumstances such as a natural disaster. Additionally, Firebase provides real-time updates so that people can offer or receive help as soon as possible, saving more lives and ensuring the safety of the people in our communities. We also used React MapView to provide a visual for the areas affected.
## Challenges we ran into
-- None of our team knew anything about JavaScript, React, Expo, or Firebase before the project.
Despite encountering countless roadblocks, we took advantage of PennApp's resources such as mentors, hackpacks, workshops, and students to help us through the difficult, but also very rewarding times.
## Accomplishments that we're proud of
-- Being able to tackle a real-life problem that is affecting countless lives in front of our eyes inspired and motivated us not only to become more empathetic to those around us but to unite and help out our community.
## What we learned
-- We learned how to use cutting-edge technology such as React Native and Firebase to rapidly prototype a solution that has the potential to save many lives and empower our community in less than 36 hours. We also learned how quickly we can help others when we free ourselves from our differences and work together.
## What's next for Crisis Connect
-- Next, we will improve the user interface to become more friendly for all of our users.
If time allows, we will also let people with disabilities and/or health concerns have priority within the app and introduce a chatbot to let the users have an easier time looking for the information they need.
Additionally, we would like to add a feature that allows users to report major damages, shortages, or traffic jams to keep up with the disasters. We understand that during natural disasters, internet may not always be available, but mobile networks are usually still available with a slower, 2g connection. As a result, we hope to utilize a text/chatbot to effectively communicate with those who are stranded or require immediate attention.
|
## 💡 Inspiration
You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends that can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind “Will I ever pass this course?”
If you experienced a similar situation in the past year, you are not alone. Since COVID-19, students have faced many struggles. We created AcadeME to help students who struggle with paying attention in class, miss classes, have a rough home environment, or just want to get ahead in their studies.
We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit.
## 🔍 What it does
First, our AI-powered summarization engine creates a set of live notes based on the current lecture.
Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos!
Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind.
## ⭐ Feature List
* Dashboard with all your notes
* Summarizes your lectures automatically
* Select/Highlight text from your online lectures
* Organize your notes with intuitive UI
* Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime
* Text simplification, definitions, and synonyms anywhere on the web
* DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation.
## ⚙️ Our Tech Stack
* Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API,
* Web Application: Chakra UI + React.js, Next.js, Vercel
* Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js
* Infrastructure: Firebase/Firestore
## 🚧 Challenges we ran into
* Completing our project within the time constraint
* There were many APIs to integrate, which meant we spent a lot of time debugging
* Working with Google Chrome Extension, which we had never worked with before.
## ✔️ Accomplishments that we're proud of
* Learning how to work with Google Chrome Extensions, which was an entirely new concept for us.
* Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use.
## 📚 What we learned
* The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out!
* Working on a project where you can relate helps a lot with motivation
* Chakra UI is legendary and a lifesaver
* The Chrome Extension API is very difficult, did we mention that already?
## 🔭 What's next for AcadeME?
* Implementing a language translation toggle to help international students
* Note Encryption
* Note Sharing Links
* A Distributive Quiz mode, for online users!
|
losing
|
## Inspiration
Our team wanted to make a smart power bar device to tackle the challenge of phantom power consumption. Phantom power is the power consumed by devices when they are plugged in and idle, accounting for approximately 10% of a home’s power consumption. [1] The best solution for this so far has been for users to unplug their devices after use. However, this method is extremely inconvenient for the consumer as there can be innumerable household devices that require being unplugged, such as charging devices for phones, laptops, vacuums, as well as TV’s, monitors, and kitchen appliances. [2] We wanted to make a device that optimized convenience for the user while increasing electrical savings and reducing energy consumption.
## What It Does
The device monitors power consumption and based on continual readings automatically shuts off power to idle devices. In addition to reducing phantom power consumption, the smart power bar monitors real-time energy consumption and provides graphical analytics to the user through MongoDB. The user is sent weekly power consumption update-emails, and notifications whenever the power is shut off to the smart power bar. It also has built-in safety features, to automatically cut power when devices draw a dangerous amount of current, or a manual emergency shut off button should the user determine their power consumption is too high.
## How We Built It
We developed a device using an alternating current sensor wired in series with the hot terminal of a power cable. The sensor converts AC current readings into 5V logic that can be read by an Arduino to measure both effective current and voltage. In addition, a relay is also wired in series with the hot terminal, which can be controlled by the Arduino's 5V logic. This allows for both automatic and manual control of the circuit: power consumption is managed automatically based on predefined thresholds, and the circuit can be turned on or off manually if the user believes the power consumption is too high. In addition to the product's controls, the Arduino microcontroller is connected to the Qualcomm 410C DragonBoard, where we used Python to push sensor data to MongoDB, which updates trends in real time for the user to see. We also send the user email updates through Python with time-stamps for when the power bar is shut off. This adds an extended layer of user engagement and notification to ensure they are aware of the system's status at critical events.
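As an illustration, the bridging script on the DragonBoard roughly follows the pattern below; this is a simplified sketch using pyserial and pymongo, where the serial port, field names, message format, and connection string are placeholders, and the threshold/shut-off logic is omitted.

```python
import time
import serial                      # pyserial: reads the Arduino's serial output
from pymongo import MongoClient

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)          # placeholder port
readings = MongoClient("mongodb://localhost:27017")["smartbar"]["readings"]

while True:
    line = arduino.readline().decode(errors="ignore").strip()
    if not line:
        continue
    # Assume the Arduino prints "current,voltage" as comma-separated floats
    current, voltage = (float(v) for v in line.split(","))
    readings.insert_one({
        "timestamp": time.time(),
        "current_a": current,
        "voltage_v": voltage,
        "apparent_power_va": current * voltage,   # effective current x voltage
    })
```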
## Challenges We Ran Into
One of our major struggles was with operating and connecting the DragonBoard, such as setting up connection and recognition of the monitor to be able to program and install packages on the DragonBoard. In addition, connecting to the shell was difficult, as well as any interfacing in general with peripherals was difficult and not necessarily straightforward, though we did find solutions to all of our problems.
We struggled with establishing a two-way connection between the Arduino and the DragonBoard, due to the Arduino microcontroller shield that was supplied with the kit. Because of unknown hardware or communication problems between the Arduino shield and the DragonBoard, the DragonBoard would continually shut off, making troubleshooting and integration between the hardware and software impossible.
Another challenge was tuning and compensating for error in the AC sensor module: since we lacked access to a multimeter or an oscilloscope for most of our build, it was difficult to pinpoint the exact characteristics of the AC current sinusoids we were measuring. For context, we measured the current draw of two-prong devices such as our phone and laptop chargers. Measuring their AC current draw more directly would have required cutting open our charging cables, which was out of the question considering they are our important personal devices.
## Accomplishments That We Are Proud Of
We are particularly proud of our ability to find and successfully use sensors to quantify the power consumption of our electrical devices. Coming into the competition as a team of mostly strangers, we cycled through different ideas ahead of the Makeathon, and one of them happened to be reducing wasteful power consumption in consumer homes. Finally meeting on the day of the event, we realized we wanted to pursue the idea but unfortunately had none of the necessary equipment, such as AC current sensors, available. With some resourcefulness and quick calls to stores in Toronto, we were luckily able to find the components we needed at local electronics stores, such as Creatron and Home Hardware, to build the project we wanted.
In a short period of time, we were able to leverage the use of MongoDB to create an HMI for the user, and also read values from the microcontroller into the database and trend the values.
In addition, we were proud of our research into understanding the operation of the AC current sensor modules and then applying the theory behind AC to DC current and voltage conversion to approximate sensor readings to calculate apparent power generation. In theory the physics are very straightforward, however in practice, troubleshooting and accounting for noise and error in the sensor readings can be confusing!
## What's Next for SmartBar
We would build a more precise and accurate analytics system with an extended and extensible user interface for practical everyday use. This could include real-time cost projections for user billing cycles and power use on top of raw consumption data. As well, this also includes developing our system with more accurate and higher resolution sensors to ensure our readings are as accurate as possible. This would include extended research and development using more sophisticated testing equipment such as power supplies and oscilloscopes to accurately measure and record AC current draw. Not to mention, developing a standardized suite of sensors to offer consumers, to account for different types of appliances that require different size sensors, ranging from washing machines and dryers, to ovens and kettles and other smaller electronic or kitchen devices. Furthermore, we would use additional testing to characterize maximum and minimum thresholds for different types of devices, or more simply stated recording when the devices were actually being useful as opposed to idle, to prompt the user with recommendations for when their devices could be automatically shut off to save power. That would make the device truly customizable for different consumer needs, for different devices.
## Sources
[1] <https://www.hydroone.com/saving-money-and-energy/residential/tips-and-tools/phantom-power>
[2] <http://www.hydroquebec.com/residential/energy-wise/electronics/phantom-power.html>
|
## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
-The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
-The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
-The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
-The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a Mesh-Networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to using a centralized server that hosts the apps data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps such as: a visual-programming tool that supports simple block-programming, a text editor, a messaging system, and a more in-depth UI for everything.
|
# Inspiration
Every year, 3,000 people pass away from distracted driving. And every year, it's the leading cause of car accidents. However, this is a problem that can be solved by a transition towards ambient and touchless computing.
From reducing distracted driving to having implications for in-home usage (for those unable to adjust lighting, for instance), having ambient and touchless computing entails major impacts on the future. Being able to simply raise fingers to adjust car hardware, such as the speed of the AC fan, the intensity of the lights in the car, or even in homes for those unable to reach or utilize household appliances such as light switches, has implications beyond driving. We hope ambi. will be applicable in increasing safety and effectiveness in the future.
# What it does
The ambi. app, downloadable on mobile, provides a guide corresponding to hardware settings with the number of fingers held in front of the camera—integrated into ambi. with computer vision to track hand movements. When the driver opens the app, they are presented with the option to raise one finger to adjust lighting, two fingers for the AC fan, three fingers for the radio volume, and 4 fingers for the radio station. From there, they can choose to adjust the specific hardware based on what they find (1 finger for on, 2 for off, 3 for increase, 4 for decrease). This helps to reduce distracted driving by keeping their hands on the wheel while driving.
# How we built it
We had four main components that were integrated into this project: hardware, firmware, backend, and frontend. The hardware represents the physical functionalities of the car (e.g. lights, fan, speaker). In our demonstration, we simulated the lights and the fan of a car.
We used hardware to control the peripherals of the car such as the fan and the LED strip lights (Neopixel). For the fan, we used a transistor-driver circuit and pulse width modulation from the Arduino UNO to vary the duty cycle of the input wave and hence change the speed of the fan. Two resistors were attached to the gate of the power transistor: one to drive the GPIO and the other to ensure that the gate was not floating when no voltage was present at it. A diode was also attached between the drain and source in case the fan generated back EMF. A regulator (78L05) was used to supply voltage and current to the LED strip since it needed a lower voltage supply but a higher current; this was easier to program as it didn't require PWM. The Neopixel library was used to control the brightness of the LEDs, their color, etc. A radio module, nRF24L01+, was used to communicate between the first Arduino UNO connected to the peripherals and the second Arduino UNO connected to the laptop running the computer vision Python script and the backend. The communication over the radio was done using a library, and a single integer was sent that encoded both the device that was chosen and its control. More specifically, this was the encoding used - 1: light, 2: fan, then 1: on, 2: off, 3: increase, 4: decrease.
We used firmware to change the physical state of the hardware by analyzing the motions of a hand using computer vision and then changing the physical features of the car to match the corresponding hand motions. The firmware was built in Python scripts using the mediapipe, opencv, and numpy libraries. A camera (from the user's phone) mounted next to the steering wheel tracks the motion of the user's hand. If it detects fingers being held up by the user (from 1 to 4 fingers) for over 2 seconds, it records the number of fingers, which corresponds to a certain device (e.g. lights). The camera then continues to record the user as they hold up different numbers of fingers: one finger corresponds to turning the device on, two fingers to turning it off, three fingers to increasing it (e.g. increasing brightness), and four fingers to decreasing it (e.g. decreasing brightness). If the user holds up no fingers for an extended amount of time, the system alerts the user and reverts to waiting for the user to input another device.
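A stripped-down sketch of the finger-counting step is shown below, assuming MediaPipe's Hands solution and counting only the four non-thumb fingers by comparing fingertip and knuckle landmarks; the landmark indices follow MediaPipe's hand model, and the 2-second hold logic and device/control state machine are omitted for brevity.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
FINGERTIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmarks


def count_fingers(frame) -> int:
    """Count raised (non-thumb) fingers: a finger is up if its tip is above its PIP joint."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return 0
    lm = results.multi_hand_landmarks[0].landmark
    # Image y grows downward, so "tip above joint" means a smaller y value
    return sum(1 for tip in FINGERTIPS if lm[tip].y < lm[tip - 2].y)


cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("fingers held up:", count_fingers(frame))
cap.release()
```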
Third, we used a backend Python script to integrate the data exchanged with the firmware and computer vision with the data exchanged with the frontend Swift app. The backend Python script takes in data from the frontend Swift app indicating which number of fingers corresponds to which specific task. It communicates that with the firmware, calling functions from the firmware library to start each of the different functions. For example, the backend Python script will call a function in the firmware library to wait until a device is selected, and then, after the device is selected, to perform the various functionalities. The speech feedback is also configured in this script to indicate to the user what is currently being done.
Finally, the frontend of ambi. is built using SwiftUI and will be integrated on a user’s phone. The app will present the user with a guide corresponding to the number of fingers with hardware, as well as its specific adjustment, such as which fingers correspond to toggling on and off or increasing and decreasing a certain physical component of the car. This app will demonstrate what the users can control with the touchless computer, as well as generate discrete variables that can automatically toggle a specific state, such as a specific speed of the fan or turning a light completely off.
# Challenges we ran into
Throughout the process, we found it difficult to integrate the hardware with the software. Each member of the team worked on a specialized part of the project, from hardware to firmware to frontend UI/UX to backend. Bringing each piece together, and especially the computer vision with the camera set up on the ambi. app proved to be quite difficult. However, teamwork makes the dream work and we were able to get it done, especially since each of us focused on a specific part (i.e. one teammate worked on frontend, while another on firmware, and so on).
Here are some specific challenges we faced:
* Downloading the libraries and configuring the paths - you may be surprised about how tricky this is
* Ensuring that the computer vision algorithm had a high accuracy and wouldn't detect unwanted movements or gestures
* Integrating the backend with the firmware Python script
* Integrating the hardware (using Arduino IDE) with the firmware Python script
* Learning Swift within a day and hence, building a functional frontend
* Debugging hardware when PWM or on/off functionalities were going awry - this was resolved through a more careful understanding of the libraries that we were using
* Adding the speech command as another feature of our Python script and backend
# Accomplishments that we're proud of
We created a touchless computer that involved several integrations from hardware to front-end development. We demonstrated the capabilities of changing volume or fan speed in our hardware by using computer vision to track specific hand motions. This was integrated with a Python backend that was interfaced with a frontend app built in Swift.
# What we learned
During this process, we learned how to build a Restful API, mobile applications, techniques to interface between software and hardware, computer vision, and establish product-market fit. We also learned that hacking is not just about creating something new, but integrating several components together to create a product that creates a meaningful impact on society, while working together on a team.
We also learned what teamwork in a development project looks like. Often a task reaches a point where it cannot be split between developers, and given the limited time, this limited the scope of what we could code in such a short amount of time. However, we benefited from acknowledging this for the smooth development process. Moreover, since each member often had a completely different section that they worked in, we learned to integrate each vertical of the final project (such as firmware or frontend) with the other components using APIs.
# What's next for ambi.
Ambi.'s technology is hacked together currently. However, the first step would be to more seamlessly integrate the frontend with the iPhone camera that acts as a sensor for movement. There is a lack of libraries for launching video from a Swift application, which means ambi. will create another library for itself. We want to focus specifically on Site Reliability Engineering and creating a lighter tech stack to reduce latency, as these drastically improve user adoption and retention.
Next, ambi. needs to connect to an actual car API and be able to manipulate some of its hardware devices. Teslas and other tech-forward cars are likely strong markets, as they have companion apps and digital ecosystems with native internet connections, increasing the seamless quality that we want ambi. to deliver.
Ambient computing has numerous applications with IoT and the digitization of non-digital interfaces (e.g. any embedded system operated by buttons instead of generalized input-output devices). We plan to consider applications for Google Nest, integrating geonets to sense when to begin touchless computing as well as kitchen appliance augmentations.
|
winning
|
## 💡 Inspiration💡
According to the City of Toronto, "contaminated recycling is currently costing the City millions annually. Approximately one third of what is put in the Blue Bin doesn’t belong there or was ruined as a result of the wrong items being put into the bin." Missing out on being able to recycle one third of what is put in the Blue Bin is huge, especially when this issue can be solved by spreading more actionable awareness to the city's residents. Furthermore, Toronto cannot be the only place in the world that is having these issues. Thus, it becomes increasingly important for us to become mindful citizens of the world we inhabit, for the benefit of our communities and most importantly, for the wildlife and world around us!
## ❓ What is PlanetPal ❓
PlanetPal is a gamified recycling app designed to promote good recycling habits and spread awareness about recycling correctly. Users subscribe to the app monthly, paying upwards of $10 per month, and every time they recycle they build progress toward recovering the money put towards their subscription. Every time the user recycles something, they earn Green tokens (our exclusive currency), which can be redeemed for real-world money. Furthermore, completing monthly challenges and consistently recycling awards users a limited monthly challenge badge that displays their dedication to the environment. By collecting these badges, users have the opportunity to earn even more tokens! Users are given recycling instructions when they take a picture of their trash, which is classified by a CNN into 6 categories. The user is then told how to recycle the item that they are holding. After disposing of the item, the user gains progress towards the monthly challenge, as well as tokens.
## 🔧 How we built it 🔧
Our front end mobile application is developed with React Native and Expo, using packages such as React Native Paper to speed up the development process. Additionally, we used React Navigation for smooth UI transitions, and Expo Camera to take photos.
Our back end is built with Python and Flask, which hosts our CNN that classifies images of trash into 6 different categories. We manage the player's progress, badges, and in-game currency, as well as classify the images passed from the front-end mobile app. Moreover, the logic behind generating advice on proper recycling lies here, where we pass the classified object into Cohere's command-nightly generative AI model.
Our machine learning model is a CNN transfer learning model built on VGG19, a model trained on ImageNet. We chose to build on VGG19 in the interest of time, while guaranteeing relatively high accuracy. We used a Kaggle dataset, Garbage Classification, to train the VGG19 model and fine tune it. Our model classifies images correctly at around 82.31%!

## 🤔 Challenges we ran into 🤔
Since this was our first time ever training a machine learning model, we initially decided to train our model directly on Kaggle, where we had access to cloud GPUs. Unfortunately, we realized that we could not actually download our model! We also ran into issues in terms of figuring out how to implement our machine learning model into our backend code so that we could actually run it to classify new images that it has not seen before. Since we did not have unlimited processing power, training the model also took significant time.
For some of our team members, this project marked their first experience with React Native and Flask. This added another layer of complexity to the development process as they were learning and adapting to these technologies on the fly.
## 🏆 Accomplishments that we're proud of 🏆
Despite our team's limited experience in mobile app development, we are proud to have successfully created a functional and aesthetically pleasing UI. We are also extremely proud to say that our machine learning model is able to identify recyclable materials to a fairly high percentage of accuracy.
## 🤓 What we learned 🤓
Our team was split into 3 separate roles: frontend, backend and machine learning model. All of our members decided to work with a Framework that they had not used before, or develop something completely brand new. Specifically, one of our members spent many hours researching machine learning models, before being able to implement, and connect our own model to the backend of our project. Other members had the opportunity to experience the mobile app development process. Overall, each member of the team was able to learn something new from this project.
## 👀 What's next for PlanetPal 👀
We are currently looking at ways to incentivize users to consistently recycle. Implementing actual challenges such as recycling a certain amount of materials each month, or a daily streak mechanism would help users stay engaged. In order to improve our application, a database containing usernames, emails, and tokens would help this app be more accessible on multiple platforms. Another aspect we are considering is optimizing the app's interface for various screen sizes, ensuring a seamless user experience whether they're accessing it from a smartphone, tablet, or other mobile devices.
|
## Inspiration
All of our parents like to recycle plastic bottles and cans to make some extra money, but we always thought it was a hassle. After joining this competition and seeing sustainability as one of the prize tracks, we realized it would be interesting to create something that makes the recycling process more engaging and incentivized on a larger scale.
## What it does
We gamify recycling. People can either compete against friends to see who recycles the most, or compete against others for a prize pool given by sponsors (similar to how Kaggle competitions work). To verify if a person recycles, there's a camera section where it uses an object detection model to check if a valid bottle and recycling bin are in sight.
## How we built it
We split the project into 3 major parts: the app itself, the object detection model, and another ML model that predicted how trash in a city would move so users can move with it and pick up the most trash. For the object detection model, we created our own dataset of cans and bottles at PennApps by taking pictures around the building, and used Roboflow to build the dataset. Our app was created using Swift and was inspired by a previous GitHub repository that deployed a model of the same type as ours onto iOS. The UI was designed using Figma. The ML model that predicted the movement of trash concentration was a CNN with a differential equation as its loss function, which produced better results than the vanilla loss functions.
## Challenges we ran into
None of us had coded an app before, so it was difficult doing anything with Swift. It actually took us 2 hours just to get things set up and get the build running, so this was for sure the hardest part of the project. We also ran into problems finding good datasets for both of the models, as they were either poor quality or didn't have the aspects that we wanted.
## Accomplishments that we're proud of
Everyone on our team specializes in backend, so with limited initial experience in frontend, we're especially proud of the app we’ve created—it's our first time working on such a project. Integrating all the components posed significant challenges too. Getting everything to work seamlessly, including the CNN model and object detection camera within the same app, required countless attempts. Despite the challenges, we've learned a great amount throughout the process and are incredibly proud of what we've achieved so far.
## What we learned
How to create an IOS app, finding datasets, integrating models into apps.
## What's next for EcoRush
A possible quality change to the app would be to find a way to differentiate bottles from each other so people can't "hack" the system. We are also looking for more ways to incentivize people to recycle litter they see everyday other than with money. After all, our planet would be a whole lot greener if every citizen of Earth does just a small part!
|
## Inspiration
We decided to try the Best Civic Hack challenge with YHack & Yale Code4Good -- the collaboration with the New Haven/León Sister City Project. The purpose of this project is both to fundraise money and to raise awareness about the impact of greenhouse gases through technology.
## What it does
The Carbon Fund Bot is a Facebook Messenger chat agent based on the Yale Community Carbon Fund calculator. It engages the user in a friendly conversation, estimating the carbon emitted on their last trip from the source and destination of travel as well as the mode of transport used. It serves to raise money equivalent to the amount of carbon emitted - donating it to a worthy organization and raising awareness about the harm to the environment.
## How we built it
We built the Messenger chatbot with Node.js and Heroku. First, we created a new Messenger app from the Facebook developers page. We used a Facebook webhook to enable communication between Facebook users and the Node.js application. To persist user information, we also used MongoDB (mLab). Based on the user's response, an appropriate reply is generated. An API was used to calculate the distance between the two endpoints (either aerial or road distance), and the carbon emission was computed from it.
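The core calculation itself is simple; here is a sketch in Python (the bot itself runs in Node.js), with illustrative per-kilometre emission factors rather than the exact values used by the Yale calculator.

```python
# Illustrative kg-CO2-per-passenger-km factors; the real calculator's values differ
EMISSION_FACTORS = {"car": 0.19, "bus": 0.10, "train": 0.04, "plane": 0.25}


def trip_emissions(distance_km: float, mode: str) -> float:
    """Estimate the CO2 emitted (kg) for one trip of a given distance and transport mode."""
    return distance_km * EMISSION_FACTORS[mode]


# Example: a 500 km trip by plane
print(round(trip_emissions(500, "plane"), 1), "kg CO2")  # 125.0 kg CO2
```

The bot then suggests a donation amount proportional to that figure.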
## Challenges we ran into
There was a steep learning curve for us with Node.js and using callbacks in general. We spent a lot of time figuring out how to design the models and how a user would interact with the system. Natural language processing was also a problem.
## Accomplishments that we're proud of
We were able to integrate the easy-to-use and friendly Facebook Messenger through its API, with the objective of working towards a social cause through this idea.
## What's next
Using Api.Ai for better NLP is on the cards. The logged journeys of users can also be mined to gain valuable insights into carbon consumption.
|
losing
|
## Inspiration
The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real time. Oftentimes when lifting, people employ poor form, leading to gym injuries which could have been avoided by being proactive.
## What it does and how we built it
Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of muscles used, distance specific body parts travel and information about the athlete’s posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete’s movements if our algorithm deems the form to be poor.
## How we built it
We trained an SVM on deliberately performed proper and improper forms for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated these with good/poor form labels. Then, we dynamically read signals from the band during workouts and chart points in the plane where we classify their form. If the form is bad, the band provides haptic feedback to the user indicating that they might injure themselves.
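In outline, the classifier side looks like the following scikit-learn sketch; the EMG feature extraction (per-channel RMS over a window) is a simplification of the signal properties we actually used, and the labels come from the deliberately good/poor reps we recorded.

```python
import numpy as np
from sklearn.svm import SVC


def emg_features(window: np.ndarray) -> np.ndarray:
    """Reduce one window of 8-channel Myo EMG samples to per-channel RMS features."""
    return np.sqrt((window.astype(float) ** 2).mean(axis=0))


def train_form_classifier(windows, labels) -> SVC:
    # windows: list of (samples x 8) arrays recorded during deliberately good/poor reps
    # labels:  1 for good form, 0 for poor form
    X = np.stack([emg_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf


def check_rep(clf: SVC, live_window: np.ndarray) -> bool:
    """Classify a live window; a False result triggers the band's haptic warning."""
    return bool(clf.predict([emg_features(live_window)])[0])
```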
## Challenges we ran into
Interfacing with the Myo band's API was not the easiest task for us, since we ran into numerous technical difficulties. However, after we spent copious amounts of time debugging, we finally managed to get a clear stream of EMG data.
## Accomplishments that we're proud of
We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications.
## What we learned
It was our first time making a hardware hack so it was a really great experience playing around with the Myo and learning about how to interface with the hardware. We also learned a lot about signal processing.
## What's next for SpotMe
In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weight lifting).
The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can come up with “profiles” that the user can learn to play like. We can quantitatively and precisely assess how close the user is playing their chosen professional athlete.
For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry or throw a spiral like Brady.
|
## Inspiration
Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist: reinventing how mobile games are played!
## What it does
AR.cade uses a body-part detection model to track movements that correspond to controls for classic games run through an online browser. The user can choose from a variety of classic games, such as Temple Run and Super Mario, and play them with their body movements.
## How we built it
* The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe
* Next, based on the positions and angles between the landmarks, we created classification functions that detected specific movements, such as when an arm or leg was raised or the user jumped
* Then we correlated these movement identifications to keybinds on the computer (see the sketch after this list). For example, when the user raises their right arm it corresponds to the right arrow key
* We then embedded some online games of our choice into our front end, and when the user makes a certain movement corresponding to a certain key, the respective action happens
* Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play
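Here is a minimal sketch of the classification-to-keybind step referenced above, using MediaPipe Pose landmarks and pyautogui to synthesize the arrow-key press; the landmark indices follow MediaPipe's pose model, while the confidence threshold and the single "arm raised" rule are simplified placeholders for our full set of classification functions.

```python
import cv2
import mediapipe as mp
import pyautogui

pose = mp.solutions.pose.Pose(min_detection_confidence=0.7)
LEFT_SHOULDER, RIGHT_SHOULDER, LEFT_WRIST, RIGHT_WRIST = 11, 12, 15, 16


def arm_raise_to_key(frame) -> None:
    """Press the left/right arrow key when the corresponding wrist rises above the shoulder."""
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return
    lm = results.pose_landmarks.landmark
    if lm[RIGHT_WRIST].y < lm[RIGHT_SHOULDER].y:   # image y grows downward
        pyautogui.press("right")
    elif lm[LEFT_WRIST].y < lm[LEFT_SHOULDER].y:
        pyautogui.press("left")


cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    arm_raise_to_key(frame)
cap.release()
```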
## Challenges we ran into
A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API, and it worked with a basic plain video; however, the difficulties arose when we tried to pass the video with the body tracking model overlaid on it.
## Accomplishments that we're proud of
We are proud of the fact that we are able to have a functioning product in the sense that multiple games can be controlled with body part commands of our specification. Thanks to threading optimization there is little latency between user input and video output which was a fear when starting the project.
## What we learned
We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites.
We learned how to map landmark node positions into meaningful movement classifications considering positions, and angles.
We learned how to resize, move, and give priority to external windows such as the video output window
We learned how to run python files from JavaScript to make automated calls to further processes
## What's next for AR.cade
The next steps for AR.cade are to implement a more accurate body tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs such as Fortnite or Minecraft.
|
# Inspiration 🌟
**What is the problem?**
Physical activity early on can drastically increase longevity and productivity for later stages of life. Without finding a dependable routine during your younger years, you may experience physical impairment in the future. 50% of functional decline that occurs in those 30 to 70 years old is due to lack of exercise.
During the peak of the COVID-19 pandemic in Canada, nationwide isolation brought everyone indoors. There was still a vast number of people that managed to work out in their homes, which motivated us to create an application that further encouraged engaging in fitness, using their devices, from the convenience of their homes.
# Webapp Summary 📜
Inspired, our team decided to tackle this idea by creating a web app that helps its users maintain a consistent and disciplined routine.
# What does it do? 💻
*my trAIner* aids you on your journey to healthy fitness by displaying the number of calories you have burned while also counting your reps. It additionally helps to motivate you through words of encouragement. For example, whenever you near a rep goal, *my trAIner* will use phrases like “almost there!” or “keep going!” to push you to the last rep. Once you complete your set goal, *my trAIner* will congratulate you.
We hope that people may utilize this to make the best of their workouts. We believe that using AI technology to help people reach their rep goals and track calories could help students and adults in the present and future.
# How we built it:🛠
To build this application, we used **JavaScript, CSS,** and **HTML.** To make the body mapping technology, we used a **TensorFlow** library. We mapped out different joints on the body and compared them as they moved, in order to determine when an exercise was completed. We also included features like parallax scrolling and sound effects from DeltaHacks staff.
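
Our web app does this in JavaScript with TensorFlow's pose detection, but the rep-counting geometry is simple enough to sketch in a few lines of Python. The joint names, angle thresholds, and two-stage hysteresis below are illustrative assumptions, not our exact tuning.

```python
# Illustrative sketch of joint-angle rep counting (e.g. bicep curls).
# Thresholds and the two-stage hysteresis are assumptions; our web app does the
# equivalent in JavaScript on TensorFlow pose keypoints.
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each an (x, y) tuple."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

class RepCounter:
    def __init__(self, down_angle=160, up_angle=45):
        self.down_angle = down_angle  # arm extended
        self.up_angle = up_angle      # arm curled
        self.stage = "down"
        self.reps = 0

    def update(self, shoulder, elbow, wrist):
        angle = joint_angle(shoulder, elbow, wrist)
        if angle > self.down_angle:
            self.stage = "down"
        elif angle < self.up_angle and self.stage == "down":
            self.stage = "up"
            self.reps += 1  # one full curl completed
        return self.reps

counter = RepCounter()
# Feed it (x, y) keypoints each frame, e.g. from a pose model:
print(counter.update((0.5, 0.3), (0.5, 0.5), (0.5, 0.7)))    # arm extended -> 0 reps
print(counter.update((0.5, 0.3), (0.5, 0.5), (0.45, 0.32)))  # arm curled   -> 1 rep
```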
# Challenges that we ran into 🚫
Learning how to use **TensorFlow**’s pose detection proved to be a challenge, as well as integrating our own artwork into the parallax scrolling. We also had to refine our backend as the library’s detection was shaky at times. Additional challenges included cleanly linking **HTML, JS, and CSS** as well as managing the short amount of time we were given.
# Accomplishments that we’re proud of 🎊
We are proud that we put out a product with great visual aesthetics as well as a refined detection method. We’re also proud that we were able to take a difficult idea and prove to ourselves that we were capable of creating this project in a short amount of time. More than that though, we are most proud that we could make a web app that could help out people trying to be more healthy.
# What we learned 🍎
Not only did we develop our technical skills like web development and AI, but we also learned crucial things about planning, dividing work, and time management. We learned the importance of keeping organized with things like to-do lists and constantly communicating to see what each other’s limitations and abilities were. When challenges arose, we weren't afraid to delve into unknown territories.
# Future plans 📅
Due to time constraints, we were not able to completely actualize our ideas; however, we will continue growing and raising efficiency by giving ourselves more time to work on *my trAIner*. Potential future ideas to incorporate include constructive form correction, a calorie intake calculator, meal preps, goal setting, recommended workouts based on BMI, and much more. We hope to keep on learning and applying newly obtained concepts to *my trAIner*.
|
winning
|
## Inspiration
The idea for SlideForge came from the struggles researchers face when trying to convert complex academic papers into presentations. Many academics spend countless hours preparing slides for conferences, lectures, or public outreach, often sacrificing valuable time they could be using for research. We wanted to create a tool that could automate this process while ensuring that presentations remain professional, audience-friendly, and adaptable to different contexts.
## What it does
SlideForge takes LaTeX-formatted academic papers and automatically converts them into well-structured presentation slides. It extracts key content such as equations, figures, and citations, then organizes them into a customizable slide format. Users can easily adjust the presentation based on the intended audience—whether it’s for peers, students, or the general public. The platform provides customizable templates, integrates citations, and minimizes the time spent on manual slide creation.
## How we built it
We built SlideForge using a combination of Python for the backend and JavaScript with React for the frontend. The backend handles the LaTeX parsing, converting key elements into slides using Flask to manage the process. We also integrated JSON files to store and organize the structure of presentations, formulas, and images. On the frontend, React is used to create an interactive user interface where users can upload their LaTeX files, adjust presentation settings, and preview the output.
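
As a rough illustration of the parsing step, the sketch below splits a LaTeX source on \section commands and keeps any display equations it finds in each section. The function and field names are hypothetical; the real parser also handles figures, citations, and the audience-specific templates.

```python
# Rough sketch of turning LaTeX sections into a slide structure (hypothetical names).
# The real parser also extracts figures and citations and feeds the Flask backend.
import json
import re

def latex_to_slides(tex):
    slides = []
    # Split the body on \section{...}; the first chunk (preamble) is skipped here.
    parts = re.split(r"\\section\{(.+?)\}", tex)
    for i in range(1, len(parts), 2):
        title, body = parts[i], parts[i + 1]
        equations = re.findall(r"\\begin\{equation\}(.+?)\\end\{equation\}", body, re.S)
        prose = re.sub(r"\\begin\{equation\}.+?\\end\{equation\}", "", body, flags=re.S)
        bullets = [s.strip() for s in prose.split(".") if len(s.strip()) > 20][:4]
        slides.append({"title": title, "bullets": bullets, "equations": equations})
    return slides

sample = r"""
\section{Method}
We propose a simple estimator. It converges quickly under mild assumptions.
\begin{equation} \hat{\theta} = \arg\min_\theta \sum_i \ell(x_i; \theta) \end{equation}
\section{Results}
The estimator outperforms the baseline on three benchmarks in our experiments overall.
"""
print(json.dumps(latex_to_slides(sample), indent=2))
```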
## Challenges we ran into
One of the biggest challenges we faced was ensuring that the LaTeX parser could accurately extract and format complex equations and figures into slide-friendly content. Maintaining academic rigor while making the content accessible to different audiences also required a lot of trial and error with the customizable templates. Finally, integrating the backend and frontend in a way that made the process seamless and efficient posed technical hurdles that required collaboration and creative problem-solving.
## Accomplishments that we're proud of
We’re proud of the fact that SlideForge significantly reduces the time required for researchers to create professional presentations. What used to take hours can now be done in minutes. We’re also proud of the adaptability of our templates, which allow users to target different audiences without needing to redesign their slides from scratch. Additionally, the successful integration of LaTeX parsing and slide generation is a technical achievement we’re particularly proud of.
## What we learned
Throughout this project, we learned a lot about LaTeX and how to parse and handle its complex structures programmatically. We also gained a deeper understanding of user experience design, ensuring that our platform was both intuitive and powerful. From a technical standpoint, integrating the backend and frontend and ensuring smooth communication between the two taught us valuable lessons in full-stack development.
## What's next for SlideForge
Next, we plan to expand SlideForge’s functionality by adding more customization options for users, such as advanced styling and animation features. We’re also looking into integrating cloud storage solutions so users can save and edit their presentations across devices. Additionally, we hope to support more document formats beyond LaTeX, making SlideForge a universal tool for academics and professionals alike.
|
## Inspiration
A brief recap of the inspiration for Presentalk 1.0: We wanted to make it easier to navigate presentations. Handheld clickers are useful for going to the next and last slide, but they are unable to skip to specific slides in the presentation. Also, we wanted to make it easier to pull up additional information like maps, charts, and pictures during a presentation without breaking the visual continuity of the presentation. To do that, we added the ability to search for and pull up images using voice commands, without leaving the presentation.
Last year, we finished our prototype, but it was a very hacky and unclean implementation of Presentalk. After the positive feedback we heard after the event, despite our code's problems, we resolved to come back this year to make the product something we could actually host online and let everyone use.
## What it does
Presentalk solves this problem with voice commands that allow you to move forward and back, skip to specific slides and keywords, and go to specific images in your presentation using image recognition. Presentalk recognizes voice commands, including:
* Next Slide
+ Goes to the next slide
* Last Slide
+ Goes to the previous slide
* Go to Slide 3
+ Goes to the 3rd slide
* Go to the slide with the dog
+ Uses Google Cloud Vision to parse each slide's images, and will take you to the slide it thinks has a dog in it.
* Go to the slide titled APIs
+ Goes to the first slide with APIs in its title
* Search for "voice recognition"
+ Parses the text of each slide for a matching phrase and goes to that slide.
* Show me a picture of UC Berkeley
+ Uses Bing image search to find the first image result of UC Berkeley
* Zoom in on the Graph
+ Uses Google Cloud Vision to identify an object, and if it matches the query, zooms in on the object.
* Tell me the product of 857 and 458
+ Uses Wolfram Alpha's Short Answer API to answer computation and knowledge based questions
Video: <https://vimeo.com/chanan/calhacks3>
## How we built it
* Built a backend in Python linked to our voice recognition, which we built all of our other features on top of (a rough sketch of the command routing follows)
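
As a rough sketch of how a recognized transcript gets routed to a slide action (the patterns and handler names below are illustrative, not our production code):

```python
# Illustrative command router: a recognized transcript comes in, a slide action goes out.
# Patterns and handler names are assumptions; the real backend also calls Google Cloud Vision,
# Bing image search, and Wolfram Alpha for the richer commands.
import re

class Presentation:
    def __init__(self, num_slides):
        self.current = 1
        self.num_slides = num_slides

    def goto(self, n):
        self.current = max(1, min(self.num_slides, n))
        print(f"Now on slide {self.current}")

def handle_command(text, deck):
    text = text.lower().strip()
    if "next slide" in text:
        deck.goto(deck.current + 1)
    elif "last slide" in text or "previous slide" in text:
        deck.goto(deck.current - 1)
    elif m := re.search(r"go to slide (\d+)", text):
        deck.goto(int(m.group(1)))
    elif m := re.search(r'search for "?(.+?)"?$', text):
        print(f"Searching slide text for: {m.group(1)}")  # full-text search over slide bodies
    else:
        print("Command not recognized")

deck = Presentation(num_slides=12)
handle_command("Next slide", deck)            # -> slide 2
handle_command("Go to slide 7", deck)         # -> slide 7
handle_command('Search for "voice recognition"', deck)
```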
## Challenges we ran into
* Accepting microphone input through Google Chrome (people can have different security settings)
* Refactoring the entire messy, undocumented codebase from last year
## Accomplishments that we're proud of
Getting Presentalk from weekend pet project to something that could actually scale with many users on a server in yet another weekend.
## What we learned
* Sometimes the best APIs are hidden right under your nose. (Web Speech API was released in 2013 and we didn't use it last year. It's awesome!)
* Re-factoring code you don't really remember is difficult.
## What's next for Presentalk
Release to the general public! (Hopefully)
|
## Inspiration
As STEM students, many of us have completed online certification courses on various websites such as Udemy, Codeacademy, Educative, etc. Many classes on these sites provide the user with a unique certificate of completion after passing their course. We wanted to take the authentication of these digital certificates to the next level.
## What it does
Our application functions as a site similar to the ones mentioned earlier; providing users with a plethora of certified online courses, but what sets us apart is our creative use of web3, allowing users to access their certificates directly from the blockchain, guaranteeing their authenticity to the utmost degree.
## How we built it
For our frontend, we created our design in Figma and coded it using the Vue framework. Our backend was done in Python via the Flask framework. The database we used to store users and courses was SQLite. The certificate generation was accomplished in Python via the Pillow library. To convert images into NFTs, we used Verbwire for its easy-to-use minting procedure.
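
The certificate script itself is short; a simplified version is sketched below. The template path, font, and pixel coordinates are placeholders, and the Verbwire minting call is omitted since that happens in a separate step.

```python
# Simplified certificate generator with Pillow. Paths, font, and coordinates are placeholders;
# the real script also stamps a unique ID that is referenced when minting the NFT.
from PIL import Image, ImageDraw, ImageFont

def generate_certificate(student_name, course_title, out_path="certificate.png"):
    template = Image.open("certificate_template.png").convert("RGB")
    draw = ImageDraw.Draw(template)
    name_font = ImageFont.truetype("DejaVuSans-Bold.ttf", 64)
    course_font = ImageFont.truetype("DejaVuSans.ttf", 40)

    # Center the name and course horizontally (anchor="mm" centers text on the point).
    width, height = template.size
    draw.text((width / 2, height * 0.45), student_name, font=name_font, fill="black", anchor="mm")
    draw.text((width / 2, height * 0.60), course_title, font=course_font, fill="gray", anchor="mm")

    template.save(out_path)
    return out_path

generate_certificate("Ada Lovelace", "Intro to Web3 Development")
```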
## Challenges we ran into
We ran into quite a few challenges throughout our project, the first of which was the fact that none of us had any meaningful web3 experience. Luckily for us, Verbwire had a quite straightforward minting process and even generated some of the code for us.
## Accomplishments that we're proud of
Although our end result is not everything we dreamt of 24 hours ago, we are quite proud of what we were able to accomplish. We created quite an appealing website for our application. We created a Python script that generates custom certificates. We created a powerful backend capable of storing data for our users and courses.
## What we learned
For many of us, this was a new and unique collaborative experience in software development. We learned quite a bit on task distribution and optimization as well as key takeaways for creating code that is not only maintainable, but also transferable to other developers during the development process. More technically, we learned how to create simple databases via SQLite, we learned how to automate image generation via Python, and learned the steps of making a unique and appealing front-end design, starting from the prototype all the way to the final product.
## What's next for DiGiDegree
Moving forward, we would like to migrate our database to Postgres to handle higher traffic. We would also like to implement a Redis cache to improve the hit ratio and speed up search times. We would also like to populate our website with more courses and improve our backend security by abstracting away SQL queries to protect us further from SQL injection attacks.
|
winning
|
## Inspiration
What our team noticed is that Large Language Models (or LLMs for short) have difficulty with domain-specific questions. However, a problem with making LLMs useful for domain-specific questions is the sheer amount of data that you have to feed them in order to fine-tune the LLM so that it is accurate. Gathering this amount of information to train bigger models in this manner not only costs significant manpower but can be environmentally detrimental, because training is known to be an energy-intensive process with a staggering carbon footprint. Therefore, we propose a new method of fine-tuning LLMs via a process called Synthetic Tuning in order to vastly improve the efficiency of fine-tuning LLMs for domain-specific questions.
## What it does
Synthetic Tuning works by fine-tuning a larger LLM to take a small sample dataset and generate synthetic data. The synthetic data generated can then be used to fine-tune a smaller LLM. The larger LLM is known as the synthetic model (SM) and is only trained once using the Together.ai API. The synthetic model is given a use case and a sample dataset, which it then uses to generate a large amount of synthetic data. The use case is reused later as the purpose of the variable model. The sample dataset must be large enough that the synthetic model can learn the associations necessary for generating synthetic data. The synthetic data from the SM is then used to fine-tune an untuned smaller LLM for the use case specified during synthetic data generation. This smaller model is known as the variable model (VM) and is fine-tuned for every new use case.
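
To make the flow concrete, here is a hedged sketch of the generation step. It assumes the Together Python SDK's OpenAI-style chat interface and a placeholder model name; the prompt, seed examples, and output format are illustrative, and the subsequent fine-tune of the variable model is a separate job not shown here.

```python
# Sketch of the Synthetic Tuning generation step. Assumptions: Together's OpenAI-style chat API,
# a placeholder model name, a JSONL output file for later fine-tuning, and that the model
# returns valid JSON (a real pipeline would validate and retry).
import json
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

seed_examples = [
    {"question": "What does HDL cholesterol measure?", "answer": "..."},
    {"question": "What is a normal resting heart rate?", "answer": "..."},
]

def generate_synthetic_batch(use_case, n=5):
    prompt = (
        f"You generate training data for the use case: {use_case}.\n"
        f"Here are sample Q/A pairs:\n{json.dumps(seed_examples, indent=2)}\n"
        f"Write {n} new, diverse Q/A pairs in the same JSON list format."
    )
    resp = client.chat.completions.create(
        model="meta-llama/Llama-3-70b-chat-hf",  # placeholder synthetic-model name
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

# Accumulate batches, then write a JSONL file used to fine-tune the smaller variable model.
with open("synthetic_health.jsonl", "w") as f:
    for pair in generate_synthetic_batch("patient health FAQs"):
        f.write(json.dumps(pair) + "\n")
```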
## Challenges we ran into
One of the main challenges that we ran into was diversifying the data that we fed into the synthetic model so that the synthetic model itself would be applicable to a wide range of use cases. One way that we accomplished this was to take in pre-existing data scraped from vast databases available on the web that corresponded to the particular use cases we wanted to test on (Health, Finance, and Consumer Goods).
## Accomplishments that we're proud of
One thing that we are proud of is that we were able to effectively fine-tune Llama 7B to synthesize data for subsequent fine-tuning. We were also able to identify at least 3 use cases that our algorithm would be useful for, which involve medical, financial, and product data. Some of the next steps to improve our algorithm are to create a more powerful synthetic model and improve the tuning data which is passed to each of the variable models.
|
## Background
Automobile accidents happen every minute of every day, and they are one of the leading causes of injury and death in the US. In 2016, an average of 102 people died every day as a result of a car crash. Throughout the years, safety measures like seat belts, airbags, and AI have helped dramatically in reducing fatalities, but not nearly enough.
## Inspiration
We wanted to gain an understanding of these accidents and use a data approach to help solve some problems. We were really inspired by NYC Open Data, so we wanted to explore the dataset of all car collisions since 2013 and potentially learn some cool things.
## What it Does
It lets users view and interact with a visualization of NYC car collisions.
## How I Built it
First, we worked to access the database of motor vehicle collisions, which is accurately accounted for by the NYPD and can be found on the City of New York page. We converted the data from its raw form into a condensed sheet for analysis and into a JSON file to work with D3 and NumPy. Finally, we overlaid files of the locations of accidents on top of the map of NYC.
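
The conversion step is roughly the sketch below. The column names are our assumption about the NYC Open Data export (the real file has many more fields), and the output is a GeoJSON FeatureCollection that D3 can overlay on the base map.

```python
# Rough sketch of converting the collisions CSV export into GeoJSON for D3.
# Column names ("LATITUDE", "LONGITUDE", "DATE") are assumptions about the export format.
import csv
import json

features = []
with open("nypd_motor_vehicle_collisions.csv", newline="") as f:
    for row in csv.DictReader(f):
        lat, lon = row.get("LATITUDE"), row.get("LONGITUDE")
        if not lat or not lon:
            continue  # many rows have no geocoded location; skip them
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [float(lon), float(lat)]},
            "properties": {"date": row.get("DATE", "")},
        })

with open("collisions.geojson", "w") as out:
    json.dump({"type": "FeatureCollection", "features": features}, out)

print(f"Wrote {len(features)} collision points")
```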
## What I Learned & Challenges
It was difficult to parse and find/modify a GeoJSON file for the visualization. We learned a lot about how to use D3 and GeoJSON.
## What's next for Visualizing NY Car Crashes
We anticipate broadening our visualization to include other factors like car types and the reason for the crash. Adding a simulation of accidents over time would also be interesting, as would expanding to other cities and areas.
|
## Inspiration
Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit video game "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb!
## What it does
The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are a "cut the wire" game where the wires must be cut in the correct order, a "press the button" module where different actions must be taken depending on the given text and LED colour, an 8 by 8 "invisible maze" where players must cooperate in order to navigate to the end, and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge".
## How we built it
**The Explosive**
The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, Keypads, Mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D printed plates.
**The Code**
Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation.
Using the Grove LCD RGB Backlight Library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was additionally used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol.
The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module, we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. Using the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This further optimizes the maze hardware, minimizing the required wiring and improving signal communication.
## Challenges we ran into
Participating in the biggest hardware hackathon in Canada, we used many of the various hardware components provided, such as the keypads and OLED displays, which posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt and utilize components that better suited our needs, as well as to be flexible with the hardware provided.
Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Optimizing software and hardware for efficient resource usage was therefore necessary, and remained a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system; to counteract this, we had to come up with the unique solutions mentioned below:
## Accomplishments that we're proud of
During the Makeathon we often faced the issue of buttons creating noise, and oftentimes the noise would disrupt the entire system. To counteract this issue, we had to discover creative solutions that did not use buttons to get around the noise. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module.
## What we learned
* Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules.
* Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development.
* Enhancing overall group performance is achieved by assigning individual tasks.
## What's next for Keep Hacking and Nobody Codes
* Ensure the elimination of any unwanted noises in the wiring between the main board and game modules.
* Expand the range of modules by developing additional games such as "Morse-Code Game," "Memory Game," and others to offer more variety for players.
* Release the game to a wider audience, allowing more people to enjoy and play it.
|
losing
|